I'm trying to fit a binary logistic regression (BLR) model to each column in a data frame, and then predict on new data points. I have a lot of columns, so I cannot identify the columns by name, only by column number. Having reviewed the several examples of a similar nature on this site, I cannot figure out why this does not work.
df <- data.frame(x1 = runif(1000, -10, 10),
                 x2 = runif(1000, -2, 2),
                 x3 = runif(1000, -5, 5),
                 y  = rbinom(1000, size = 1, prob = 0.40))
for (i in 1:(length(df) - 1))
{
  fit <- glm(y ~ df[, i], data = df, family = binomial, na.action = na.exclude)
  new_pts <- data.frame(seq(min(df[, i], na.rm = TRUE), max(df[, i], na.rm = TRUE), len = 200))
  names(new_pts) <- names(df[, i])
  new_pred <- predict(fit, newdata = new_pts, type = "response")
}
The predict() function raises a warning and returns an array 1000 elements long, whereas the test data has only 200 elements:

Warning message:
'newdata' had 200 rows but variables found have 1000 rows
The answer above does a great job. Here is another option for this sort of thing. First we take the data frame from wide to long, next we nest the data by group, then we run a model per group, and lastly we map out the predicted values from the models and unnest the data frame. Plotting the predicted values shows that you get a reasonable result. Note that before we unnest the data, we keep the model within the data frame, so we can extract other information we need as well before unnesting.
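A sketch along those lines (not the original code; it assumes the tidyverse packages dplyr, tidyr, purrr, and ggplot2, plus the df from the question):

library(dplyr)
library(tidyr)
library(purrr)
library(ggplot2)

df_pred <- df %>%
  pivot_longer(-y, names_to = "var", values_to = "val") %>%   # wide to long
  nest(data = -var) %>%                                       # nest by predictor
  mutate(model = map(data, ~ glm(y ~ val, data = .x, family = binomial)),
         pred  = map2(model, data,
                      ~ predict(.x, newdata = .y, type = "response"))) %>%
  select(-model) %>%
  unnest(cols = c(data, pred))                                # back to long form

# predicted probabilities against each predictor
ggplot(df_pred, aes(val, pred)) +
  geom_line() +
  facet_wrap(~ var, scales = "free_x")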
EDIT
Here we will keep the models in the data frame and then pull out extra info.
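For example (again a sketch, under the same assumptions as above):

df_models <- df %>%
  pivot_longer(-y, names_to = "var", values_to = "val") %>%
  nest(data = -var) %>%
  mutate(model = map(data, ~ glm(y ~ val, data = .x, family = binomial)))

df_models   # one row per predictor, with the fitted model kept as a list column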
Now we can pull out the other info we are interested in:
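One way is broom's tidy() and glance(), plus the per-model predictions (a sketch; df_models is the object built above):

library(broom)

df_models <- df_models %>%
  mutate(coefs = map(model, tidy),     # coefficients, std. errors, p-values
         stats = map(model, glance),   # model-level stats: AIC, deviance, ...
         pred  = map2(model, data,
                      ~ predict(.x, newdata = .y, type = "response")))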
Then we just unnest to pull out the values, but keep the rest of the info nested:
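Roughly:

df_models %>%
  unnest(cols = c(data, pred))   # values and predictions, one row per observation

The remaining list columns (model, coefs, stats) are simply carried along nested, so none of that information is lost.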
Another option is to make a function that maps all of the data that we want into a nested list, and then we can pull out the elements we want as we need them:
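A minimal sketch of that idea; fit_one() is a hypothetical helper name, not from the original answer:

# fit one logistic regression and bundle everything we might need later
fit_one <- function(dat) {
  model <- glm(y ~ val, data = dat, family = binomial)
  list(model = model,
       coefs = coef(summary(model)),
       pred  = predict(model, type = "response"))
}

results <- df %>%
  pivot_longer(-y, names_to = "var", values_to = "val") %>%
  split(.$var) %>%   # one data frame per predictor
  map(fit_one)       # nested list: results$x1$model, results$x1$pred, ...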
And if we want to pull out the predicted values:
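For instance:

map(results, "pred")   # list of predicted-probability vectors, one per predictor

# or stacked into a single data frame:
map_dfr(results, ~ tibble(pred = .x$pred), .id = "var")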
For repeated modelling I use a similar approach to the one shown below. I have implemented it with data.table, but it could be rewritten to use the base data.frame (the code would then be more verbose, I guess). In this approach I store all the models in a separate object (below I have provided two versions of the code: one more explanatory, and one more advanced, aiming at a clean output). Of course, you could also write a loop/function that only fits one model per iteration without storing them. From my perspective, it's a good idea to save the models, since you probably will have to investigate them for robustness, etc., and not only predict new values.
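A condensed sketch of that approach (assuming the df from the question; the melt() plus grouped-list-column idiom below stands in for the original explanatory/advanced pair):

library(data.table)

dt <- melt(as.data.table(df), id.vars = "y",
           variable.name = "var", value.name = "val")

# explanatory part: store all models in a separate object, one per predictor
models <- dt[, .(model = list(glm(y ~ val, data = .SD, family = binomial))),
             by = var]

# clean output: predicted probabilities on a 200-point grid per predictor
preds <- dt[, .(val = seq(min(val), max(val), length.out = 200)), by = var]
preds[models, model := i.model, on = "var"]
preds[, pred := predict(model[[1L]], newdata = .SD, type = "response"),
      by = var, .SDcols = "val"]
preds[, model := NULL]
head(preds)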
HINT: Please also have a look at the answer of @AndS., which provides a tidyverse approach. Together with this answer, I think, it makes a nice side-by-side comparison for learning/understanding the data.table and tidyverse approaches.