My training dataset has about 200,000 records and 500 features (sales data from a retail org). Most of the features are 0/1 and are stored as a sparse matrix.
The goal is to predict the probability of purchase for about 200 products, so I need to use the same 500 features to predict the purchase probability for each of the 200 products. Since glmnet is a natural choice for model creation, I thought about fitting the 200 glmnet models in parallel (since all 200 models are independent), but I am stuck using foreach. The code I executed was:
foreach(i = 1:ncol(target)) %dopar% {
  assign(model[i], cv.glmnet(x, target[, i], family = "binomial", alpha = 0,
                             type.measure = "auc", grouped = FALSE,
                             standardize = FALSE, parallel = TRUE))
}
model is a list of the 200 model names under which I want to store the respective models.
The following code works, but it doesn't exploit the parallel structure and takes about a day to finish!
for (i in 1:ncol(target)) {
  assign(model[i], cv.glmnet(x, target[, i], family = "binomial", alpha = 0,
                             type.measure = "auc", grouped = FALSE,
                             standardize = FALSE, parallel = TRUE))
}
Can someone show me how to exploit the parallel structure in this case?
Stumbled upon this old thread and thought it would be useful to mention that, with the future framework, it is possible to do nested and parallel foreach() calls. For instance, assume you have three local machines (with SSH access) and you want to run four cores on each. The "outer" foreach loop will iterate over the targets so that each iteration is processed by a separate machine, and each iteration will in turn process cv.glmnet() using four workers on whichever machine it ends up on. (Of course, if you only have access to a single machine, then it makes little sense to do nested parallel processing. In such cases, you can use a single level of parallelization:
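A sketch of such a nested setup, assuming the doFuture package as the foreach backend; the machine names are placeholders for your own hosts:

```r
library(doFuture)
library(glmnet)
registerDoFuture()  # use futures as the foreach backend

# Outer level: one cluster worker per machine (via SSH);
# inner level: four local workers on each machine.
plan(list(
  tweak(cluster, workers = c("machine1", "machine2", "machine3")),
  tweak(multiprocess, workers = 4L)
))

models <- foreach(i = seq_len(ncol(target)), .packages = "glmnet") %dopar% {
  # cv.glmnet()'s internal foreach loop over folds runs on the inner plan
  cv.glmnet(x, target[, i], family = "binomial", alpha = 0,
            type.measure = "auc", grouped = FALSE,
            standardize = FALSE, parallel = TRUE)
}
```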
either register a parallel backend to parallelize the cv.glmnet() call, or use plan(multiprocess, workers = 4L) to parallelize over targets.)
In order to execute cv.glmnet in parallel, you have to specify the parallel=TRUE option and register a foreach parallel backend. This allows you to choose the parallel backend that works best for your computing environment. The cv.glmnet man page documents the "parallel" argument: when it is TRUE, the cross-validation folds are fit via a parallel foreach loop, and a backend must be registered beforehand.
Here's an example using the doParallel package which works on Windows, Mac OS X, and Linux:
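A minimal sketch, reusing the question's x and one column of target, with a four-worker backend registered:

```r
library(doParallel)
library(glmnet)
registerDoParallel(4)  # register a foreach backend with four workers

# With parallel=TRUE, the cross-validation folds are fit in parallel
cvfit <- cv.glmnet(x, target[, 1], family = "binomial", alpha = 0,
                   type.measure = "auc", grouped = FALSE,
                   standardize = FALSE, parallel = TRUE)
```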
This call to cv.glmnet will execute in parallel using four workers. On Linux and Mac OS X, it will execute the tasks using "mclapply", while on Windows it will use "clusterApplyLB".
Nested parallelism gets tricky, and may not help a lot with only 4 workers. I would try using a normal for loop around cv.glmnet (as in your second example) with a parallel backend registered and see what the performance is before adding another level of parallelism.
Also note that the assignment to "model" in your first example isn't going to work when you register a parallel backend. When running in parallel, side-effects generally get thrown away, as with most parallel programming packages.
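Instead of relying on assign() side-effects, return the fitted model from each iteration and let foreach collect the results into a list; a sketch using the question's variables:

```r
library(foreach)
library(glmnet)

# foreach returns a list of the values produced by each iteration,
# so no assign() is needed inside the loop body
models <- foreach(i = seq_len(ncol(target)), .packages = "glmnet") %dopar% {
  cv.glmnet(x, target[, i], family = "binomial", alpha = 0,
            type.measure = "auc", grouped = FALSE, standardize = FALSE)
}
names(models) <- model  # reuse the question's vector of model names
```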