I have a very large list X and a vectorized function f. I want to calculate f(X), but this will take a long time if I do it with a single core. I have (access to) a 48-core server. What is the easiest way to parallelize the calculation of f(X)? The following is not the right answer:
library(foreach)
library(doMC)
registerDoMC()
foreach(x=X, .combine=c) %dopar% f(x)
The above code will indeed parallelize the calculation of f(X), but it will do so by applying f separately to every element of X. This ignores the vectorized nature of f and will probably make things slower as a result, not faster. Rather than applying f elementwise to X, I want to split X into reasonably-sized chunks and apply f to those.
So, should I just manually split X into 48 equal-sized sublists, apply f to each in parallel, and then manually put together the result? Or is there a package designed for this?
In case anyone is wondering, my specific use case is here.
Here's my implementation. It's a function chunkapply that takes a vectorized function, a list of arguments that should be vectorized, and a list of arguments that should not be vectorized (i.e. constants), and returns the same result as calling the function on the arguments directly, except that the result is calculated in parallel. For a function f, vector arguments v1, v2, v3, and scalar arguments s1, s2, calling f(v1, v2, v3, s1, s2) directly and calling chunkapply(f, list(v1, v2, v3), list(s1, s2)) should return identical results.

Since it is impossible for the chunkapply function to know which arguments of f are vectorized and which are not, it is up to you to specify when you call it, or else you will get the wrong results. You should generally name your arguments to ensure that they get bound correctly.

Here are some examples showing that chunkapply(f, list(x)) produces identical results to f(x). I have set max.chunk.size extremely small to ensure that the chunking algorithm is actually used.

If anyone has a better name than "chunkapply", please suggest it.
Edit: As another answer points out, there is a function called pvec in the multicore package that has very similar functionality to what I have written. For simple cases, you should use that, and you should vote up Jonas Rauch's answer for it. However, my function is a bit more general, so if any of the following apply to you, you might want to consider using my function instead: pvec only vectorizes over a single argument, so you couldn't easily implement parallel vectorized addition with pvec, for example. My function allows you to specify arbitrary arguments.

Map-Reduce might be what you're looking for; it's been ported to R.
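To make the single-argument limitation of pvec mentioned in the edit above concrete: adding two vectors in parallel requires a workaround, such as vectorizing over the indices instead (a sketch; pvec now lives in the base parallel package, and forking is not available on Windows):

```r
library(parallel)

# pvec vectorizes over its first argument only, so to add two vectors
# in parallel we chunk over the index vector and look both inputs up
# inside the worker function.
x <- 1:100000
y <- 100000:1
res <- pvec(seq_along(x), function(i) x[i] + y[i], mc.cores = 2)
stopifnot(identical(res, x + y))
```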
How about something like this? R will take advantage of all the available memory and multicore will parallelize over all available cores.

The itertools package was designed to address this kind of problem. In this case, I would use
isplitVector:

For this example, pvec is undoubtedly faster and simpler, but this can be used on Windows with the doParallel package, for example.

Although this is an older question this might be interesting for everyone who stumbled upon this via Google (like me): Have a look at the
pvec function in the multicore package. I think it does exactly what you want.
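A minimal usage sketch of pvec (originally in multicore, now part of the base parallel package):

```r
library(parallel)

# pvec splits the input vector into one chunk per core, applies the
# vectorized function to each chunk, and concatenates the results --
# the chunking strategy the question asks for. It relies on forking,
# so it does not run in parallel on Windows.
x <- runif(1e6)
res <- pvec(x, sqrt, mc.cores = 2)
stopifnot(identical(res, sqrt(x)))
```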