Fast alternative to split in R

Posted 2020-02-13 09:38

Question:

I'm partitioning a data frame with split() in order to use parLapply() to call a function on each partition in parallel. The data frame has 1.3 million rows and 20 columns. I'm splitting/partitioning by two columns, both character type. It looks like there are ~47K unique IDs and ~12K unique codes, but not every pairing of ID and code is present. The resulting number of partitions is ~250K. Here is the split() line:

 system.time(pop_part <- split(pop, list(pop$ID, pop$code)))

The partitions will then be fed into parLapply() as follows:

library(parallel)
cl <- makeCluster(detectCores())
system.time(par_pop <- parLapply(cl, pop_part, func))
stopCluster(cl)

I've let the split() code alone run for almost an hour and it doesn't complete. I can split by ID alone, which takes ~10 minutes. Additionally, RStudio and the worker threads are consuming ~6GB of RAM.

The reason I know the resulting number of partitions is that I have equivalent code in Pentaho Data Integration (PDI) that runs in 30 seconds (for the entire program, not just the "split" code). I'm not hoping for that kind of performance with R, but something that completes in 10-15 minutes in the worst case.

The main question: Is there a better alternative to split()? I've also tried ddply() with .parallel = TRUE, but it also ran for over an hour and never completed.

Answer 1:

Split indexes into pop

idx <- split(seq_len(nrow(pop)), list(pop$ID, pop$code))

split() is not slow, e.g.,

> system.time(split(seq_len(1300000), sample(250000, 1300000, TRUE)))
   user  system elapsed 
  1.056   0.000   1.058 

so if yours is slow, I'd guess there's some aspect of your data that slows things down, e.g., ID and code are both factors with many levels, so their complete interaction, rather than just the level combinations appearing in your data set, is calculated:

> length(split(1:10, list(factor(1:10), factor(10:1))))
[1] 100
> length(split(1:10, paste(letters[1:10], letters[1:10], sep="-")))
[1] 10

or perhaps you're running out of memory.
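
If that's what's happening, one possible fix (a sketch, untested on your data) is to drop the unused level combinations when splitting, or to build the grouping key only from the ID/code pairs that actually occur:

## drop = TRUE keeps only the ID/code combinations present in the data
idx <- split(seq_len(nrow(pop)), list(pop$ID, pop$code), drop = TRUE)

## or paste the two columns into a single key of observed pairs
idx <- split(seq_len(nrow(pop)), paste(pop$ID, pop$code, sep = "."))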

Use mclapply() rather than parLapply() if you're using processes on a non-Windows machine (which I guess is the case, since you ask for detectCores()).

library(parallel)  # for mclapply()
par_pop <- mclapply(idx, function(i, pop, fun) fun(pop[i, ]), pop, func)

Conceptually it sounds like you're really aiming for pvec (distribute a vectorized calculation over processors) rather than mclapply (iterate over individual rows in your data frame).
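
For illustration, a minimal pvec() sketch (using sqrt() as a stand-in for a vectorized computation; it is not your func):

library(parallel)
## pvec() chops the vector into chunks, runs the function on each chunk
## in a forked worker, and concatenates the results
res <- pvec(seq_len(1e6), sqrt)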

Also, and really as the initial step, consider identifying the bottlenecks in func; the data is large but not that big, so perhaps parallel evaluation is not needed -- maybe you've written PDI code instead of R code? Pay attention to data types in the data frame, e.g., factor versus character. It's not unusual to get a 100x speed-up between poorly written and efficient R code, whereas parallel evaluation is at best proportional to the number of cores.
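
One way to look for those bottlenecks (a sketch; it assumes the idx list from above and profiles func on just the first group):

## profile func on a single partition to see where the time goes
Rprof("func.prof")
res <- func(pop[idx[[1]], ])
Rprof(NULL)
summaryRprof("func.prof")$by.self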



Answer 2:

split(x, f) is slow if x is a factor AND f contains a lot of different elements.

So, this code is fast:

system.time(split(seq_len(1300000), sample(250000, 1300000, TRUE)))

But, this is very slow:

system.time(split(factor(seq_len(1300000)), sample(250000, 1300000, TRUE)))

And this is fast again, because there are only 25 groups:

system.time(split(factor(seq_len(1300000)), sample(25, 1300000, TRUE)))
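
A possible workaround when x really is a large factor (a sketch; it assumes downstream code doesn't need each group to stay a factor) is to split the underlying character values, or just the indexes as in Answer 1, so the huge levels attribute isn't copied into every group:

x <- factor(seq_len(1300000))
f <- sample(250000, 1300000, TRUE)

## split the character values instead of the factor itself
chunks <- split(as.character(x), f)

## or split only the positions and subset later, on demand
idx <- split(seq_along(x), f)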