Question:
I've got a piece of aggregation code that works well enough but runs a bit slow against a data frame with 10e6 rows. I'm not that experienced in R, so apologies for my cringe-worthy code!
I just want to do a basic roll-up and sum of values for a common key...
e.g. go from...
key val
1 a 5
2 b 7
3 a 6
to...
key val
1 a 11
2 b 7
The best I can manage is...
# find the distinct keys, then sum val for each key via repeated subsetting
keys = unique(inp$key)
vals = sapply(keys, function(x) { sum(inp[inp$key==x,]$val) })
out = data.frame(key=keys, val=vals)
I have a gut feeling that the inp[inp$key==x,]
subsetting is not the best way, since it rescans every row of the data frame once per key. Is there an obvious speed-up I'm missing? I can do it in Hadoop (since the 10e6 dataset is actually already a roll-up from a 2e9-row dataset), but I'm trying to improve my R.
Cheers,
Mat
Answer 1:
Another option using tapply:
dat <- data.frame(key = c('a', 'b', 'a'), val = c(5,7,6))
> with(dat, tapply(val, key, FUN = sum))
a b
11 7
My tests indicate this is the fastest method for this particular exercise; obviously your mileage may vary:
fn.tapply <- function(daters) with(daters, tapply(val, key, FUN = sum))
fn.aggregate <- function(daters) aggregate(val~key, sum, data = daters)
fn.ddply <- function(daters) ddply(daters, .(key), summarize, val = sum(val))
library(rbenchmark)
library(plyr)  # for ddply
benchmark(fn.tapply(dat), fn.aggregate(dat), fn.ddply(dat)
, columns = c("test", "elapsed", "relative")
, order = "relative"
, replications = 100
)
test elapsed relative
1 fn.tapply(dat) 0.03 1.000000
2 fn.aggregate(dat) 0.20 6.666667
3 fn.ddply(dat) 0.30 10.000000
Note that converting the tapply solution's result into a data.frame, for a true apples-to-apples comparison with the first two, narrows its lead by roughly 40%.
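That conversion is straightforward since tapply returns a named vector. A minimal sketch, assuming the same dat as above (the fn.tapply.df name is mine, not from the benchmark):
fn.tapply.df <- function(daters) {
  res <- with(daters, tapply(val, key, FUN = sum))  # named numeric vector
  data.frame(key = names(res), val = unname(res))   # back to the OP's desired shape
}
fn.tapply.df(dat)
#   key val
# 1   a  11
# 2   b   7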
Using a 1M-row dataset, as indicated in the comments, does seem to change things a bit:
dat2 <- data.frame(key = rep(letters[1:5], each = 200000), val = runif(1e6))
> benchmark(fn.tapply(dat2), fn.aggregate(dat2), fn.ddply(dat2)
+ , columns = c("test", "elapsed", "relative")
+ , order = "relative"
+ , replications = 100
+ )
test elapsed relative
1 fn.tapply(dat2) 39.114 1.000000
3 fn.ddply(dat2) 62.178 1.589661
2 fn.aggregate(dat2) 157.463 4.025745
Answer 2:
You can use aggregate:
> d
key val
1 a 5
2 b 7
3 a 6
> aggregate(val~key, sum, data=d)
key val
1 a 11
2 b 7
You can also use ddply from Hadley's plyr package (after library(plyr)):
> ddply(d, .(key), summarize, val=sum(val))
key val
1 a 11
2 b 7
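If you prefer to avoid the formula interface, the same aggregation can be written with the default method of aggregate, which takes the grouping variables as a list (note the value column then comes back named x):
aggregate(d$val, by = list(key = d$key), FUN = sum)
#   key  x
# 1   a 11
# 2   b  7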
Answer 3:
Using sapply and split is another option. I'll extend the data and benchmarks from @Chase's excellent answer.
fn.tapply <- function(daters) with(daters, tapply(val, key, FUN = sum))
fn.split <- function(daters) with(daters, sapply(split(val, key), sum))
str(dat)  # dat here is the 1M-row dataset, with key stored as a factor
# 'data.frame': 1000000 obs. of 2 variables:
# $ key: Factor w/ 5 levels "a","b","c","d",..: 1 1 1 1 1 1 1 1 1 1 ...
# $ val: num 0.186 0.875 0.42 0.294 0.878 ...
benchmark(fn.tapply(dat), fn.split(dat)
, columns = c("test", "elapsed", "relative")
, order = "relative"
, replications = 100
)
# test elapsed relative
# 2 fn.split(dat) 4.106 1.00000
# 1 fn.tapply(dat) 69.982 17.04384
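A type-safe variant of the same idea, assuming the same data, swaps sapply for vapply so the result is guaranteed to be a plain numeric vector (the fn.vsplit name is mine):
fn.vsplit <- function(daters)
  with(daters, vapply(split(val, key), sum, numeric(1)))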
Answer 4:
Technically you said "data frame", but another much faster option (over 22x faster) is to use the data.table package.
Here's a benchmark for 10 reps of 10e6 rows.
library(rbenchmark)
library(plyr)
library(data.table)
key <- sample(letters, 10e6, replace = TRUE)
val <- sample.int(10, 10e6, replace = TRUE)
big_df <- data.frame(key, val)
rm(key)
rm(val)
big_dtable <- data.table(big_df)
setkey(big_dtable, key)  # sort and index by key so grouped operations are fast
fn.data_table <- function(mydata) mydata[,list(sum = sum(val)), by = 'key']
fn.tapply <- function(mydata) with(mydata, tapply(val, key, FUN = sum))
fn.aggregate <- function(mydata) aggregate(val~key, sum, data = mydata)
fn.ddply <- function(mydata) ddply(mydata, .(key), summarize, val = sum(val))
And now the benchmark....
benchmark(fn.data_table(big_dtable)
, fn.tapply(big_df)
, fn.aggregate(big_df)
, fn.ddply(big_df)
, fn.ddply(big_dtable)
, columns = c("test","elapsed","relative")
, order = "relative"
, replications = 10
)
And the result....
test elapsed relative
1 fn.data_table(big_dtable) 1.98 1.00000
5 fn.ddply(big_dtable) 37.59 18.98485
4 fn.ddply(big_df) 44.36 22.40404
2 fn.tapply(big_df) 51.03 25.77273
3 fn.aggregate(big_df) 238.52 120.46465
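Note that fn.data_table already returns a two-column data.table, and a data.table is itself a data.frame, so the result matches the shape the question asked for. A minimal usage sketch, assuming the objects above:
res <- fn.data_table(big_dtable)  # columns: key, sum
class(res)
# [1] "data.table" "data.frame"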