I was unhappy with the time dplyr and data.table were taking to create a new variable on my data.frame, so I decided to compare methods.
To my surprise, reassigning the results of dplyr::mutate() to a new data.frame seems to be faster than not doing so.
Why is this happening?
library(data.table)
library(tidyverse)
dt <- fread(".... data.csv") #load 200MB datafile
dt1 <- copy(dt)
dt2 <- copy(dt)
dt3 <- copy(dt)
a <- Sys.time()
dt1[, MONTH := month(as.Date(DATE))]
b <- Sys.time(); datatabletook <- b-a
c <- Sys.time()
dt_dplyr <- dt2 %>%
  mutate(MONTH = month(as.Date(DATE)))
d <- Sys.time(); dplyr_reassign_took <- d - c
e <- Sys.time()
dt3 %>%
  mutate(MONTH = month(as.Date(DATE)))
f <- Sys.time(); dplyrtook <- f - e
datatabletook       = 17 sec
dplyrtook           = 47 sec
dplyr_reassign_took = 17 sec
There are a couple ways to benchmark with base R:
.t0 <- Sys.time()
...
.t1 <- Sys.time()
.t1 - .t0
# or
system.time({
...
})
With the `Sys.time` way, you're sending each line to the console and may see some return value printed for each line, as @Axeman suggested. With `{...}`, there is only one return value (the last result inside the braces), and `system.time` will suppress it from printing.
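As a minimal sketch of that difference (using a small hypothetical data frame and base `transform()` rather than the original 200MB file and `mutate()`):

```r
df <- data.frame(DATE = as.character(Sys.Date() + 0:4))

# Typed at the console, an unassigned expression is auto-printed:
transform(df, MONTH = as.integer(format(as.Date(DATE), "%m")))

# Inside system.time({...}), only the last value is returned, and
# system.time() discards it invisibly -- only the timing is printed:
system.time({
  transform(df, MONTH = as.integer(format(as.Date(DATE), "%m")))
})

# Assigning the result (as in the dplyr_reassign case) also suppresses printing:
out <- transform(df, MONTH = as.integer(format(as.Date(DATE), "%m")))
```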
If the printing is costly enough but is not part of what you want to measure, it can make a difference.
There are good reasons to prefer `system.time` over `Sys.time` for benchmarking; from @MattDowle's comment: i) it runs a `gc()` first (excluded from the timing) to isolate the measurement from random garbage collections, and ii) it reports `user` and `sys` CPU time as well as `elapsed` wall-clock time.
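For example (a sketch; the exact numbers depend on your machine), `system.time()` returns all three components as a named `proc_time` vector:

```r
res <- system.time({
  s <- sum(sqrt(seq_len(1e6)))
})

res[["user.self"]]  # CPU time spent executing R code
res[["sys.self"]]   # CPU time spent in system calls on R's behalf
res[["elapsed"]]    # wall-clock time
```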
The `Sys.time()` way will be affected by reading your email in Chrome or using Excel while the test runs; the `system.time()` way won't be, so long as you look at the `user` and `sys` parts of the result.
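A quick way to see why (a sketch): `Sys.sleep()` consumes wall-clock time but almost no CPU, much like time the process spends waiting while you do something else, so `elapsed` grows while `user` + `sys` stay near zero:

```r
t <- system.time(Sys.sleep(0.5))

t[["elapsed"]]                      # roughly 0.5 seconds of wall-clock time
t[["user.self"]] + t[["sys.self"]]  # close to 0: the CPU was mostly idle
```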