I have timestamps in one data frame that I am trying to match to the closest timestamps in a second data frame, for the purpose of extracting data from the second data frame. See below for a generic example of my approach:
library(lubridate)

data <- data.frame(
  datetime = ymd_hms(c('2015-04-01 12:23:00 UTC', '2015-04-01 13:49:00 UTC',
                       '2015-04-01 14:06:00 UTC', '2015-04-01 14:49:00 UTC')),
  value = c(1, 2, 3, 4))

reference <- data.frame(
  datetime = ymd_hms(c('2015-04-01 12:00:00 UTC', '2015-04-01 13:00:00 UTC',
                       '2015-04-01 14:00:00 UTC', '2015-04-01 15:00:00 UTC',
                       '2015-04-01 16:00:00 UTC')),
  refvalue = c(5, 6, 7, 8, 9))

# For each row, find the reference row with the smallest absolute time
# difference (apply coerces each row to character, so the datetime has
# to be re-parsed with ymd_hms)
data$refvalue <- apply(data, 1, function(x) {
  differences <- abs(as.numeric(difftime(ymd_hms(x['datetime']), reference$datetime)))
  reference$refvalue[differences == min(differences)]
})
data
# datetime value refvalue
# 1 2015-04-01 12:23:00 1 5
# 2 2015-04-01 13:49:00 2 7
# 3 2015-04-01 14:06:00 3 7
# 4 2015-04-01 14:49:00 4 8
This works fine, but it is very slow because the reference data frame is quite large in my real-world application. Is this code properly vectorized? Is there a faster, more elegant way of performing this operation?
I wondered if this would be able to match a data.table solution for speed, but it's a base-R vectorized solution which should outperform your apply version. And since it never actually calculates a distance, it might even be faster than the data.table "nearest" approach. The idea is to build a set of "mid-breaks" lying halfway between consecutive reference times (by adding half of each interval's length to the interval's starting point) and then use the findInterval function to place each timestamp among them. That yields a suitable index into the rows of the reference dataset, and the refvalue can then be "transferred" to the data object.
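A minimal sketch of the mid-breaks idea (the answer does not include its code, so variable names like mid_breaks and idx here are illustrative; it assumes the data and reference frames from the question, with reference$datetime sorted):

ref_secs <- as.numeric(reference$datetime)

# Breakpoints halfway between consecutive reference times
mid_breaks <- ref_secs[-length(ref_secs)] + diff(ref_secs) / 2

# findInterval counts how many mid-breaks lie at or below each query
# time; adding 1 turns that count into the row index of the nearest
# reference time (a time exactly on a midpoint rolls to the later row)
idx <- findInterval(as.numeric(data$datetime), mid_breaks) + 1
data$refvalue <- reference$refvalue[idx]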
You can try data.table's rolling join using the roll = "nearest" option.
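For completeness, a sketch of that rolling join (the dt_-prefixed names are mine, not from the answer):

library(data.table)

dt_data <- as.data.table(data)
dt_ref  <- as.data.table(reference)

# The join key is the timestamp column in both tables
setkey(dt_data, datetime)
setkey(dt_ref, datetime)

# For each row of dt_data, roll = "nearest" picks the dt_ref row whose
# keyed datetime is closest, carrying refvalue along with it
result <- dt_ref[dt_data, roll = "nearest"]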