I am trying to find a more efficient way to count, for each case, the number of cases that are open as of that case's creation time. A case is "open" between its creation date/time stamp and its censor date/time stamp. You can copy-paste the code below to run a simple working example:
# Create the date/time stamps for our example: start-of-year stamps (case boundaries)
# and mid-year stamps (check points) for 2000-2011
yr <- as.POSIXct(sprintf("%d-01-01 00:00:00", 2000:2011), format="%Y-%m-%d %H:%M:%S", tz="UTC");
mid <- as.POSIXct(sprintf("%d-06-01 00:00:00", 2000:2011), format="%Y-%m-%d %H:%M:%S", tz="UTC");
# data.table is needed for data.table() below (and for the foverlaps() comparison later), so load it up front
library(data.table);
# Create a table of created/censored date/time stamp pairs; each case is "open" between its two stamps
# yr[y - 1999] is the January 1 stamp of year y
comparison_table <- data.table(id = 1:10,
created = yr[c(2000, 2002, 2004, 2006, 2008, 2010, 2000, 2006, 2003, 2001) - 1999],
censored = yr[c(2001, 2003, 2005, 2007, 2009, 2011, 2005, 2010, 2008, 2004) - 1999]);
# Create a table of the creation date/time stamps at which we want to count the open cases (one mid-year stamp per year)
check_table <- data.table(id = 1:12, creation = mid);
# I use the dplyr package because its group_by() + summarize() functions make this operation simple
library(dplyr);
# Group by id so that summarize() computes one count per check date/time stamp
check_table_grouped <- group_by(check_table, id);
# For each id, count the comparison cases whose created stamp is strictly before that id's creation stamp and whose censored stamp is strictly after it (i.e., the cases open at that moment)
# EDIT: Also added timing to compare with method below
system.time(check_table_summary <- summarize(check_table_grouped, other_open_values_at_creation_count = sum((comparison_table$created < creation & comparison_table$censored > creation))));
# Result is as desired
check_table_summary;
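The same counts can also be computed with a plain loop over check times; this is an equivalent form I'm showing only to make the nested full scan explicit (the variable name is just illustrative):
# For each check time, scan every row of comparison_table (this is the quadratic pattern)
open_counts_loop <- vapply(seq_len(nrow(check_table)), function(i) {
  sum(comparison_table$created < check_table$creation[i] & comparison_table$censored > check_table$creation[i])
}, integer(1));
open_counts_loop; # same counts as check_table_summary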
# EDIT: Added @David-arenburg's foverlaps() solution with timing (data.table is already loaded above)
# Duplicate the creation stamp so each check time becomes a zero-width interval [creation, creation2], as foverlaps() matches intervals
setDT(check_table)[, creation2 := creation];
# foverlaps() requires its lookup table to be keyed on the interval columns
setkey(comparison_table, created, censored);
# For each check interval, find all overlapping [created, censored] intervals, then count matches per check id; an unmatched check time yields a single NA row, so sum(!is.na(id)) correctly returns 0 for it
# Note: foverlaps() treats intervals as closed, unlike the strict </> comparisons above; results are identical here because no check time ever equals a created/censored stamp
system.time(foverlaps_table <- foverlaps(check_table, comparison_table, by.x = c("creation", "creation2"))[, sum(!is.na(id)), by = i.id]);
# Same results as above
foverlaps_table;
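As a quick sanity check that the two methods really agree (V1 is the unnamed count column foverlaps produces above; I compare after ordering by id):
identical(as.integer(check_table_summary$other_open_values_at_creation_count), foverlaps_table[order(i.id), as.integer(V1)]); # should print TRUE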
This approach works fine for small data sets like the one in this example. However, even though I'm using vectorized operations, the computation time grows quadratically: each of the n check times is compared against every one of the n comparison rows, so the work is O(n^2). At nrow = 10,000 the time is around 14 s; at nrow = 100,000 it is over 20 minutes. My actual nrow is ~1,000,000.
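One sub-quadratic direction I've started sketching (this assumes every case's censored stamp comes after its created stamp): the number of cases open at time t equals (# created strictly before t) minus (# censored at or before t), and both counts can be read off sorted vectors with findInterval(), for O(n log n) work overall:
# Sorted event times; as.numeric() makes the POSIXct inputs explicit doubles for findInterval()
created_sorted <- sort(as.numeric(comparison_table$created));
censored_sorted <- sort(as.numeric(comparison_table$censored));
check_times <- as.numeric(check_table$creation);
# left.open = TRUE counts values strictly below each check time; the default counts values at or below it
open_counts_fast <- findInterval(check_times, created_sorted, left.open = TRUE) - findInterval(check_times, censored_sorted);
open_counts_fast; # should match the counts above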
Is there a more efficient way to do this calculation? I'm currently looking into multicore options, but even those would only divide the execution time by the number of cores. Your help is appreciated. Thanks!
EDIT: Added @David-arenburg's data.table::foverlaps solution, which also works and is faster for nrow < 1,000. However, it is slower than the summarize solution for larger row counts: at 10,000 rows it took twice as long, and at 50,000 rows I gave up waiting after it ran 10x as long. Interestingly, the foverlaps solution never seems to trigger automatic garbage collection, so it sits at max RAM (64 GB on my system), whereas the summarize solution periodically triggers automatic garbage collection and never exceeds ~40 GB. I'm unsure whether this is related to the speed difference.
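For anyone who wants to reproduce these timings at scale, here is a quick sketch for generating a large random test set (the uniform lifetimes are an arbitrary assumption; any distribution with censored after created should do):
# n random cases created over a ~10-year window, each staying open for up to ~2 years
n <- 1e5;
epoch <- as.POSIXct("2000-01-01 00:00:00", format="%Y-%m-%d %H:%M:%S", tz="UTC");
big_created <- epoch + runif(n, 0, 10 * 365 * 86400);
big_comparison <- data.table(id = seq_len(n), created = big_created, censored = big_created + runif(n, 0, 2 * 365 * 86400));
big_check <- data.table(id = seq_len(n), creation = epoch + runif(n, 0, 12 * 365 * 86400));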
FINAL EDIT: I have re-written the question in a way that makes it much easier for respondents to generate large tables with suitable created/censored dateTimes. I've also simplified and explained the problem more clearly, making it clear that the lookup table is very large (violating the data.table::foverlaps assumptions). I've even built in a timing comparison to make testing with large cases super simple! Details here: Efficient method for counting open cases at time of each case's submission in large data set
Thanks again for your help everyone! :)