table1 <- data.frame(user_id = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2),
                     product_id = c(14, 24, 38, 40, 66, 2, 19, 30, 71, 98, 7, 16),
                     first_order = c(1, 2, 1, 4, 5, 3, 2, 4, 2, 4, 2, 3),
                     last_order = c(4, 7, 5, 8, 8, 3, 4, 7, 5, 9, 4, 5))

table2 <- data.frame(user_id = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2),
                     order_number = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 1, 2, 3, 4, 5, 6),
                     days_cumsum = c(0, 7, 15, 26, 34, 43, 53, 59, 66, 74, 82, 91, 5, 11, 17, 24, 29, 35))
I want to add a new feature to table1 using table2. The new feature is the order interval of each product for each user.
For example, look at the first row of table1: it has user_id == 1, product_id == 14, first_order == 1 and last_order == 4. That means user 1 ordered product 14 for the first time at order 1 and for the last time at order 4, and those order numbers can be looked up in table2. The new feature is the interval between the first order and the last order, which can be derived in table2 from "days_cumsum", the cumulative sum of days since the prior order. The first row's new feature value is 26 (= 26 - 0).
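For the first row this is just the following lookup (a base R illustration of the arithmetic):

# user 1: days_cumsum at last_order (4) minus days_cumsum at first_order (1)
d <- table2[table2$user_id == 1, ]
d$days_cumsum[d$order_number == 4] - d$days_cumsum[d$order_number == 1]
# 26 - 0 = 26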
I think it could be done with a join, but I can't use a join because each table is actually very big.
So I'm using the function below inside a for loop:
library(dplyr)

f <- function(i) {
  # keep the rows of table2 for this user's first and last order numbers
  a <- table2 %>%
    filter(user_id == table1[i, 1]) %>%                                   # table1[i, 1] = user_id
    filter(order_number == table1[i, 3] | order_number == table1[i, 4])   # first_order / last_order
  # two rows: interval = difference of days_cumsum; one row: first_order == last_order
  ifelse(nrow(a) == 2, a[2, 3] - a[1, 3], 999999)
}
It calculates each new feature value row by row, but it is very slow and requires a lot of computation. I often run into this kind of problem (building a new feature from two tables), and every time I have difficulties.
Is there a better way to do this? Thanks in advance for your help.
Can you share the runtime/computation time comparisons for the join and the version using your loop?

Below is a solution using joins.
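A sketch of what that join approach could look like with dplyr; the helper columns days_first and days_last and the new column interval are illustrative names, not required by the problem:

library(dplyr)

result <- table1 %>%
  # attach days_cumsum at each product's first order
  left_join(table2, by = c("user_id", "first_order" = "order_number")) %>%
  rename(days_first = days_cumsum) %>%
  # attach days_cumsum at each product's last order
  left_join(table2, by = c("user_id", "last_order" = "order_number")) %>%
  rename(days_last = days_cumsum) %>%
  # interval in days between the first and last order of the product
  mutate(interval = days_last - days_first) %>%
  select(user_id, product_id, first_order, last_order, interval)

result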
which gives:
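(Expected output of the sketch above, computed from the sample data.)

   user_id product_id first_order last_order interval
1        1         14           1          4       26
2        1         24           2          7       46
3        1         38           1          5       34
4        1         40           4          8       33
5        1         66           5          8       25
6        1          2           3          3        0
7        1         19           2          4       19
8        1         30           4          7       27
9        1         71           2          5       27
10       1         98           4          9       40
11       2          7           2          4       13
12       2         16           3          5       12

Note that when first_order == last_order (product 2) the join gives 0, whereas the loop returns the 999999 sentinel.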
For comparison, here are some solutions using data.table.

A "piped-like" version without renaming:
And a version using reshaping, so that only one merge is needed:
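One way to sketch that idea: melt table1 so first_order and last_order become a single order_number column, merge once with table2, and then take the difference of days_cumsum within each user/product pair (interval is again an illustrative name):

library(data.table)
setDT(table1)
setDT(table2)

# long format: one row per (user, product, first/last order)
long <- melt(table1, id.vars = c("user_id", "product_id"),
             measure.vars = c("first_order", "last_order"),
             variable.name = "which", value.name = "order_number")

# single merge to attach days_cumsum, then diff within each pair
long <- merge(long, table2, by = c("user_id", "order_number"))
intervals <- long[order(user_id, product_id, which),
                  .(interval = diff(days_cumsum)),
                  by = .(user_id, product_id)]

This returns one row per user_id/product_id pair with the new interval value.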