Hi all, I'm new to R.
I have two panel data files, each with columns "id", "date" and "ret".
File A has a lot more data than file B, but I'm primarily working with the file B data.
The combination of "id" and "date" is a unique identifier.
Is there an elegant way, for each (id, date) in B, to look up the past 10 days of returns from file A and store the result back into B?
My naive way of doing it is to loop over all rows in B:

# nrow(B), not length(B): length() on a data frame counts its columns
for (i in 1:nrow(B)) {
  B$past10d[i] <- prod(1 + A$ret[which(A$id == B$id[i] &
                                       A$date > B$date[i] - 10 &
                                       A$date < B$date[i])]) - 1
}

but the loop takes forever.
Really appreciate your thoughts.
Thank you very much.
Given that you're having memory issues, perhaps paring down A first might help. First, get rid of the extraneous ids.
Even a fully reduced A may still want to grab more memory than you have; it's not possible to avoid storing some variables. Nevertheless, we can hopefully get rid of a bunch of it by lopping off every date below our absolute minimum and above our absolute maximum.
Of course, by not qualifying this by id we haven't got the smallest version of A possible, but hopefully it's enough smaller.
Now run the code I first proposed and see if you still have a memory error.
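A minimal sketch of that paring-down step, assuming A and B are data frames with "id" and "date" columns and dates that support arithmetic (numeric day counts or Date):

A <- A[A$id %in% unique(B$id), ]          # drop ids that never occur in B

# Lop off every date more than 10 days before B's earliest date,
# or on/after B's latest date -- no row outside that window can
# ever fall inside a 10-day look-back for any row of B
A <- A[A$date > min(B$date) - 10 & A$date < max(B$date), ]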
I think the key is to vectorize and use the %in% operator to subset data frame A. And, I know, prices are not random numbers, but I didn't want to code a random walk... I created a stock-date index using paste, but I'm sure you could use the index from pdata.frame in the plm library, which is the best I've found for panel data.

This answer builds on Richard's answer but is more targeted to the question.
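The %in% + paste idea from the previous answer might be sketched like this on toy data (all names here are illustrative; it still computes the product row by row, but the composite key and the %in% subset replace the three-way logical comparison against all of A):

set.seed(1)

# Toy panel: 3 ids over 40 consecutive days, random returns
A <- data.frame(id   = rep(c("X", "Y", "Z"), each = 40),
                date = rep(as.Date("2012-01-01") + 0:39, times = 3),
                ret  = rnorm(120, 0, 0.01))

# B: the (id, date) pairs we actually care about
B <- A[A$date >= as.Date("2012-01-20"), c("id", "date")]

# Composite stock-date key on A via paste()
A$key <- paste(A$id, A$date)

# For each row of B, build the keys of the 10 previous days and
# use %in% to pull the matching returns out of A in one subset
B$past10d <- sapply(seq_len(nrow(B)), function(i) {
  r <- A$ret[A$key %in% paste(B$id[i], B$date[i] - 10:1)]
  prod(1 + r) - 1
})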
The key idea is to build one vector of id-date combinations to compare against; this happens in the second code block.
My solution uses the data.table package, but with some syntax changes it should also work with a data.frame. Using the data.table package has the advantage of key columns.
If you still have trouble, you can pair this approach with John's second answer and crop A first.
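A rough sketch of the keyed data.table lookup (assuming current data.table syntax with := and .(), which postdates the original thread; column names as in the question):

library(data.table)

A <- as.data.table(A); setkey(A, id, date)   # key columns enable
B <- as.data.table(B); setkey(B, id, date)   # binary-search joins

# For each (id, date) group in B -- one row per group, since the
# pair is a unique identifier -- join A on the 10 prior dates and
# compound the matched returns; nomatch = 0L skips missing days
B[, past10d := {
  r <- A[.(id, date - 10:1), ret, nomatch = 0L]
  prod(1 + r) - 1
}, by = .(id, date)]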
Did you try ?merge ?
"Merge two data frames by common columns or row names, or do other versions of database join operations. "
Besides, I suggest using a small local MySQL / PostgreSQL database (via RMySQL / RPostgreSQL) if you continuously work with composite PKs or similar unique identifiers. To me, rearranging the data in SQL and then using data.frames built from a view is a lot easier than looping.
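For the exact-key part of the problem, merge is a one-liner (note it only matches rows on the key; the 10-day look-back window would still need a separate step):

# Left join on the composite (id, date) key:
# all.x = TRUE keeps every row of B, matched or not
AB <- merge(B, A, by = c("id", "date"), all.x = TRUE)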
Is this any faster? (I am assuming the combination of B$id and B$date is a unique identifier not replicated anywhere - implied by your code)
If you haven't got data that is replicated in both A and B, then rbind is the simplest solution.
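For completeness, that is just (assuming A and B have identical column layouts):

# Stack the rows of B under A
AB <- rbind(A, B)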