Help me replace a for loop with an “apply” function

Posted 2020-07-17 06:07

Question:

...if that is possible

My task is to find the longest streak of continuous days a user participated in a game.

Instead of writing an SQL function, I chose to use R's rle function to get the longest streaks, and then update my db table with the results.

The (attached) data frame looks something like this:

    day      user_id
2008/11/01    2001
2008/11/01    2002
2008/11/01    2003
2008/11/01    2004
2008/11/01    2005
2008/11/02    2001
2008/11/02    2005
2008/11/03    2001
2008/11/03    2003
2008/11/03    2004
2008/11/03    2005
2008/11/04    2001
2008/11/04    2003
2008/11/04    2004
2008/11/04    2005

I tried the following to get the longest streak per user:

# turn it into a contingency table
my_table <- table(user_id, day)

# get the streaks
rle_table <- apply(my_table,1,rle)

# verify the longest streak of "1"s for user 2001
# as.vector(tapply(rle_table$'2001'$lengths, rle_table$'2001'$values, max)["1"])

# loop to get the results
# initialize results matrix
res <- matrix(nrow=dim(my_table)[1], ncol=2)

for (i in 1:dim(my_table)[1]) {
    string <- paste("as.vector(tapply(rle_table$'", rownames(my_table)[i], "'$lengths, rle_table$'", rownames(my_table)[i], "'$values, max)['1'])", sep="")
    res[i,] <- c(as.integer(rownames(my_table)[i]), eval(parse(text=string)))
}

Unfortunately this for loop takes too long, and I'm wondering if there is a way to produce the res matrix using a function from the "apply" family.

Thank you in advance

Answer 1:

The apply functions are not always (or even generally) faster than a for loop. That belief is a remnant of R's association with S-PLUS (in the latter, apply really is faster than for). One exception is lapply, which is frequently faster than for (because it uses C code). See this related question.

So you should use apply primarily to improve the clarity of code, not to improve performance.
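
For example, the eval(parse()) construction in your loop is unnecessary: the whole thing collapses to a single sapply over rle_table. A sketch (assuming every user has at least one participating day, so each run-length encoding contains at least one 1):

# longest run of 1s per user, taken straight from the rle results
streaks <- sapply(rle_table, function(r) max(r$lengths[r$values == 1]))
res <- cbind(user_id = as.integer(names(streaks)), longest_streak = streaks)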

You might find Dirk's presentation on high-performance computing useful. One other brute-force approach is "just-in-time compilation" with Ra instead of the normal R version; Ra is optimized to handle for loops.

[Edit:] There are clearly many ways to achieve this, and this is by no means better even if it's more compact. Just working with your code, here's another approach:

# cross-tabulate day x user, then keep the user_id and 0/1 count (Freq) columns
dt <- data.frame(table(dat))[,2:3]
# run-length encode each user's participation sequence (rows are in day order)
dt.b <- by(dt[,2], dt[,1], rle)
# longest run of 1s (consecutive participating days) per user
t(data.frame(lapply(dt.b, function(x) max(x$lengths[x$values == 1]))))

You would probably need to manipulate the output a little further.
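
For instance, a sketch along these lines reshapes it into the two-column matrix from the question (the X prefix appears because data.frame() turns the numeric user IDs into syntactically valid column names):

out <- t(data.frame(lapply(dt.b, function(x) max(x$lengths[x$values == 1]))))
res <- cbind(user_id = as.integer(sub("^X", "", rownames(out))),
             longest_streak = out[, 1])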



Answer 2:

EDIT: Fixed. I originally assumed that I would have to modify most of rle(), but it turns out only a few tweaks were needed.

This isn't an answer about an *apply method, but I wonder if this might not be a faster approach to the process overall. As Shane says, loops aren't so bad. And... I rarely get to show my code to anyone, so I'd be happy to hear some critique of this.

#Shane, I told you this was awesome
dat <- getSOTable("http://stackoverflow.com/questions/1504832/help-me-replace-a-for-loop-with-an-apply-function", 1)
colnames(dat) <- c("day", "user_id")
#Convert to dates so that arithmetic works properly on them
dat$day <- as.Date(dat$day)

#Custom rle for dates
rle.date <- function (x)
{
    #Accept only dates
    if (!inherits(x, "Date"))
        stop("'x' must be an object of class \"Date\"")
    n <- length(x)
    if (n == 0L)
        return(list(lengths = integer(0L), values = x))
    #Dates need to be sorted
    x.sort <- sort(x)
    #y is a vector indicating at which indices the date is not consecutive with its predecessor
    y <- x.sort[-1L] != (x.sort + 1)[-n]
    #i returns the indices of y that are TRUE, and appends the index of the last value
    i <- c(which(y | is.na(y)), n)
    #diff gives the distances between those breakpoints, i.e. the streak lengths; max takes the largest
    max(diff(c(0L, i)))
}
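
A quick sanity check on a made-up vector of dates (three consecutive days, a gap, then a lone day):

rle.date(as.Date(c("2008-11-01", "2008-11-02", "2008-11-03", "2008-11-07")))
# [1] 3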

#Loop
max.consec.use <- matrix(nrow = length(unique(dat$user_id)), ncol = 1)
rownames(max.consec.use) <- unique(dat$user_id)

for(i in 1:length(unique(dat$user_id))){
    user <- unique(dat$user_id)[i]
    uses <- subset(dat, user_id %in% user)
    max.consec.use[paste(user), 1] <- rle.date(uses$day)
}

max.consec.use
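
Incidentally, if an *apply version is wanted after all, the loop reduces to a one-liner. This sketch returns a named vector rather than a matrix, but with the same values:

# same result as the loop above, one entry per user
max.consec.use <- sapply(split(dat$day, dat$user_id), rle.date)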


Answer 3:

Another option:

# convert to Date
day_table$day <- as.Date(day_table$day, format="%Y/%m/%d")
# split by user and then look for contiguous days
contig <- sapply(split(day_table$day, day_table$user_id), function(.days){
    .diff <- cumsum(c(TRUE, diff(.days) != 1))
    max(table(.diff))
})
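
With the question's example data, this returns the expected streaks (2001 and 2005 played all four days, 2002 only the first):

contig
# 2001 2002 2003 2004 2005
#    4    1    2    2    4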


Answer 4:

If you've got a really long list of data, then it sounds like maybe a clustering problem. Each cluster would be defined by a user and dates with a maximum separation distance of one. Then retrieve the largest cluster by user. I'll edit this if I think of a specific method.
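
A minimal sketch of that idea, assuming the dat data frame (with day already converted to Date) from the other answers: start a new cluster whenever the gap to the previous date exceeds one day, then take the size of each user's largest cluster.

largest.cluster <- sapply(split(dat$day, dat$user_id), function(d) {
    d <- sort(unique(d))
    # a new cluster starts wherever the gap exceeds one day
    cluster.id <- cumsum(c(1, as.numeric(diff(d)) > 1))
    max(tabulate(cluster.id))
})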



Answer 5:

This was Chris's suggestion for how to get the data:

dat <- read.table(textConnection(
 "day      user_id
 2008/11/01    2001
 2008/11/01    2002
 2008/11/01    2003
 2008/11/01    2004
 2008/11/01    2005
 2008/11/02    2001
 2008/11/02    2005
 2008/11/03    2001
 2008/11/03    2003
 2008/11/03    2004
 2008/11/03    2005
 2008/11/04    2001
 2008/11/04    2003
 2008/11/04    2004
 2008/11/04    2005
 "), header=TRUE)