Tricks to manage the available memory in an R session

Posted 2018-12-31 04:20

What tricks do people use to manage the available memory of an interactive R session? I use the functions below [based on postings by Petr Pikal and David Hinds to the r-help list in 2004] to list (and/or sort) the largest objects and to occasionally rm() some of them. But by far the most effective solution was ... to run under 64-bit Linux with ample memory.

Any other nice tricks folks want to share? One per post, please.

# improved list of objects
.ls.objects <- function (pos = 1, pattern, order.by,
                        decreasing=FALSE, head=FALSE, n=5) {
    # helper: apply fn to every object named in `names`, fetched from environment `pos`
    napply <- function(names, fn) sapply(names, function(x)
                                         fn(get(x, pos = pos)))
    names <- ls(pos = pos, pattern = pattern)
    obj.class <- napply(names, function(x) as.character(class(x))[1])
    obj.mode <- napply(names, mode)
    obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
    obj.size <- napply(names, object.size)
    obj.dim <- t(napply(names, function(x)
                        as.numeric(dim(x))[1:2]))
    # for objects without dim() (plain vectors etc., but not functions), report length() as "Rows"
    vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
    obj.dim[vec, 1] <- napply(names, length)[vec]
    out <- data.frame(obj.type, obj.size, obj.dim)
    names(out) <- c("Type", "Size", "Rows", "Columns")
    if (!missing(order.by))
        out <- out[order(out[[order.by]], decreasing=decreasing), ]
    if (head)
        out <- head(out, n)
    out
}
# shorthand
lsos <- function(..., n=10) {
    .ls.objects(..., order.by="Size", decreasing=TRUE, head=TRUE, n=n)
}
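
For instance, a typical call looks like this (an illustrative usage of the functions above; the listing will of course depend on what is in your own workspace):

## ten largest objects in the global environment, largest first
lsos()

## or call .ls.objects directly, e.g. for the three largest objects only
.ls.objects(order.by = "Size", decreasing = TRUE, head = TRUE, n = 3)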

25 answers

余生无你 (#2) · 2018-12-31 04:53

I quite like the improved objects function developed by Dirk. Much of the time, though, a more basic output with just the object name and size is sufficient for me. Here's a simpler function with a similar objective. The listing can be sorted alphabetically or by size, limited to a certain number of objects, and ordered ascending or descending. Also, I often work with data of 1 GB or more, so the function changes units accordingly.

showMemoryUse <- function(sort="size", decreasing=FALSE, limit) {

  callerEnv <- parent.frame()   # report on the environment the function is called from
  objectList <- ls(callerEnv)

  oneKB <- 1024
  oneMB <- 1048576
  oneGB <- 1073741824

  memoryUse <- sapply(objectList, function(x) as.numeric(object.size(get(x, envir=callerEnv))))

  memListing <- sapply(memoryUse, function(size) {
        if (size >= oneGB) return(paste(round(size/oneGB,2), "GB"))
        else if (size >= oneMB) return(paste(round(size/oneMB,2), "MB"))
        else if (size >= oneKB) return(paste(round(size/oneKB,2), "kB"))
        else return(paste(size, "bytes"))
      })

  memListing <- data.frame(objectName=names(memListing), memorySize=memListing, row.names=NULL)

  if (sort=="alphabetical") memListing <- memListing[order(memListing$objectName,decreasing=decreasing),]
  else memListing <- memListing[order(memoryUse,decreasing=decreasing),]  # default: sort by size

  if(!missing(limit)) memListing <- memListing[1:limit,]

  print(memListing, row.names=FALSE)
  return(invisible(memListing))
}

And here is some example output:

> showMemoryUse(decreasing=TRUE, limit=5)
      objectName memorySize
       coherData  713.75 MB
 spec.pgram_mine  149.63 kB
       stoch.reg  145.88 kB
      describeBy    82.5 kB
      lmBandpass   68.41 kB
浅入江南 (#3) · 2018-12-31 04:53
  1. I'm fortunate that my large data sets are saved by the instrument in "chunks" (subsets) of roughly 100 MB (32-bit binary). Thus I can do the pre-processing steps (deleting uninformative parts, downsampling) sequentially before fusing the data set.

  2. Calling gc() "by hand" can help if the size of the data gets close to the available memory.

  3. Sometimes a different algorithm needs much less memory.
     Sometimes there's a trade-off between vectorization and memory use.
     Compare: split & lapply vs. a for loop.

  4. For the sake of fast & easy data analysis, I often work first with a small random subset (sample()) of the data; a sketch follows this list. Once the data-analysis script/.Rnw is finished, the code and the complete data go to the calculation server for an overnight / over-the-weekend / ... run.
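
A minimal sketch of point 4, with made-up object names (bigData, devData) standing in for your own data:

set.seed(1)
bigData <- data.frame(x = rnorm(1e6),
                      g = sample(letters[1:5], 1e6, replace = TRUE))

## develop and debug on a small random subset
devData <- bigData[sample(nrow(bigData), 1e4), ]
fit <- lm(x ~ g, data = devData)

## once the script works, swap devData for bigData and run the full job
## on the calculation server overnight / over the weekend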

心情的温度 (#4) · 2018-12-31 04:54

I use the data.table package. With its := operator you can:

  • Add columns by reference
  • Modify subsets of existing columns by reference, and by group by reference
  • Delete columns by reference

None of these operations copy the (potentially large) data.table at all, not even once (see the sketch after this list).

  • Aggregation is also particularly fast because data.table uses much less working memory.
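
A quick sketch of those by-reference operations (assuming the data.table package is installed; the table and column names here are just for illustration):

library(data.table)

DT <- data.table(id = rep(1:3, each = 4), x = rnorm(12))

DT[, y := x * 2]                    # add a column by reference
DT[id == 1, y := 0]                 # modify a subset of an existing column by reference
DT[, grpMean := mean(x), by = id]   # add a grouped column by reference
DT[, y := NULL]                     # delete a column by reference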


长期被迫恋爱 (#5) · 2018-12-31 04:56

If you really want to avoid leaks, you should avoid creating any big objects in the global environment.

What I usually do is to have a function that does the job and returns NULL — all data is read and manipulated in this function or others that it calls.
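
A minimal sketch of that pattern (the file and column names are made up for illustration):

runAnalysis <- function(infile, outfile) {
  big <- read.csv(infile)                                   # large object lives only in this frame
  result <- aggregate(value ~ group, data = big, FUN = mean)
  write.csv(result, outfile, row.names = FALSE)
  NULL                                                      # nothing big is kept in the global environment
}

## runAnalysis("big_input.csv", "summary.csv")
## once the call returns, `big` is out of scope and can be garbage-collected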

心情的温度 (#6) · 2018-12-31 04:57

Just to note that the data.table package's tables() seems to be a pretty good replacement for Dirk's custom .ls.objects() function (detailed in earlier answers), although it only covers data.frames/data.tables and not e.g. matrices, arrays, or lists.
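
For example (a quick sketch, assuming data.table is installed):

library(data.table)
DT1 <- data.table(a = rnorm(1e6))
DT2 <- data.table(b = 1:10, key = "b")
tables()   # one row per data.table in the session: name, NROW, NCOL, size in MB, column names, key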

还给你的自由 (#7) · 2018-12-31 04:58

I never save an R workspace. I use import scripts and data scripts, and write any especially large data objects that I don't want to recreate often out to files. This way I always start with a fresh workspace and don't need to clean out large objects. That is a very nice function, though.
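
One way to set that up (a sketch; buildBigObject() is a placeholder for whatever expensive import/processing step you want to avoid re-running):

buildBigObject <- function() {            # placeholder for an expensive import / processing step
  data.frame(x = rnorm(1e6))
}

cacheFile <- "bigObject.rds"
if (file.exists(cacheFile)) {
  bigObject <- readRDS(cacheFile)         # reuse the object written in an earlier session
} else {
  bigObject <- buildBigObject()
  saveRDS(bigObject, cacheFile)           # write it to a file instead of relying on a saved workspace
}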
