Question:

The CSV file to be processed does not fit in memory. How can one read ~20K random lines from it to do basic statistics on the resulting data frame?
Answer 1:
You can also just do it in the terminal with Perl.
perl -ne 'print if (rand() < .01)' biglist.txt > subset.txt
This won't necessarily get you exactly 20,000 lines. (Here it keeps each line with probability 0.01, i.e. roughly 1% of the total; if you know the total line count, use 20000 divided by that count as the threshold to target ~20,000 lines.) It will, however, be really fast, and you'll have a copy of both the full file and the sample in your directory. You can then load the smaller file into R however you want.
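For example, here is a minimal sketch of driving the same one-liner from R and then loading the sample. The ~2,000,000-line total used to pick the 0.01 threshold is a made-up figure, and biglist.txt / subset.txt are just the placeholder names from the command above (this assumes a Unix-like shell):

# Keep each line with probability 20000 / 2000000 = 0.01 to target ~20,000 lines
system("perl -ne 'print if (rand() < 0.01)' biglist.txt > subset.txt")
# The sample now fits in memory; header = FALSE because the original header line
# is unlikely to survive random sampling
sub_df <- read.csv("subset.txt", header = FALSE)
summary(sub_df)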
Answer 2:
Try this, based on examples 6e and 6f on the sqldf GitHub home page:
library(sqldf)
DF <- read.csv.sql("x.csv", sql = "select * from file order by random() limit 20000")
See ?read.csv.sql, using other arguments as needed based on the particulars of your file.
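For instance, if your file has a header row and comma separators (assumptions about your file; adjust to match it), the call might look like the sketch below; header and sep are ordinary read.csv.sql arguments:

library(sqldf)
# header = TRUE: first row holds column names; sep = ",": comma-separated fields
DF <- read.csv.sql("x.csv",
                   sql = "select * from file order by random() limit 20000",
                   header = TRUE, sep = ",")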
Answer 3:
This should work:
RowsInCSV <- 10000000  # or however many rows the file actually has
# Read one random row per iteration; sample(RowsInCSV, 1) picks how many lines to skip
List <- lapply(1:20000, function(x)
  read.csv("YourFile.csv", nrows = 1, skip = sample(RowsInCSV, 1), header = FALSE))
DF <- do.call(rbind, List)
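As written, sample(RowsInCSV, 1) is drawn independently each time, so the same row can be picked more than once, and every read.csv call rescans the file from the top. A hedged variant of the same idea that at least avoids duplicate rows is to draw all 20,000 distinct offsets up front (file name and row count are the placeholders from the answer above):

skips <- sample(RowsInCSV, 20000)  # 20,000 distinct random line offsets
List <- lapply(skips, function(s)
  read.csv("YourFile.csv", nrows = 1, skip = s, header = FALSE))
DF <- do.call(rbind, List)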
Answer 4:
The following can be used if you have an ID column or something similar in your data: take a sample of IDs, then subset the data to the sampled IDs.
sampleids <- sample(data$id,1000)
newdata <- subset(data, data$id %in% sampleids)
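Note that this approach assumes the data (or at least its id column) is already loaded. A minimal end-to-end sketch with a made-up data frame and the 20,000-row target from the question:

# Hypothetical stand-in for your real data
data <- data.frame(id = 1:1000000, value = rnorm(1000000))

sampleids <- sample(data$id, 20000)           # sample 20,000 ids
newdata   <- subset(data, id %in% sampleids)  # keep only the matching rows
summary(newdata)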