Quickest way to read a subset of rows of a CSV

Posted 2020-03-04 08:36

I have a 5GB CSV with 2 million rows. The header is a row of comma-separated strings, and each data row is comma-separated doubles with no missing or corrupted values. The file is rectangular.

My objective is to read a random 10% of the rows (with or without replacement, it doesn't matter) into RAM as fast as possible. An example of a slow solution (though still faster than read.csv) is to read the whole matrix with fread and then keep a random 10% of the rows.

require(data.table)
X <- data.matrix(fread('/home/user/test.csv'))  # read the full data.matrix
X <- X[sample(nrow(X), round(nrow(X) / 10)), ]  # keep a random ~10% of the rows

However, I'm looking for the fastest possible solution; this one is slow because it reads the whole file first and only trims it afterwards.

The solution deserving of a bounty will give system.time() estimates for the different alternatives.
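
For reference, a minimal timing harness might look like this (a sketch; the path is the question's example, and any candidate approach would go inside the same system.time() call):

library(data.table)
f <- '/home/user/test.csv'                       # placeholder path from the question

# baseline from above: read everything, then keep ~10%
t <- system.time({
  X <- data.matrix(fread(f))
  X <- X[sample(nrow(X), round(nrow(X) / 10)), ]
})
t['elapsed']                                     # wall-clock seconds to report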

Other:

  • I am using Linux
  • I don't need exactly 10% of the rows. Just approximately 10%.

3 Answers
家丑人穷心不美
#2 · 2020-03-04 09:11

Here's a file with 100000 lines in it like this:

"","a","b","c"
"1",0.825049088569358,0.556148858508095,0.591679535107687
"2",0.161556158447638,0.250450366642326,0.575034103123471
"3",0.676798462402076,0.0854280597995967,0.842135070590302
"4",0.650981109589338,0.204736212035641,0.456373531138524
"5",0.51552157686092,0.420454133534804,0.12279288447462

$ wc -l d.csv 
100001 d.csv

So that's 100000 lines plus a header. We want to keep the header unconditionally, and keep each subsequent line only when a random number drawn from [0, 1) is greater than 0.9.

$ awk 'NR==1 {print; next} rand() > .9 {print}' < d.csv > sample.csv

check:

$ head sample.csv 
"","a","b","c"
"12",0.732729186303914,0.744814146542922,0.199768838472664
"35",0.00979996216483414,0.633388962829486,0.364802648313344
"36",0.927218825090677,0.730419414117932,0.522808947600424
"42",0.383301998255774,0.349473554175347,0.311060158303007

and it has 10027 lines:

$ wc -l sample.csv 
10027 sample.csv

This took 0.033s of real time on my four-year-old box; disk speed is probably the limiting factor here. It should scale linearly, since the file is processed strictly line by line.

You can then read sample.csv into R using read.csv or fread as desired:

> s = fread("sample.csv")
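
If you prefer to keep everything in one R call, newer versions of data.table let fread run the shell command itself via its cmd argument; a minimal sketch, assuming the question's file path:

library(data.table)
# NR==1 always keeps the header; every other line survives with probability ~0.1
s <- fread(cmd = "awk 'NR==1 || rand() > .9' /home/user/test.csv")
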
迷人小祖宗
#3 · 2020-03-04 09:27

I think this should work pretty quickly, but let me know how it goes, since I have not tried it with big data yet.

write.csv(iris,"iris.csv")

fread("shuf -n 5 iris.csv")

    V1  V2  V3  V4  V5         V6
1:  37 5.5 3.5 1.3 0.2     setosa
2:  88 6.3 2.3 4.4 1.3 versicolor
3:  84 6.0 2.7 5.1 1.6 versicolor
4: 125 6.7 3.3 5.7 2.1  virginica
5: 114 5.7 2.5 5.0 2.0  virginica

This takes a random sample of N=5 rows from the iris dataset.

shuf treats the header as just another line, so it can end up in the sample; to avoid that, skip it with tail first:

fread("tail -n+2 iris.csv | shuf -n 5", header=FALSE)

Root(大扎)
#4 · 2020-03-04 09:35

You could use sqldf::read.csv.sql and an SQL command to pull the data in:

library(sqldf)
write.csv(iris, "iris.csv", quote = FALSE, row.names = FALSE) # write a csv file to test with
read.csv.sql("iris.csv","SELECT * FROM file ORDER BY RANDOM() LIMIT 10")
   Sepal_Length Sepal_Width Petal_Length Petal_Width    Species
1           6.3         2.8          5.1         1.5  virginica
2           4.6         3.1          1.5         0.2     setosa
3           5.4         3.9          1.7         0.4     setosa
4           4.9         3.0          1.4         0.2     setosa
5           5.9         3.0          4.2         1.5 versicolor
6           6.6         2.9          4.6         1.3 versicolor
7           4.3         3.0          1.1         0.1     setosa
8           4.8         3.4          1.9         0.2     setosa
9           6.7         3.3          5.7         2.5  virginica
10          5.9         3.2          4.8         1.8 versicolor

It doesn't calculate the 10% for you, but it lets you choose the absolute number of rows to return.
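
If you do want roughly 10% without fixing an absolute count, a Bernoulli-style filter is an alternative that avoids the full sort that ORDER BY RANDOM() implies; a sketch using SQLite's integer RANDOM():

library(sqldf)
# each row passes with probability ~1/10, so no sort of the whole table is needed
read.csv.sql("iris.csv", "SELECT * FROM file WHERE (RANDOM() % 10) = 0")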
