Big integers when reading a file with readr in R

Posted 2019-07-12 01:43

Question:

I wanted to use the readr package since I will be working with some larger files in the future. My problem is that one column, called Intensity, contains some very large values (e.g. 5493500000). The first of these large values only appears at line 2200, by which point readr has already guessed the column type as integer instead of numeric, so parsing the value produces an overflow.
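This is a minimal sketch of the call that triggers the behaviour described above (the file name is hypothetical); with the default col_types the column types are guessed from the first rows, before the large Intensity value is ever seen:

```r
library(readr)

# Default call: column types are guessed from the rows read first,
# so the Intensity column ends up as integer and the value at line
# 2200 no longer fits.
dat <- read_tsv("data.tsv")
```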

Is there a way to provide the type for just a single column to read_tsv? I don't want to specify the correct type for all (roughly) 40 columns.

Any help is appreciated.

Answer 1:

You need the argument col_types = cols(Intensity = col_double()). As per the manual, this overrides the type that would otherwise be guessed from the first 1000 rows; any columns you don't name are still guessed as usual. If you only want to read a subset of the columns, use cols_only() instead.
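A minimal sketch of both variants, assuming the same hypothetical "data.tsv" file with an Intensity column among the ~40 others:

```r
library(readr)

# Fix the type of Intensity only; all other columns are still guessed.
dat <- read_tsv(
  "data.tsv",
  col_types = cols(Intensity = col_double())
)

# Alternatively, cols_only() reads just the columns named here and
# drops the rest of the ~40 columns.
dat_subset <- read_tsv(
  "data.tsv",
  col_types = cols_only(Intensity = col_double())
)
```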



Tags: r readr