I'm importing xlsx 2007 tables into R 3.2.1 patched, using package readxl 0.1.0 under 64-bit Windows 7. The tables are on the order of 25,000 rows by 200 columns.
Function read_excel() works a treat. My only problem is with its assignment of column class (datatype) to sparsely populated columns. For example, a given column may be NA for 20,000 rows and then take a character value on row 20,001. read_excel() appears to default to column type numeric when it scans the first n rows of a column and finds only NAs. The data causing the problem are characters in a column assigned type numeric; when the error limit is reached, execution halts. Since I actually want the data in the sparse columns, setting the error limit higher isn't a solution.
I can identify the troublesome columns by reviewing the warnings thrown, and read_excel() has an option for asserting a column's datatype through its col_types argument, which according to the package docs is:

Either NULL to guess from the spreadsheet or a character vector containing blank, numeric, date or text.
But does this mean I have to construct a vector of length 200, populated in almost every position with blank and with text in the handful of positions corresponding to the offending columns?

There's probably a way of doing this in a couple of lines of R code: create a vector of the required length and fill it with blanks, maybe create another vector containing the numbers of the columns to be forced to text, and then ... Or maybe it's possible to point read_excel() at just the columns for which its guesses aren't as desired.
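Something like this sketch is what I have in mind (offending_cols is a hypothetical vector of the column numbers gleaned from the warnings):

    col_types <- rep("blank", 200)        # default for all 200 columns
    col_types[offending_cols] <- "text"   # force the sparse character columns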
I'd appreciate any suggestions.
Thanks in advance.
Reviewing the source, we see that there is an Rcpp call that returns the guessed column types:
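From memory of the 0.1.0 source, the guessing lives in an unexported helper (hence the ::: access below; its other arguments are elided here):

    # print the unexported guessing helper to inspect its defaults
    readxl:::xlsx_col_types
    # ...its formals end in: nskip = 0L, n = 100L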
You can see that by default, nskip = 0L, n = 100L checks the first 100 rows to guess the column types. You can change nskip to ignore the header text and increase n (at the cost of a much slower runtime), as in the sketch below. Without looking at the Rcpp source, it's not immediately clear to me whether nskip = 0L skips the header row (the zeroth row in C++ counting) or skips no rows. I avoided the ambiguity by just using nskip = 1L, since skipping one row of my dataset doesn't affect the overall column type guesses.
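For example (the path is a placeholder, and passing it positionally is an assumption; only nskip and n differ from the defaults):

    # guess the column types from 1,000 rows, skipping the first row
    ct <- readxl:::xlsx_col_types("mydata.xlsx", nskip = 1L, n = 1000L)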
I have encountered a similar problem. In my case, empty rows and columns were used as separators, and the sheets contained a lot of tables with different formats. So the {openxlsx} and {readxl} packages do not suit this situation: openxlsx removes empty columns (and there is no parameter to change this behavior), and readxl works as you described, so some data may be lost.

As a result, I think the best solution, if you want to automatically handle large amounts of Excel data, is to read the sheets without changes, in text format, and then process the resulting data.frames according to your own rules.
This function can read sheets without changes (thanks to @jack-wasey).
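A minimal sketch of the idea, assuming readxl >= 1.0, where a single col_types value is recycled across all columns:

    library(readxl)

    # read a whole sheet with every column forced to "text", so nothing
    # is guessed, coerced, or silently dropped
    read_sheet_as_text <- function(path, sheet = 1) {
      read_excel(path, sheet = sheet, col_types = "text")
    }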
The internal functions for guessing column types can be told to scan any number of rows, but read_excel() doesn't expose that (yet?).

The solution below is just a rewrite of the original function read_excel() with an added argument n_max that defaults to all rows. Due to lack of imagination, the extended function is named read_excel2.
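In spirit, the rewrite boils down to this sketch (not the full original rewrite; the wrapper body and the positional path argument are assumptions, built on the same unexported 0.1.0 helper discussed above):

    library(readxl)

    # read_excel() with an n_max argument controlling how many rows are
    # scanned when guessing column types; defaults to (effectively) all rows
    read_excel2 <- function(path, sheet = 1L, na = "", n_max = 1048576L) {
      col_types <- readxl:::xlsx_col_types(path, nskip = 1L, n = n_max)
      read_excel(path, sheet = sheet, na = na, col_types = col_types)
    }

    # usage: df <- read_excel2("mydata.xlsx")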
Just replace read_excel with read_excel2 to have the column types guessed from all rows.

You might take an evil performance hit because of this extended guessing. I haven't tried it on really big data sets yet, only on data small enough to verify that the function works.
It depends on whether your data is sparse in different places in different columns, and on how sparse it is. I found that having more rows didn't improve the parsing: the majority were still blank and were interpreted as text, even if later on they become dates, etc.

One work-around is to generate the first data row of your Excel table so that it includes representative data for every column, and use that to guess the column types. I don't like this because I want to leave the original data intact.
Another workaround, if you have complete rows somewhere in the spreadsheet, is to use nskip instead of n; this sets the starting point for the column guessing. Say data row 117 has a full set of data:
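For example (same unexported helper as above; the path is a placeholder, and whether the header row counts toward nskip is ambiguous, as noted earlier):

    # skip the first 116 rows so guessing starts around row 117
    col_types <- readxl:::xlsx_col_types("mydata.xlsx", nskip = 116L)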
Note that you can call the function directly, without having to edit it in the namespace. You can then use the vector of spreadsheet types to call read_excel:
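Continuing the snippet above:

    dat <- readxl::read_excel("mydata.xlsx", col_types = col_types)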
Then you can manually update any columns which it still gets wrong.
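For instance (column 5 is just an illustration):

    col_types[5] <- "text"   # override a column that was guessed wrong
    dat <- readxl::read_excel("mydata.xlsx", col_types = col_types)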
Reading the source, it looks like the column types are guessed by the functions xls_col_types or xlsx_col_types, which are implemented in Rcpp but have the defaults nskip = 0L and n = 100L. My C++ is very rusty, but it looks like n = 100L is the argument telling it how many rows to read. As these are non-exported functions, paste in:
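(Presumably something like the following; fixInNamespace() comes with base R's utils package and opens an editor on an unexported function. Use xls_col_types instead for .xls files.)

    fixInNamespace("xlsx_col_types", "readxl")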
And in the pop-up, change the n = 100L to a larger number. Then rerun your file import.

New solution since readxl version 1.x:

The solution in the currently preferred answer no longer works with readxl versions newer than 0.1.0, since the package-internal function readxl:::xlsx_col_types it relies on no longer exists. The new solution is to use the newly introduced parameter guess_max to increase the number of rows used to guess the appropriate data type of the columns:
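For example (the file name is a placeholder):

    library(readxl)

    # scan every possible Excel row when guessing the column types
    read_excel("mydata.xlsx", guess_max = 1048576)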
The value 1,048,576 is the maximum number of rows currently supported by Excel; see the Excel specifications and limits: https://support.office.com/en-us/article/Excel-specifications-and-limits-1672b34d-7043-467e-8e27-269d656771c3

PS: If you care about performance when using all rows to guess the data types: read_excel seems to read the file only once, and the guessing is then done in memory, so the performance penalty is very small compared to the work saved.