I have hundreds of large CSV files (ranging from 10k to 100k lines each), and some of them have badly formed Description fields with unescaped quotes inside the quoted value, so the lines might look something like this:
ID,Description,x
3434,"abc"def",988
2344,"fred",3484
2345,"fr""ed",3485
2346,"joe,fred",3486
I need to be able to cleanly parse all of these lines in R as CSV. dput()'ing the sample into a character vector and trying to read it back:
txt <- c("ID,Description,x",
"3434,\"abc\"def\",988",
"2344,\"fred\",3484",
"2345,\"fr\"\"ed\",3485",
"2346,\"joe,fred\",3486")
read.csv(text=txt[1:4], colClasses='character')
Error in read.table(file = file, header = header, sep = sep, quote = quote, :
incomplete final line found by readTableHeader on 'text'
If we change the quoting (quote='') and do not include the last line with the embedded comma, it works well:
read.csv(text=txt[1:4], colClasses='character', quote='')
However, if we change the quoting and include the last line with the embedded comma, it fails again:
read.csv(text=txt[1:5], colClasses='character', quote='')
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, :
line 1 did not have 4 elements
EDIT x2: I should have said that, unfortunately, some of the descriptions also have commas in them; the code above has been edited accordingly.
Change the quote setting: reading with quote='' keeps the mismatched quotes from derailing the parser, as your own experiment shows. Edit, to deal with the errant commas: once quoting is disabled, an embedded comma splits the Description into extra fields, so you have to glue the middle fields back together yourself; a sketch follows. No idea if that is sufficiently efficient for your use case.
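The code for that edit did not survive extraction, so here is a minimal sketch of the idea, assuming every file has exactly one quoted Description column sitting between two unquoted ones (txt is the character vector from the question):

body  <- txt[-1]                                  # drop the header line
parts <- strsplit(body, ",", fixed = TRUE)        # naive split on every comma
rows  <- lapply(parts, function(p) {
  n    <- length(p)
  desc <- paste(p[2:(n - 1)], collapse = ",")     # re-join errant commas
  desc <- gsub('^"|"$', "", desc)                 # strip the outer quotes
  desc <- gsub('""', '"', desc)                   # unescape doubled quotes
  c(p[1], desc, p[n])
})
df <- as.data.frame(do.call(rbind, rows), stringsAsFactors = FALSE)
names(df) <- strsplit(txt[1], ",", fixed = TRUE)[[1]]

Splitting on every comma and re-joining the interior fields avoids the quote parser entirely, which is why the malformed quotes no longer matter.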
As there is only one quoted column in this set of nasty files, I can do a read.csv() on each side to handle the unquoted columns to the left and right of the quoted column, so my current solution is based on the info from both @agstudy and @Roland. Running this on a wider set of data works, thankfully.
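A minimal sketch of what this comment describes, again using txt from the question; the regular expressions assume the quoted Description is the only field that contains quotes, and the column names are taken from the sample header:

body  <- txt[-1]
left  <- read.csv(text = sub(',".*$', "", body),   # everything before the first ,"
                  header = FALSE, colClasses = "character")
right <- read.csv(text = sub('^.*",', "", body),   # everything after the last ",
                  header = FALSE, colClasses = "character")
desc  <- sub('^[^"]*,"(.*)",.*$', "\\1", body)     # the quoted field itself
df <- data.frame(ID = left$V1, Description = desc, x = right$V1,
                 stringsAsFactors = FALSE)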
You can use readLines and extract the element using regmatches, taking whatever sits between ," and ",; a sketch follows.