Fast CSV parsing

Posted 2020-01-25 07:35

Question:

I have a Java server app that downloads a CSV file and parses it. The parsing can take from 5 to 45 minutes, and happens every hour. This method is the bottleneck of the app, so it's not premature optimization. The code so far:

        client.executeMethod(method);
        InputStream in = method.getResponseBodyAsStream(); // this is http stream

        String line;
        String[] record;

        reader = new BufferedReader(new InputStreamReader(in), 65536);

        try {
            // read the header line
            line = reader.readLine();
            // some code
            while ((line = reader.readLine()) != null) {
                 // more code

                 line = line.replaceAll("\"\"", "\"NULL\"");

                 // Now remove all of the quotes
                 line = line.replaceAll("\"", "");     


                 if (!line.startsWith("ERROR")) {
                   //bla bla 
                    continue;
                 }

                 record = line.split(",");
                 //more error handling
                 // build the object and put it in HashMap
         }
         //exceptions handling, closing connection and reader

Is there any existing library that would help me to speed up things? Can I improve existing code?

Answer 1:

Apache Commons CSV

Have you seen Apache Commons CSV?

Caveat On Using split

Bear in mind that split only returns a view of the data, meaning that the original line object is not eligible for garbage collection while there is a reference to any of its views. Perhaps making a defensive copy will help? (Java bug report)
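To illustrate the defensive-copy idea: on JVMs from before Java 7u6, String.substring (which split uses) shared the parent string's backing char array, so copying each field released the original line for collection. A minimal sketch (on modern JVMs new String(s) is simply a plain copy, so this is only relevant historically):

```java
public class DefensiveCopy {
    public static void main(String[] args) {
        String line = "alpha,beta,gamma";
        String[] fields = line.split(",");
        // Copy each field so no reference pins the original line's char array.
        for (int i = 0; i < fields.length; i++) {
            fields[i] = new String(fields[i]);
        }
        System.out.println(fields[1]); // beta
    }
}
```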

It is also not reliable at grouping quoted CSV columns that contain commas.
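A quick sketch of that failure mode: splitting naively on commas breaks a quoted field apart.

```java
public class SplitCaveat {
    public static void main(String[] args) {
        String line = "id,\"Doe, John\",42";
        // A real CSV parser would yield 3 fields; split on commas yields 4,
        // cutting the quoted field "Doe, John" in half.
        String[] parts = line.split(",");
        System.out.println(parts.length); // 4
    }
}
```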



Answer 2:

opencsv

Take a look at opencsv.

This blog post, “opencsv is an easy CSV parser”, has example usage.



Answer 3:

Apart from the suggestions made above, you could try improving your code with threading and concurrency.

Here is a brief analysis and a suggested solution:

  1. From the code it seems that you are reading the data over the network (most likely with the Apache Commons HttpClient library).
  2. Make sure the bottleneck you describe is not in the data transfer over the network.
  3. One way to check is to dump the data to a file (without parsing) and time that. This gives you an idea of how much time is actually spent parsing, compared to your current observation.
  4. Now have a look at how the java.util.concurrent package is used. Some of the links you can use are (1, 2).
  5. The work you are doing inside the loop can be executed in a thread.
  6. Using a thread pool and concurrency will greatly improve your performance.

Though the solution involves some effort, in the end it will surely help you.
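As a hedged sketch of points 4–6 (the class and pool size here are illustrative, not from the original post): one thread reads lines while an ExecutorService parses them, and Futures preserve record order.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelParseSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for lines read from the HTTP stream.
        List<String> lines = List.of("a,b,c", "d,e,f", "g,h,i");

        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<String[]>> futures = new ArrayList<>();
        for (String line : lines) {
            // Hand each line's parsing off to the pool, keeping the reader thread free.
            futures.add(pool.submit(() -> line.split(",")));
        }
        for (Future<String[]> f : futures) {
            String[] record = f.get(); // blocks until that line is parsed
            System.out.println(record.length);
        }
        pool.shutdown();
    }
}
```

Whether this helps depends on point 2: if the network transfer dominates, parallel parsing buys little.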



Answer 4:

The problem with your code is that it uses replaceAll and split, which are very costly operations. You should definitely consider using a CSV parser/reader that does a one-pass parse.
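A small illustration of the replaceAll cost (a sketch, not from the original answer): String.replaceAll compiles its regex on every call, so if you keep regex-based cleanup at all, precompiling the Pattern once and reusing it avoids that per-line overhead.

```java
import java.util.regex.Pattern;

public class PrecompiledPattern {
    // Compiled once, instead of on every replaceAll call inside the loop.
    private static final Pattern DOUBLE_QUOTES = Pattern.compile("\"\"");

    public static void main(String[] args) {
        String line = "a,\"\",b";
        String cleaned = DOUBLE_QUOTES.matcher(line).replaceAll("\"NULL\"");
        System.out.println(cleaned); // a,"NULL",b
    }
}
```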

There is a benchmark on GitHub:

https://github.com/uniVocity/csv-parsers-comparison

Unfortunately it was run under Java 6; the numbers are slightly different under Java 7 and 8. I'm trying to get more detailed data for different file sizes, but it's a work in progress.

See https://github.com/arnaudroger/csv-parsers-comparison



Answer 5:

opencsv

You should have a look at OpenCSV. I would expect that it has performance optimizations.



Answer 6:

A little late here, but there are now a few benchmarking projects for CSV parsers. Your selection will depend on the exact use case (i.e. raw data vs. data binding, etc.).

  • SimpleFlatMapper
  • uniVocity
  • sesseltjonna-csv (disclaimer: I wrote this parser)


Answer 7:

For speed you do not want to use replaceAll, and you don't want to use a regex either. In critical cases like this, what you basically always want is a character-by-character state-machine parser. Here the whole thing is rolled into an Iterable: it consumes the stream and parses it without saving it out or caching it, so aborting early works fine as well. It should also be short and clearly enough coded to make it obvious how it works.

public static Iterable<String[]> parseCSV(final InputStream stream) throws IOException {
    return new Iterable<String[]>() {
        @Override
        public Iterator<String[]> iterator() {
            return new Iterator<String[]>() {
                static final int UNCALCULATED = 0;
                static final int READY = 1;
                static final int FINISHED = 2;
                int state = UNCALCULATED;
                ArrayList<String> value_list = new ArrayList<>();
                StringBuilder sb = new StringBuilder();
                String[] return_value;

                public void end() {
                    end_part();
                    return_value = new String[value_list.size()];
                    value_list.toArray(return_value);
                    value_list.clear();
                }

                public void end_part() {
                    value_list.add(sb.toString());
                    sb.setLength(0);
                }

                public void append(int ch) {
                    sb.append((char) ch);
                }

                public void calculate() throws IOException {
                    boolean inquote = false;
                    while (true) {
                        int ch = stream.read();
                        switch (ch) {
                            default: // regular character
                                append(ch);
                                break;
                            case -1: // end of stream
                                if ((sb.length() == 0) && (value_list.isEmpty())) {
                                    state = FINISHED;
                                } else {
                                    end();
                                    state = READY;
                                }
                                return;
                            case '\r':
                            case '\n': // end of line
                                if (inquote) {
                                    append(ch);
                                } else if ((sb.length() == 0) && value_list.isEmpty()) {
                                    break; // skip the '\n' of a "\r\n" pair and blank lines
                                } else {
                                    end();
                                    state = READY;
                                    return;
                                }
                                break;
                            case ',': // field separator
                                if (inquote) {
                                    append(ch);
                                } else {
                                    end_part();
                                }
                                break;
                            case '"': // quote; note: a doubled quote ("") inside a
                                      // quoted field is dropped, not unescaped to "
                                inquote = !inquote;
                                break;
                        }
                    }
                }

                @Override
                public boolean hasNext() {
                    if (state == UNCALCULATED) {
                        try {
                            calculate();
                        } catch (IOException ex) {
                            throw new UncheckedIOException(ex); // don't swallow read errors
                        }
                    }
                    return state == READY;
                }

                @Override
                public String[] next() {
                    if (state == UNCALCULATED) {
                        try {
                            calculate();
                        } catch (IOException ex) {
                            throw new UncheckedIOException(ex);
                        }
                    }
                    state = UNCALCULATED;
                    return return_value;
                }
            };
        }
    };
}

You would typically process this like:

for (String[] csv : parseCSV(stream)) {
    //<deal with parsed csv data>
}

The convenience of that API is worth the rather cryptic-looking function.



Answer 8:

Apache Commons CSV ➙ 12 seconds for million rows

Is there any existing library that would help me to speed up things?

Yes, the Apache Commons CSV project works very well in my experience.

Here is an example app that uses the Apache Commons CSV library to write and read rows of 24 columns: a sequential integer, an Instant, and then 22 random UUID objects.

For 10,000 rows, the writing and the reading each take about half a second. The reading includes reconstituting the Integer, Instant, and UUID objects.

My example code lets you toggle on or off the reconstituting of objects. I ran both with a million rows. This creates a file of 850 megs. I am using Java 12 on a MacBook Pro (Retina, 15-inch, Late 2013), 2.3 GHz Intel Core i7, 16 GB 1600 MHz DDR3, Apple built-in SSD.

For a million rows, ten seconds for reading plus two seconds for parsing:

  • Writing: PT25.994816S
  • Reading only: PT10.353912S
  • Reading & parsing: PT12.219364S

The source code is a single .java file with a write method and a read method, both called from a main method.

I opened a BufferedReader by calling Files.newBufferedReader.

package work.basil.example;

import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVPrinter;
import org.apache.commons.csv.CSVRecord;

import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.Duration;
import java.time.Instant;
import java.util.UUID;

public class CsvReadingWritingDemo
{
    public static void main ( String[] args )
    {
        CsvReadingWritingDemo app = new CsvReadingWritingDemo();
        app.write();
        app.read();
    }

    private void write ()
    {
        Instant start = Instant.now();
        int limit = 1_000_000; // 10_000  100_000  1_000_000
        Path path = Paths.get( "/Users/basilbourque/IdeaProjects/Demo/csv.txt" );
        try (
                Writer writer = Files.newBufferedWriter( path, StandardCharsets.UTF_8 );
                CSVPrinter printer = new CSVPrinter( writer , CSVFormat.RFC4180 );
        )
        {
            printer.printRecord( "id" , "instant" , "uuid_01" , "uuid_02" , "uuid_03" , "uuid_04" , "uuid_05" , "uuid_06" , "uuid_07" , "uuid_08" , "uuid_09" , "uuid_10" , "uuid_11" , "uuid_12" , "uuid_13" , "uuid_14" , "uuid_15" , "uuid_16" , "uuid_17" , "uuid_18" , "uuid_19" , "uuid_20" , "uuid_21" , "uuid_22" );
            for ( int i = 1 ; i <= limit ; i++ )
            {
                printer.printRecord( i , Instant.now() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() , UUID.randomUUID() );
            }
        } catch ( IOException ex )
        {
            ex.printStackTrace();
        }
        Instant stop = Instant.now();
        Duration d = Duration.between( start , stop );
        System.out.println( "Wrote CSV for limit: " + limit );
        System.out.println( "Elapsed: " + d );
    }

    private void read ()
    {
        Instant start = Instant.now();

        int count = 0;
        Path path = Paths.get( "/Users/basilbourque/IdeaProjects/Demo/csv.txt" );
        try (
                Reader reader = Files.newBufferedReader( path , StandardCharsets.UTF_8) ;
        )
        {
            CSVFormat format = CSVFormat.RFC4180.withFirstRecordAsHeader();
            CSVParser parser = CSVParser.parse( reader , format );
            for ( CSVRecord csvRecord : parser )
            {
                if ( true ) // Toggle parsing of the string data into objects. Turn off (`false`) to see strictly the time taken by Apache Commons CSV to read & parse the lines. Turn on (`true`) to get a feel for real-world load.
                {
                    Integer id = Integer.valueOf( csvRecord.get( 0 ) ); // Annoying zero-based index counting.
                    Instant instant = Instant.parse( csvRecord.get( 1 ) );
                    for ( int i = 3 - 1 ; i <= 24 - 1 ; i++ ) // Columns 3 through 24 hold the UUIDs. Subtract one for annoying zero-based index counting.
                    {
                        UUID uuid = UUID.fromString( csvRecord.get( i ) );
                    }
                }
                count++;
                if ( count % 1_000 == 0 )  // Every so often, report progress.
                {
                    //System.out.println( "# " + count );
                }
            }
        } catch ( IOException e )
        {
            e.printStackTrace();
        }

        Instant stop = Instant.now();
        Duration d = Duration.between( start , stop );
        System.out.println( "Read CSV for count: " + count );
        System.out.println( "Elapsed: " + d );
    }
}


Tags: java parsing csv