Can you recommend a Java library for reading, parsing, validating and mapping rows in a comma separated value (CSV) file to Java value objects (JavaBeans)?
I can recommend SuperCSV. Simple to use, and did everything I needed.
We have used http://opencsv.sourceforge.net/ with good success.
I also came across another question with good links: Java lib or app to convert CSV to XML file?
I find Flatpack to be really good with handling quirky CSV files (escapes, quotes, bad records, etc.)
Super CSV is a great choice for reading/parsing, validating and mapping CSV files to POJOs!
We (the Super CSV team) have just released a new version (you can download it from SourceForge or Maven).
Reading a CSV file
The following example uses CsvDozerBeanReader (a new reader we've just released that uses Dozer for bean mapping, with support for deep mapping and index-based mapping) - it's based on the example from our website. If you don't need the Dozer functionality (or you just want a simple standalone dependency), then you can use CsvBeanReader instead (see this code example).
Example CSV file
Here is an example CSV file that represents responses to a survey. It has a header and 3 rows of data, all with 8 columns.
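A sketch of such a file (the values are purely illustrative; only the 8-column header-plus-3-rows shape is taken from the description above):

```
age,consentGiven,questionNo1,answer1,questionNo2,answer2,questionNo3,answer3
18,Y,1,Twelve,2,Albert Einstein,3,Big Bang Theory
,Y,1,Thirteen,2,Nikola Tesla,3,Stargate
42,N,1,,2,Carl Sagan,3,Star Wars
```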
Defining the mapping from CSV to POJO
Each row of CSV will be read into a SurveyResponse class, each of which has a List of Answers. In order for the mapping to work, your classes should be valid JavaBeans (i.e. have a default no-arg constructor and getters/setters defined for each field).
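As a sketch, the two beans might look like the following (the field names are assumptions consistent with the survey example; getters/setters are what makes them valid JavaBeans):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical bean for one row of the survey CSV - note the
// no-arg constructor (implicit) and a getter/setter per field.
class SurveyResponse {

    private int age;
    private Boolean consentGiven;
    private List<Answer> answers = new ArrayList<Answer>();

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    public Boolean getConsentGiven() { return consentGiven; }
    public void setConsentGiven(Boolean consentGiven) { this.consentGiven = consentGiven; }

    public List<Answer> getAnswers() { return answers; }
    public void setAnswers(List<Answer> answers) { this.answers = answers; }
}

// Each response holds a list of these
class Answer {

    private Integer questionNo;
    private String answer;

    public Integer getQuestionNo() { return questionNo; }
    public void setQuestionNo(Integer questionNo) { this.questionNo = questionNo; }

    public String getAnswer() { return answer; }
    public void setAnswer(String answer) { this.answer = answer; }
}
```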
In Super CSV you define the mapping with a simple String array - each element of the array corresponds to a column in the CSV file.
With CsvDozerBeanMapper you can use:
- simple field mappings (e.g. firstName)
- deep mappings (e.g. address.country.code)
- indexed mappings (e.g. middleNames[1] - zero-based index for arrays or Collections)
- deep + indexed mappings (e.g. person.middleNames[1])
The following is the field mapping for this example - it uses a combination of these:
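A sketch of that mapping as a String array (the field names are assumptions matching the survey beans; FieldMappingExample is just a hypothetical holder class):

```java
public class FieldMappingExample {

    // Each element corresponds to one CSV column, in order - a combination
    // of simple, deep and indexed mappings.
    public static final String[] FIELD_MAPPING = new String[] {
            "age",                   // simple field mapping
            "consentGiven",          // simple field mapping
            "answers[0].questionNo", // indexed (first element) + deep mapping
            "answers[0].answer",
            "answers[1].questionNo", // indexed (second element) + deep mapping
            "answers[1].answer",
            "answers[2].questionNo",
            "answers[2].answer" };
}
```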
Conversion and Validation
Super CSV has a useful library of cell processors, which can be used to convert the Strings from the CSV file to other data types (e.g. Date, Integer), or to do constraint validation (e.g. mandatory/optional, regex matching, range checking).
Using cell processors is entirely optional - without them each column of CSV will be a String, so each field must be a String also.
The following is the cell processor configuration for the example. As with the field mapping, each element in the array represents a CSV column. It demonstrates how cell processors can transform the CSV data to the data type of your field, and how they can be chained together.
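A sketch of that configuration, using processors from Super CSV's cell processor library (the per-column choices here - optional age, boolean consent - are assumptions for the survey example):

```java
import org.supercsv.cellprocessor.Optional;
import org.supercsv.cellprocessor.ParseBool;
import org.supercsv.cellprocessor.ParseInt;
import org.supercsv.cellprocessor.ift.CellProcessor;

// One processor per CSV column (a null element would mean "no processing").
// Optional(new ParseInt()) demonstrates chaining: the column may be blank,
// but when present it is converted to an Integer.
final CellProcessor[] processors = new CellProcessor[] {
        new Optional(new ParseInt()), // age
        new ParseBool(),              // consentGiven
        new ParseInt(),               // questionNo 1
        new Optional(),               // answer 1
        new ParseInt(),               // questionNo 2
        new Optional(),               // answer 2
        new ParseInt(),               // questionNo 3
        new Optional()                // answer 3
};
```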
Reading
Reading with Super CSV is very flexible: you supply your own Reader (so you can read from a file, the classpath, a zip file, etc.), and the delimiter and quote character are configurable via preferences (of which there are a number of pre-defined configurations that cater for most usages).
The code below is pretty self-explanatory:
1. Create the reader (with your Reader and preferences)
2. (Optionally) read the header
3. Configure the bean mapping
4. Keep calling read() until you get a null (end of file)
5. Close the reader
Code:
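A sketch of what the code looks like with CsvDozerBeanReader, following the five steps above (the file name, mapping and processor values are assumptions repeated from the survey example):

```java
import java.io.FileReader;

import org.supercsv.cellprocessor.Optional;
import org.supercsv.cellprocessor.ParseBool;
import org.supercsv.cellprocessor.ParseInt;
import org.supercsv.cellprocessor.ift.CellProcessor;
import org.supercsv.io.dozer.CsvDozerBeanReader;
import org.supercsv.io.dozer.ICsvDozerBeanReader;
import org.supercsv.prefs.CsvPreference;

public class Reading {

    // hypothetical file name
    private static final String CSV_FILENAME = "survey.csv";

    // the field mapping and cell processors described above
    private static final String[] FIELD_MAPPING = new String[] {
            "age", "consentGiven",
            "answers[0].questionNo", "answers[0].answer",
            "answers[1].questionNo", "answers[1].answer",
            "answers[2].questionNo", "answers[2].answer" };

    private static final CellProcessor[] PROCESSORS = new CellProcessor[] {
            new Optional(new ParseInt()), new ParseBool(),
            new ParseInt(), new Optional(),
            new ParseInt(), new Optional(),
            new ParseInt(), new Optional() };

    public static void main(String[] args) throws Exception {
        ICsvDozerBeanReader beanReader = null;
        try {
            // 1. create the reader with your Reader and preferences
            beanReader = new CsvDozerBeanReader(new FileReader(CSV_FILENAME),
                    CsvPreference.STANDARD_PREFERENCE);

            // 2. read (and here, ignore) the header
            beanReader.getHeader(true);

            // 3. configure the bean mapping
            beanReader.configureBeanMapping(SurveyResponse.class, FIELD_MAPPING);

            // 4. keep reading until read() returns null (end of file)
            SurveyResponse surveyResponse;
            while ((surveyResponse = beanReader.read(SurveyResponse.class, PROCESSORS)) != null) {
                System.out.println(String.format("lineNo=%s, rowNo=%s, surveyResponse=%s",
                        beanReader.getLineNumber(), beanReader.getRowNumber(), surveyResponse));
            }
        } finally {
            // 5. close the reader
            if (beanReader != null) {
                beanReader.close();
            }
        }
    }
}
```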
More Information
You can find a lot more information on the website!
I've had good success both parsing and writing CSV files from Java with OpenCSV. If you want to read or write Excel-compatible spreadsheets with Java, the POI library from Apache is the way to go.
Hey, I have an open-source project for that: JFileHelpers. I think the main advantage is that it uses Java annotations - take a look:
If you have this bean:
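As a sketch (the annotation and package names here are from memory of the JFileHelpers samples, so verify them against the project - this one is a fixed-width record, and there is a corresponding delimited-record annotation for CSV):

```java
import java.util.Date;

// package name is an assumption
import org.coury.jfilehelpers.annotations.FieldConverter;
import org.coury.jfilehelpers.annotations.FieldFixedLength;
import org.coury.jfilehelpers.annotations.FixedLengthRecord;
import org.coury.jfilehelpers.converters.ConverterKind;

@FixedLengthRecord()
public class Customer {

    @FieldFixedLength(4)
    public Integer custId;

    @FieldFixedLength(20)
    public String name;

    @FieldFixedLength(10)
    @FieldConverter(converter = ConverterKind.Date, format = "dd-MM-yyyy")
    public Date added;
}
```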
And want to parse this file:
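A matching fixed-width data file might look like this (the values are purely illustrative):

```
1   Antonio Pereira     12-12-1978
2   Felipe Coury        01-01-2007
3   Anderson Polga      12-11-2007
```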
All you have to do is this:
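Something along these lines - treat FileHelperEngine and its method names as my best recollection of the API rather than gospel:

```java
import java.util.List;

// package name is an assumption
import org.coury.jfilehelpers.engines.FileHelperEngine;

public class ParseCustomers {

    public static void main(String[] args) throws Exception {
        FileHelperEngine<Customer> engine =
                new FileHelperEngine<Customer>(Customer.class);

        // read every record in the file into a list of beans
        List<Customer> customers = engine.readFile("customers.txt");

        for (Customer customer : customers) {
            System.out.println(customer.custId + " - " + customer.name);
        }
    }
}
```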
Also, it supports master-detail, date and format conversion, and a lot more. Let me know what you think!
Best regards!