I am using Node.js to parse xlsx files with the module "jsxlsx_async", and the values will be stored in MongoDB. My code:
var xlsx = require('jsxlsx_async');

xlsx(file, function(err, wb) {
    if (err) {
        // handle the error
        return;
    }
    // get the data array
    wb.getSheetDataByName('Sheet1', function(err, data) {
        if (err) {
            // handle the error
            return;
        }
        // handle the data
        console.log(data);
    });
});
Using: Node.js v0.10.25, MongoDB v2.2.6, OS: Windows 8, RAM: 6 GB
My steps:
1. Read the uploaded xlsx file and save the parsed values into a JS object.
2. Save the values into MongoDB collections by iterating over the JS object (as sketched below).
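For reference, a minimal sketch of step 2, assuming the official "mongodb" Node.js driver, a local database, and a hypothetical collection name "rows"; the exact shape of the data array depends on jsxlsx_async, so adapt the mapping to your own schema:

var MongoClient = require('mongodb').MongoClient;

// inside the xlsx(file, ...) callback from above, where 'wb' is available
wb.getSheetDataByName('Sheet1', function(err, data) {
    if (err) {
        return console.error(err);
    }
    // wrap each parsed row in a document; adapt the field names to your schema
    var docs = data.map(function(row) {
        return { cells: row };
    });
    MongoClient.connect('mongodb://localhost:27017/mydb', function(err, db) {
        if (err) {
            return console.error(err);
        }
        // insert all rows in one call, then close the connection
        db.collection('rows').insert(docs, function(err) {
            if (err) {
                console.error(err);
            }
            db.close();
        });
    });
});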
This works fine with smaller xlsx files, but I want to parse xlsx files larger than 50 MB.
My problem is that I am storing the entire contents of the xlsx file in a single JS object. Please suggest a better approach: is there a way to read an xlsx file row by row and save the values as soon as each row is read?
I had a similar problem before. I needed to read a huge JSON object from a txt file, but the process was killed because it ran out of memory. My solution to that problem was to split the huge file into two files.
Regarding your problem, my suggestions are:
1. Try increasing the memory limit of the V8 engine: https://github.com/joyent/node/wiki/FAQ (8192 means 8 GB; see the example command after this list).
2. If #1 does not work, try reading the xlsx file row by row with this library: https://github.com/ffalt/xlsx-extract (a minimal sketch follows below).
3. If #1 and #2 do not work, try https://github.com/extrabacon/xlrd-parser.
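For #1, the flag is passed when starting Node (the value is in megabytes, so 8192 means 8 GB; "app.js" below is just a placeholder for your own entry script):

node --max-old-space-size=8192 app.js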
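For #2, a rough sketch of row-by-row reading with xlsx-extract; the option name sheet_nr and the per-row handling are assumptions adapted from the project's README, but the idea is that each row can be written to MongoDB as soon as it is emitted, so the whole file never sits in memory:

var XLSX = require('xlsx-extract').XLSX;

new XLSX().extract('path/to/file.xlsx', { sheet_nr: 1 })
    .on('row', function(row) {
        // 'row' is an array of cell values; insert or queue it for MongoDB here
        console.log(row);
    })
    .on('error', function(err) {
        console.error(err);
    })
    .on('end', function() {
        console.log('finished reading the sheet');
    });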