I have a long file I need to parse. Because it's very long, I need to do it chunk by chunk. I tried this:
function parseFile(file) {
    var chunkSize = 2000;
    var fileSize = (file.size - 1);

    var foo = function(e) {
        console.log(e.target.result);
    };

    for (var i = 0; i < fileSize; i += chunkSize) {
        (function(fil, start) {
            var reader = new FileReader();
            var blob = fil.slice(start, chunkSize + 1);
            reader.onload = foo;
            reader.readAsText(blob);
        })(file, i);
    }
}
After running it I see only the first chunk in the console. If I change 'console.log' to a jQuery append to some div, I still see only the first chunk in that div. What about the other chunks? How do I make this work?
I came up with an interesting idea that is probably very fast, since it converts the blob to a ReadableByteStreamReader. It is probably much easier too, since you don't need to handle things like chunk size and offset and then do it all recursively in a loop.
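A minimal sketch of that idea, assuming a browser where Blob.prototype.stream() is available; it reads the file as a stream of Uint8Array chunks and decodes each one as text:

async function parseFileAsStream(file) {
    // file.stream() returns a ReadableStream of Uint8Array chunks
    const reader = file.stream().getReader();
    // stream: true lets the decoder handle multi-byte characters split across chunks
    const decoder = new TextDecoder('utf-8');

    while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        console.log(decoder.decode(value, { stream: true }));
    }
}

The chunk size is decided by the stream itself, so you just keep calling read() until done is true.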
Parsing the large file into small chunks using a simple method:
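A minimal sketch of that approach, assuming Blob.prototype.text() is available, slicing the file and awaiting each chunk in turn (the chunk size here is an arbitrary example):

async function parseFileInChunks(file, chunkSize = 64 * 1024) {
    for (let offset = 0; offset < file.size; offset += chunkSize) {
        // slice(start, end): end is a byte index, not a length
        const chunk = file.slice(offset, offset + chunkSize);
        const text = await chunk.text();  // Blob.prototype.text() returns a Promise<string>
        console.log(text);
    }
}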
The second argument of slice is actually the end byte, not the chunk size. Your code should look something like the sketch below. Alternatively, you can use BlobReader for an easier interface.
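With that fix applied to the code from the question (and using file.size directly as the loop bound so the last byte isn't skipped), it might look like this:

function parseFile(file) {
    var chunkSize = 2000;
    var fileSize = file.size;

    var foo = function(e) {
        console.log(e.target.result);
    };

    for (var i = 0; i < fileSize; i += chunkSize) {
        (function(fil, start) {
            var reader = new FileReader();
            // end byte, not length: read the range [start, start + chunkSize)
            var blob = fil.slice(start, start + chunkSize);
            reader.onload = foo;
            reader.readAsText(blob);
        })(file, i);
    }
}

Note that the reads are still asynchronous, so the chunks may be logged out of order; see the answers below for sequential approaches.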
Revamped @alediaferia's answer in a class (TypeScript version here), returning the result in a promise. Brave coders would even have wrapped it into an async iterator…
Example printing a whole file in the console (within an async context):
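A minimal sketch of such a promise-based chunk reader (the class name, method names, and default chunk size are illustrative assumptions, not the original author's code):

class FileChunkReader {
    constructor(file, chunkSize = 64 * 1024) {
        this.file = file;
        this.chunkSize = chunkSize;
        this.offset = 0;
    }

    // Resolves with the next chunk as text, or null when the file is exhausted
    readNextChunk() {
        if (this.offset >= this.file.size) {
            return Promise.resolve(null);
        }
        const blob = this.file.slice(this.offset, this.offset + this.chunkSize);
        this.offset += this.chunkSize;
        return new Promise((resolve, reject) => {
            const reader = new FileReader();
            reader.onload = (e) => resolve(e.target.result);
            reader.onerror = () => reject(reader.error);
            reader.readAsText(blob);
        });
    }
}

// Example printing a whole file in the console (within an async context)
async function printWholeFile(file) {
    const chunkReader = new FileChunkReader(file);
    let chunk;
    while ((chunk = await chunkReader.readNextChunk()) !== null) {
        console.log(chunk);
    }
}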
The FileReader API is asynchronous, so you should handle it with callbacks. A for loop wouldn't do the trick, since it wouldn't wait for each read to complete before reading the next chunk. Here's a working approach.
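A sketch of that working approach: start the next read from inside the onload handler instead of looping, so each chunk is read only after the previous one has finished (function and variable names here are illustrative):

function parseFileSequentially(file, handleChunk) {
    var chunkSize = 64 * 1024; // bytes per read
    var offset = 0;

    function readNextChunk() {
        var reader = new FileReader();
        var blob = file.slice(offset, offset + chunkSize);

        reader.onload = function(e) {
            handleChunk(e.target.result);   // process this chunk's text
            offset += chunkSize;
            if (offset < file.size) {
                readNextChunk();            // only start the next read once this one is done
            }
        };
        reader.onerror = function() {
            console.error('Read error:', reader.error);
        };
        reader.readAsText(blob);
    }

    readNextChunk();
}

Usage, logging each chunk as it arrives: parseFileSequentially(file, function(text) { console.log(text); });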