Gulp: custom pipe to transform a lot of files in a

Posted 2019-08-14 17:39

Question:

Edit: it seems I'm trying to create a concat-like gulp pipe that first transforms the contents of the various files.

I have been using through2 to write a custom pipe. Scope: using gulp-watch, run a task that loads all of the files, transforms them in memory, and outputs a few files.

Through2 comes with a "highWaterMark" option that defaults to 16 files being read at a time. My pipe doesn't need to be memory-optimized (it reads dozens of <5kb JSON files, runs some transforms and outputs 2 or 3 JSON files), but I'd like to understand the preferred approach.

I'd like to find a good resource / tutorial explaining how such situations are handled; any lead is welcome.

Thanks,

Answer 1:

Ok, found my problem.

When using through2 to create a custom pipe, in order to consume the data (and not stall at the highWaterMark limit), one simply has to attach an .on('data', () => ...) handler, as in the following example:

const gulp = require('gulp');
const through = require('through2');

// searchPatternFolder, composeDictionary and writeDictionary are defined elsewhere
function processAll18nFiles(done) {
  const dictionary = {};
  let count = 0;
  console.log('[i18n] Rebuilding...');
  gulp
    .src(searchPatternFolder)
    .pipe(
      through.obj({ highWaterMark: 1, objectMode: true }, (file, enc, next) => {
        const { data, path } = JSON.parse(file.contents.toString('utf8'));
        next(null, { data, path });
      })
    )
    // this line fixes the issue: with a 'data' consumer, the highWaterMark no longer stalls the stream
    .on('data', ({ data, path }) => ++count && composeDictionary(dictionary, data, path.split('.')))
    .on('end', () =>
      Promise.all(Object.keys(dictionary).map(langKey => writeDictionary(langKey, dictionary[langKey])))
        .then(() => {
          console.log(`[i18n] Done, ${count} files processed, language count: ${Object.keys(dictionary).length}`);
          done();
        })
        .catch(err => console.log('ERROR ', err))
    );
}

NB: mind the "done" parameter, which forces the developer to call it once the task is done().
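The `done` callback is gulp's error-first async-completion convention: the task runner treats the task as still running until the callback is invoked exactly once. A minimal sketch of that convention with a stand-in runner (the runner and its names are hypothetical, for illustration only):

```javascript
// Sketch: gulp-style async completion. A task receives an error-first
// callback and must invoke it when its work is finished; until then
// the runner considers the task still in progress.
function fakeTaskRunner(task, onFinished) {
  task(err => onFinished(err || null));
}

let status = 'pending';
fakeTaskRunner(
  // simulates an async task like processAll18nFiles(done)
  done => setImmediate(() => done()),
  err => { status = err ? 'failed' : 'completed'; }
);
```

Calling `done(err)` with a truthy error signals failure instead; forgetting to call it at all leaves the task hanging, which is why the comment above stresses it.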



Tags: gulp