I have a recurring task of splitting a set of large (about 1-2 GiB each) gzipped Apache logfiles into several parts (say chunks of 500K lines). The final files should be gzipped again to limit the disk usage.
On Linux I would normally do:
zcat biglogfile.gz | split -l500000
The resulting files will be named xaa, xab, xac, etc. So then I do:
gzip x*
The downside of this method is that these huge uncompressed files are stored on disk as an intermediate result. Is there a way to avoid this intermediate disk usage?
Can I (in a way similar to what xargs does) have split pipe its output through a command like gzip, so the chunks are recompressed on the fly? Or am I looking in the wrong direction and is there a much better way to do this?
Thanks.
There's zipsplit, but that uses the zip algorithm as opposed to the gzip algorithm.
You can use the split --filter option, as explained in the manual, e.g.
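For example (a sketch assuming GNU split with --filter support; $FILE is the environment variable split sets to each output file name, and the .gz suffix is my addition):

zcat biglogfile.gz | split -l500000 --filter='gzip > $FILE.gz'

This produces xaa.gz, xab.gz, etc. directly, with no uncompressed intermediate files on disk. Note the single quotes, so that $FILE is expanded by split's subshell rather than your shell.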
Edit: I am not aware of when the --filter option was introduced, but according to the comments, it does not work in coreutils 8.4. A script like the following might suffice.
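A minimal sketch of such a fallback, assuming GNU awk and gzip are available; the chunk size of 500000 lines and the part- prefix are illustrative and can be changed:

zcat biglogfile.gz | awk -v lines=500000 -v prefix=part- '
  NR % lines == 1 {
    if (cmd) close(cmd)                      # finish the previous gzip pipe
    cmd = "gzip > " prefix sprintf("%05d", ++chunk) ".gz"
  }
  { print | cmd }                            # stream each line straight into gzip
'

Because every chunk is piped straight into its own gzip process, nothing uncompressed ever touches the disk.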