sort across multiple files in Linux

Posted 2020-05-21 05:02

Question:

I have multiple (many) files; each very large:

file0.txt
file1.txt
file2.txt

I do not want to join them into a single file because the resulting file would be 10+ gigabytes. Each line in each file contains a 40-byte string. The strings are fairly well ordered already: roughly 1 in 10 steps is a decrease in value rather than an increase.

I would like the lines ordered (in place if possible?). This means some of the lines from the end of file0.txt will be moved to the beginning of file1.txt, and vice versa.

I am working on Linux and fairly new to it. I know about the sort command for a single file, but I am wondering if there is a way to sort across multiple files. Or maybe there is a way to make a pseudo-file out of the smaller files that Linux will treat as a single file.

What I know I can do: I can sort each file individually, read into file1.txt to find the value larger than the largest in file0.txt (and similarly grab the lines from the end of file0.txt), join, and then sort... but this is a pain and assumes no values from file2.txt belong in file0.txt (however unlikely that is in my case).

Edit

To be clear, if the files look like this:

f0.txt
DDD
XXX
AAA

f1.txt
BBB
FFF
CCC

f2.txt
EEE
YYY
ZZZ

I want this:

f0.txt
AAA
BBB
CCC

f1.txt
DDD
EEE
FFF

f2.txt
XXX
YYY
ZZZ

Answer 1:

I don't know of a command that does in-place sorting, but I think a faster "merge sort" is possible:

for file in *.txt; do
    sort -o "$file" "$file"
done
sort -m *.txt | split -d -l 1000000 - output
  • The sort in the for loop makes sure the content of the input files is sorted. If you don't want to overwrite the originals, simply change the value after the -o parameter. (If you expect the files to be sorted already, you could change the sort statement to a check only: sort -c "$file" || exit 1)
  • The second sort does efficient merging of the input files, all while keeping the output sorted.
  • This is piped to the split command which will then write to suffixed output files. Notice the - character; this tells split to read from standard input (i.e. the pipe) instead of a file.

Also, here's a short summary of how the merge sort works:

  1. sort reads a line from each file.
  2. It orders these lines and selects the one which should come first. This line gets sent to the output, and a new line is read from the file which contained this line.
  3. Repeat step 2 until there are no more lines in any file.
  4. At this point, the output should be a perfectly sorted file.
  5. Profit!
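
If it helps to see the same idea outside of sort, here is a minimal Python sketch of that k-way merge (the file names are assumed from the question). heapq.merge reads the already-sorted inputs lazily, keeping only one line per file in memory at a time, and its output could be piped to split just like the sort -m output above:

import heapq
import sys
from contextlib import ExitStack

inputs = ["file0.txt", "file1.txt", "file2.txt"]   # assumed file names

with ExitStack() as stack:
    files = [stack.enter_context(open(p)) for p in inputs]
    # heapq.merge performs steps 1-3: read one line per file,
    # emit the smallest, refill from the file it came from.
    for line in heapq.merge(*files):
        sys.stdout.write(line)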


Answer 2:

It isn't exactly what you asked for, but the sort(1) utility can help, a little, using the --merge option. Sort each file individually, then sort the resulting pile of files:

for f in file*.txt ; do sort -o "$f" < "$f" ; done
sort --merge file*.txt | split -l 100000 - sorted_file

(That's 100,000 lines per output file. Perhaps that's still way too small.)



Answer 3:

I believe that this is your best bet, using stock Linux utilities:

  • sort each file individually, e.g. for f in file*.txt; do sort "$f" > "sorted_$f"; done (this produces sorted_file0.txt, sorted_file1.txt, and so on)

  • sort everything using sort -m sorted_file*.txt | split -d -l <lines> - <prefix>, where <lines> is the number of lines per file, and <prefix> is the filename prefix. (The -d tells split to use numeric suffixes).

The -m option to sort lets it know the input files are already sorted, so it can be smart.



Answer 4:

mmap() the 3 files; since all lines are 40 bytes long, you can easily sort them in place (SIP :-). Don't forget the msync() at the end.
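
If you want to see the mechanics, here is a rough Python sketch for a single file, assuming each record is the 40-byte string plus a newline (41 bytes). A real implementation in C would swap records inside the mapping (for example with qsort) rather than building a list, and you would still need the merge step from the other answers to order records across the files:

import mmap

REC = 41  # assumed record size: 40-byte string + '\n'

with open("file0.txt", "r+b") as f:       # assumed file name
    mm = mmap.mmap(f.fileno(), 0)
    # collect the fixed-width records, sort them, write them back
    records = [mm[i:i + REC] for i in range(0, len(mm), REC)]
    records.sort()
    mm[:] = b"".join(records)
    mm.flush()                            # the msync() step
    mm.close()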



Answer 5:

If the files are sorted individually, then you can use sort -m file*.txt to merge them together: it reads the first line of each file, outputs the smallest one, and repeats.