Split large files using Python

Posted 2020-05-21 04:43

Question:

I'm having trouble splitting large files (say, around 10GB). The basic idea is simply to read the lines and group every, say, 40000 lines into one file. But there are two ways of "reading" files.

1) The first is to read the WHOLE file at once and make it into a LIST. But this requires loading the WHOLE file into memory, which is painful for a file this large. (I think I have asked such questions before.) In Python, the approaches I've tried to read the WHOLE file at once include:

input1 = f.readlines()

input1 = commands.getoutput('zcat ' + file).splitlines(True)   # commands is Python 2 only

input1 = subprocess.Popen(["cat", file],
                          stdout=subprocess.PIPE, bufsize=1)   # gives a Popen object, not a list of lines

Well, then I can easily group 40000 lines into one file with slices like list[40000:80000] or list[80000:120000]. The advantage of using a list is that we can easily point to specific lines.
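For example, once the lines are in a list, the grouping is just slice arithmetic. A minimal sketch (the filenames are made up, and this only works when the whole file fits in memory):

CHUNK = 40000
with open("myinput.txt") as f:
    lines = f.readlines()              # the whole file held in memory at once
for n in range(0, len(lines), CHUNK):
    with open("part%d.txt" % (n // CHUNK), "w") as out:
        out.writelines(lines[n:n + CHUNK])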

2) The second way is to read line by line and process each line while reading it, so the lines that have been read are not kept in memory. Examples include:

f = gzip.open(file)
for line in f:
    blablabla...

or

for line in fileinput.FileInput(fileName):

I'm sure that for gzip.open, this f is NOT a list but a file object, and it seems we can only process it line by line; so how can I carry out this "split" job? How can I point to specific lines of the file object?
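For reference, itertools.islice can pull a specific range of lines out of a file object lazily, without loading everything; a sketch (process() is a hypothetical placeholder for the per-line work):

import gzip
from itertools import islice

with gzip.open("myinput.gz", "rt") as f:
    # lines 40000..79999; earlier lines are consumed, not stored
    for line in islice(f, 40000, 80000):
        process(line)   # hypothetical per-line handler

Note that islice still has to read (and discard) all preceding lines, so it is O(n) in time even though it is O(1) in memory.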

Thanks

Answer 1:

NUM_OF_LINES = 40000
filename = 'myinput.txt'
with open(filename) as fin:
    fout = open("output0.txt", "w")
    for i, line in enumerate(fin):
        fout.write(line)
        if (i + 1) % NUM_OF_LINES == 0:
            fout.close()
            # start the next numbered output file (integer division)
            fout = open("output%d.txt" % ((i + 1) // NUM_OF_LINES), "w")
    fout.close()


Answer 2:

If there's nothing special about having a specific number of lines in each file, the readlines() function also accepts a size 'hint' parameter that behaves like this:

If given an optional parameter sizehint, it reads that many bytes from the file and enough more to complete a line, and returns the lines from that. This is often used to allow efficient reading of a large file by lines, but without having to load the entire file in memory. Only complete lines will be returned.

...so you could write that code something like this:

# assume that an average line is about 80 chars long, and that we want about
# 40K lines in each file.

SIZE_HINT = 80 * 40000

fileNumber = 0
with open("inputFile.txt", "rt") as f:
    while True:
        buf = f.readlines(SIZE_HINT)
        if not buf:
            # we've read the entire file in, so we're done.
            break
        outFile = open("outFile%d.txt" % fileNumber, "wt")
        outFile.writelines(buf)   # buf is a list of lines
        outFile.close()
        fileNumber += 1


Answer 3:

For a 10GB file, the second approach is clearly the way to go. Here is an outline of what you need to do (a minimal sketch follows the list):

  1. Open the input file.
  2. Open the first output file.
  3. Read one line from the input file and write it to the output file.
  4. Maintain a count of how many lines you've written to the current output file; as soon as it reaches 40000, close the output file, and open the next one.
  5. Repeat steps 3-4 until you've reached the end of the input file.
  6. Close both files.
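A minimal sketch of those steps (assuming a plain-text input; all names are illustrative):

LINES_PER_FILE = 40000

with open("input.txt") as fin:                      # step 1
    part = 0
    fout = open("part%d.txt" % part, "w")           # step 2
    count = 0
    for line in fin:                                # step 3
        fout.write(line)
        count += 1
        if count == LINES_PER_FILE:                 # step 4
            fout.close()
            part += 1
            fout = open("part%d.txt" % part, "w")
            count = 0
    fout.close()                                    # step 6

If the line count is an exact multiple of 40000, this leaves one empty trailing file; a real version would guard against that.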


Answer 4:

import fileinput

chunk_size = 40000
fout = None
for i, line in enumerate(fileinput.FileInput(filename)):
    if i % chunk_size == 0:
        if fout:
            fout.close()
        fout = open('output%d.txt' % (i // chunk_size), 'w')
    fout.write(line)
if fout:
    fout.close()


Answer 5:

Obviously, since you are doing work on the file, you will need to iterate over its contents in some way; whether you do that manually or let part of the Python API do it for you (e.g. the readlines() method) is not important. In big-O terms, this means you will spend O(n) time (n being the size of the file).

But reading the file into memory also requires O(n) space. Although sometimes we do need to read a 10 GB file into memory, your particular problem does not require it: we can iterate over the file object directly. Of course, the file object itself takes some space, but we have no reason to hold the contents of the file twice in two different forms.

Therefore, I would go with your second solution.



Tags: python split