How do I split a huge text file in Python?

Posted 2019-01-17 20:34

I have a huge text file (~1 GB) and, sadly, the text editor I use won't read such a large file. However, if I can just split it into two or three parts I'll be fine, so, as an exercise, I wanted to write a program in Python to do it.

What I think I want the program to do is find the size of the file, divide that number into parts, and, for each part, read up to that point in chunks, writing to a filename.nnn output file, then read up to the next line break and write that, then close the output file, and so on. Obviously the last output file just copies to the end of the input file.

Can you help me with the key filesystem-related parts: getting the file size, reading and writing in chunks, and reading to a line break?

I'll be writing this code test-first, so there's no need to give me a complete answer, unless it's a one-liner ;-)
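
For reference, the rough skeleton I have in mind (untested; the function name, part count, and chunk size are just placeholders):

import os

def split_file(filename, parts=3, chunk_size=1024 * 1024):
    part_size = os.path.getsize(filename) // parts
    with open(filename, 'rb') as src:
        for n in range(parts):
            with open('%s.%03d' % (filename, n), 'wb') as dst:
                if n == parts - 1:
                    dst.write(src.read())  # last part: copy whatever is left
                    return
                written = 0
                while written < part_size:
                    # copy fixed-size chunks up to the target part size
                    chunk = src.read(min(chunk_size, part_size - written))
                    if not chunk:
                        return
                    dst.write(chunk)
                    written += len(chunk)
                dst.write(src.readline())  # finish the current line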

14 Answers
甜甜的少女心
#2 · 2019-01-17 21:22

As an alternative method, using the logging library:

>>> import logging.handlers
>>> log = logging.getLogger()
>>> fh = logging.handlers.RotatingFileHandler("D:/filename.txt",
...     maxBytes=2**20*100, backupCount=100)
# 100 MB each, up to a maximum of 100 files
>>> log.addHandler(fh)
>>> log.setLevel(logging.INFO)
>>> f = open("D:/biglog.txt")
>>> for line in f:  # iterate until end of file
...     log.info(line.strip())

Your files will appear as follows:

filename.txt (end of file)
filename.txt.1
filename.txt.2
...
filename.txt.10 (start of file)

This is a quick and easy way to chop a huge log file into pieces that match your RotatingFileHandler configuration.

劫难
#3 · 2019-01-17 21:22

Or, a Python version of wc -l and split:

lines = 0
with open(filename) as f:
    for line in f:
        lines += 1

Then some code to read the first lines/3 into one file, the next lines/3 into another, and so on.
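
For example, a sketch of that second step (assuming the lines count computed above and a three-way split):

per_part = lines // 3 + 1  # round up so three files cover every line
with open(filename) as src:
    for part in range(3):
        with open('%s.%d' % (filename, part), 'w') as out:
            for _ in range(per_part):
                line = src.readline()
                if not line:
                    break
                out.write(line)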

趁早两清
#4 · 2019-01-17 21:23

This worked for me:

import os

fil = "inputfile"
outfil = "outputfile"

numbits = 1000000000  # roughly 1 GB of lines per output file

with open(fil, 'r') as f:
    for i in range(os.stat(fil).st_size // numbits + 1):
        # readlines(sizehint) returns whole lines totalling about numbits bytes
        segment = f.readlines(numbits)
        if not segment:
            break
        with open(outfil + str(i), 'w') as o:
            # each line already ends with its newline, so write it as-is
            o.writelines(segment)
smile是对你的礼貌
#5 · 2019-01-17 21:24

I had a requirement to split csv files for import into Dynamics CRM, since the file size limit for import is 8 MB and the files we receive are much larger. This program lets the user enter file names and a lines-per-file value, then splits each specified file into parts of the requested number of lines. I can't believe how fast it works!

# user inputs FileNames and LinesPerFile
FileCount = 1
FileNames = []
while True:
    # on Python 2, use raw_input instead of input
    FileName = input('File Name ' + str(FileCount) + ' (enter "Done" after last File): ')
    FileCount = FileCount + 1
    if FileName == 'Done':
        break
    else:
        FileNames.append(FileName)
LinesPerFile = int(input('Lines Per File: '))

for FileName in FileNames:
    File = open(FileName)

    # the first line is the Header row
    Header = File.readline()

    FileCount = 0
    Linecount = 1
    NewFile = None
    for Line in File:

        # skip any repeated Header rows in the input
        if Line == Header:
            continue

        # start a NewFile with the Header every LinesPerFile lines
        if (Linecount - 1) % LinesPerFile == 0:
            if NewFile:
                NewFile.close()
            FileCount = FileCount + 1
            NewFileName = FileName[:FileName.find('.')] + '-Part' + str(FileCount) + FileName[FileName.find('.'):]
            NewFile = open(NewFileName, 'w')
            NewFile.write(Header)

        NewFile.write(Line)
        Linecount = Linecount + 1

    if NewFile:
        NewFile.close()
    File.close()
霸刀☆藐视天下
#6 · 2019-01-17 21:26

Here is a Python script you can use to split large files with subprocess and the Unix split command:

"""
Splits the file into the same directory and
deletes the original file
"""

import subprocess
import sys
import os

SPLIT_FILE_CHUNK_SIZE = '5000'  # lines per output file
SPLIT_PREFIX_LENGTH = '2'  # subprocess wants string arguments; -a 2 yields suffixes aa, ab, ac, ...

if __name__ == "__main__":

    file_path = sys.argv[1]
    # i.e. split -a 2 -l 5000 t/some_file.txt ~/tmp/t/
    subprocess.call(["split", "-a", SPLIT_PREFIX_LENGTH, "-l", SPLIT_FILE_CHUNK_SIZE, file_path,
                     os.path.dirname(file_path) + '/'])

    # Remove the original file once done splitting
    try:
        os.remove(file_path)
    except OSError:
        pass

You can call it externally:

import os
fs_result = os.system("python file_splitter.py {}".format(local_file_path))

You can also import subprocess and run it directly in your program.
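
For example, something like this (assuming the script above is saved as file_splitter.py):

import subprocess
subprocess.call(["python", "file_splitter.py", local_file_path])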

The issue with this approach is memory: subprocess works by forking your process, and although the fork is copy-on-write on most systems, the child can transiently be charged the same memory footprint as the parent, so if your process is already memory-heavy it can briefly double your usage while the command runs. The same goes for os.system.

Here is another, pure-Python way of doing it. I haven't tested it on huge files, and it will be slower, but it is leaner on memory:

import unicodecsv

CHUNK_SIZE = 5000

def yield_csv_rows(reader, chunk_size):
    """
    Reads rows from a csv reader and yields them in lists of
    chunk_size rows. Expects the header is already consumed.
    :param reader: DictReader
    :param chunk_size: int, rows per chunk
    """
    chunk = []
    for i, row in enumerate(reader):
        if i % chunk_size == 0 and i > 0:
            yield chunk
            chunk = []  # start a fresh list; the consumer still holds the one just yielded
        chunk.append(row)
    if chunk:
        yield chunk

with open(local_file_path, 'rb') as f:
    # the first line is the header; strip the quotes so it splits cleanly
    header = f.readline().decode('utf-8').strip().replace('"', '')
    reader = unicodecsv.DictReader(f, fieldnames=header.split(','), delimiter=',', quotechar='"')
    for chunk in yield_csv_rows(reader, CHUNK_SIZE):
        # Do something with your chunk here
        pass
Viruses.
#7 · 2019-01-17 21:27

Check out os.stat() for the file size, and file.readlines([sizehint]) for reading in chunks of whole lines. Those two functions should be all you need for the reading part, and hopefully you know how to do the writing :)
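
For instance, a minimal sketch of that idea (untested; the filename and the three-way split are made up, and readlines(sizehint) returns whole lines totalling roughly sizehint bytes):

import os

FILENAME = 'big.txt'  # hypothetical input file
size = os.stat(FILENAME).st_size

with open(FILENAME) as f:
    part = 0
    while True:
        # grab about a third of the file, rounded up to whole lines
        chunk = f.readlines(size // 3 + 1)
        if not chunk:
            break
        with open('%s.%d' % (FILENAME, part), 'w') as out:
            out.writelines(chunk)
        part += 1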
