Python and zlib: Terribly slow decompressing concatenated streams

Posted 2019-06-02 10:49

I've been supplied with a zipped file containing multiple individual streams of compressed XML. The compressed file is 833 MB.

If I try to decompress it as a single object, I only get the first stream (about 19 kb).

I've modified the following code, supplied as an answer to an older question, to decompress each stream and write it to a file:

import zlib

outfile = open('output.xml', 'w')

def zipstreams(filename):
    """Return all zip streams and their positions in file."""
    with open(filename, 'rb') as fh:
        data = fh.read()
    i = 0
    print "got it"
    while i < len(data):
        try:
            zo = zlib.decompressobj()
            dat = zo.decompress(data[i:])
            outfile.write(dat)
            zo.flush()
            i += len(data[i:]) - len(zo.unused_data)
        except zlib.error:
            i += 1
    outfile.close()

zipstreams('payload')

This code runs and produces the desired result (all the XML data decompressed to a single file). The problem is that it takes several days to work!

Even though there are tens of thousands of streams in the compressed file, it still seems like this should be a much faster process. Roughly 8 days to decompress 833 MB (estimated 3 GB raw) suggests that I'm doing something very wrong.

Is there another way to do this more efficiently, or is the slow speed the result of a read-decompress-write---repeat bottleneck that I'm stuck with?

Thanks for any pointers or suggestions you have!

3 Answers
姐就是有狂的资本
Answer 1 · 2019-06-02 10:49

Decompressing 833 MB should take about 30 seconds on a modern processor (e.g. a 2 GHz i7). So yes, you are doing something very wrong. Attempting to decompress at every byte offset to see if you get an error is part of the problem, though not all of it. There are better ways to find the compressed data. Ideally you should find or figure out the format. Alternatively, you can search for valid zlib headers using the RFC 1950 specification, though you may get false positives.
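If you do end up scanning for headers, the test itself is cheap. Here is a minimal sketch of the RFC 1950 check (my own illustration, not code from the question or from zlib): the low four bits of the first byte (CM) must be 8 for deflate, and the two header bytes, read as a big-endian integer, must be divisible by 31.

def looks_like_zlib_header(pair):
    """RFC 1950 sanity check on two candidate header bytes (CMF, FLG)."""
    if len(pair) < 2:
        return False
    cmf, flg = bytearray(pair[:2])
    # CM (low nibble of CMF) is 8 for deflate; CMF*256 + FLG is a multiple of 31
    return (cmf & 0x0F) == 8 and (cmf * 256 + flg) % 31 == 0

# 78 9c is the header zlib emits with its default settings, so it's a common sight
assert looks_like_zlib_header(b'\x78\x9c')

Note that random data will occasionally pass this test too, hence the false positives.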

More significant may be that you are reading the entire 833 MB into memory at once, and decompressing the 3 GB to memory, possibly in large pieces each time. How much memory does your machine have? You may be thrashing to virtual memory.

If the code you show works, then the data is not zipped. zip is a specific file format, normally with the .zip extension, that encapsulates raw deflate data within a structure of local and central directory information intended to reconstruct a directory in a file system. You must have something rather different, since your code is looking for and apparently finding zlib streams. What is the format you have? Where did you get it? How is it documented? Can you provide a dump of, say, the first 100 bytes?

The way this should be done is not to read the whole thing into memory and decompress entire streams at once, also into memory. Instead, make use of the zlib.decompressobj interface, which allows you to provide a piece at a time and get the resulting available decompressed data. You can read the input file in much smaller pieces, find the compressed data streams by using the documented format or by looking for zlib (RFC 1950) headers, and then run them a chunk at a time through the decompression object, writing out the decompressed data where you want it. decomp.unused_data can be used to detect the end of a compressed stream (as in the example you found).
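As a rough illustration of that chunked approach (a sketch only; the names and chunk size are placeholders, and it assumes the zlib streams sit back to back with nothing in between, which may not match your file exactly):

import zlib

CHUNK = 64 * 1024  # read the input in modest pieces instead of all 833 MB at once

def decompress_concatenated(in_path, out_path):
    """Stream back-to-back zlib data from in_path to out_path, one chunk at a time."""
    with open(in_path, 'rb') as src, open(out_path, 'wb') as dst:
        decomp = zlib.decompressobj()
        buf = src.read(CHUNK)
        while buf:
            dst.write(decomp.decompress(buf))
            if decomp.unused_data:
                # One stream ended inside this chunk; restart on the leftover bytes
                buf = decomp.unused_data
                decomp = zlib.decompressobj()
            else:
                buf = src.read(CHUNK)
        dst.write(decomp.flush())

Memory use stays at roughly one chunk plus one decompressed piece, instead of the whole file.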

啃猪蹄的小仙女
Answer 2 · 2019-06-02 10:58

It's hard to say very much without more specific knowledge of the file format you're actually dealing with, but it's clear that your algorithm's handling of substrings is quadratic-- not a good thing when you've got tens of thousands of them. So let's see what we know:

You say that the vendor states that they are

using the standard zlib compression library. These are the same compression routines on which the gzip utilities are built.

From this we can conclude that the component streams are in raw zlib format, and are not encapsulated in a gzip wrapper (or a PKZIP archive, or whatever). The authoritative documentation on the ZLIB format is here: http://tools.ietf.org/html/rfc1950

So let's assume that your file is exactly as you describe: A 32-byte header, followed by raw ZLIB streams concatenated together, without any other stuff in between. (Edit: That's not the case, after all).

Python's zlib module provides a Decompress object (returned by zlib.decompressobj()) that is actually pretty well suited to churning through your file. It includes an attribute unused_data whose documentation states clearly that:

The only way to determine where a string of compressed data ends is by actually decompressing it. This means that when compressed data is contained in part of a larger file, you can only find the end of it by reading data and feeding it followed by some non-empty string into a decompression object’s decompress() method until the unused_data attribute is no longer the empty string.

So, this is what you can do: Write a loop that reads through the data, say, one block at a time (no need to even read the entire 800 MB file into memory). Push each block to the Decompress object and check the unused_data attribute. When it becomes non-empty, you've got a complete object. Write it to disk, create a new decompress object and initialize it with the unused_data from the last one. This just might work (untested, so check for correctness).

Edit: Since you do have other data in your data stream, I've added a routine that aligns to the next ZLIB start. You'll need to find and fill in the two-byte sequence that identifies a ZLIB stream in your data. (Feel free to use your old code to discover it.) While there's no fixed ZLIB header in general, it should be the same for each stream since it consists of protocol options and flags, which are presumably the same for the entire run.

import zlib

# FILL IN: ZHEAD is two bytes with the actual ZLIB settings in the input
ZHEAD = CMF+FLG  

def findstart(header, buf, source):
    """Find `header` in str `buf`, reading more from `source` if necessary"""

    while buf.find(header) == -1:
        more = source.read(2**12)
        if len(more) == 0:  # EOF without finding the header
            return ''
        buf += more

    offset = buf.find(header)
    return buf[offset:]
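If you're not sure what the two ZHEAD bytes are, one way to find candidates (again a sketch; it only applies the generic RFC 1950 rule, nothing specific to your vendor's format) is to scan the first few kilobytes and print every offset whose byte pair passes the check, then confirm one of them in a hex editor:

# Probe the start of the file for byte pairs that satisfy the RFC 1950 header
# check, so you can pick the real CMF+FLG pair to use as ZHEAD.
with open(datafile, 'rb') as probe:
    head = bytearray(probe.read(4096))

for i in range(len(head) - 1):
    cmf, flg = head[i], head[i + 1]
    if (cmf & 0x0F) == 8 and (cmf * 256 + flg) % 31 == 0:
        print "possible ZHEAD %02x %02x at offset %d" % (cmf, flg, i)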

You can then advance to the start of the next stream. I've added a try/except pair since the same byte sequence might occur outside a stream:

source = open(datafile, 'rb')
skip_ = source.read(32) # Skip non-zlib header

buf = ''
block = None  # last chunk read from source; '' means we hit end of file
while True:
    decomp = zlib.decompressobj()
    # Find the start of the next stream
    buf = findstart(ZHEAD, buf, source)
    try:    
        stream = decomp.decompress(buf)
    except zlib.error:
        print "Spurious match(?) at output offset %d." % outfile.tell(),
        print "Skipping 2 bytes"
        buf = buf[2:]
        continue

    # Read until zlib decides it's seen a complete stream
    while decomp.unused_data == '':
        block = source.read(2**12)
        if len(block) > 0:       
            stream += decomp.decompress(block)
        else:
            break # We've reached EOF

    outfile.write(stream)
    buf = decomp.unused_data # Save for the next stream
    if block == '':
        break  # EOF

outfile.close()

PS 1. If I were you I'd write each XML stream into a separate file.

PS 2. You can test whatever you do on the first MB of your file, till you get adequate performance.
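If you take the PS 1 route, a tiny helper along these lines (the naming scheme is just an illustration) could replace the single outfile.write(stream) call in the loop above:

stream_count = 0

def write_stream(data):
    """Write one decompressed XML stream to its own numbered file (PS 1)."""
    global stream_count
    with open('stream_%05d.xml' % stream_count, 'wb') as f:
        f.write(data)
    stream_count += 1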

Answer 3 · 2019-06-02 11:15

From what you've described in the comments, it sounds like they're concatenating together the individual files they would have sent you separately. Which means each one has a 32-byte header you need to skip.

If you don't skip those headers, it would probably have exactly the behavior you described: If you get lucky, you'll get 32 invalid-header errors and then successfully parse the next stream. If you get unlucky, the 32 bytes of garbage will look like the start of a real stream, and you'll waste a whole lot of time parsing some arbitrary number of bytes until you finally get a decoding error. (If you get really unlucky, it'll actually decode successfully, giving you a giant hunk of garbage and eating up one or more subsequent streams.)

So, try just skipping 32 bytes after each stream finishes.
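Here's a rough sketch of that idea (untested; it assumes the layout really is a 32-byte header followed immediately by one zlib stream, repeated to the end of the file, and the function and constant names are just placeholders):

import zlib

CHUNK = 64 * 1024
HEADER_LEN = 32  # per-stream header described above

def decompress_skipping_headers(in_path, out_path):
    """Skip a 32-byte header, decompress the zlib stream that follows, repeat."""
    with open(in_path, 'rb') as src, open(out_path, 'wb') as dst:
        buf = src.read(CHUNK)
        while True:
            # Make sure we have the whole per-stream header, then drop it
            while len(buf) < HEADER_LEN:
                more = src.read(CHUNK)
                if not more:
                    return  # no more streams
                buf += more
            buf = buf[HEADER_LEN:]

            decomp = zlib.decompressobj()
            while True:
                dst.write(decomp.decompress(buf))
                if decomp.unused_data:
                    buf = decomp.unused_data  # the next header starts here
                    break
                buf = src.read(CHUNK)
                if not buf:
                    return  # EOF at (or inside) the last stream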

Or, if you have a more reliable way of detecting the start of the next stream (this is why I told you to print out the offsets and look at the data in a hex editor, and why alexis told you to look at the zlib spec), do that instead.
