Reading an entire binary file into Python

Posted 2020-06-16 02:33

I need to read a binary file into Python -- the contents are signed 16-bit integers, big endian.

Related Stack Overflow questions suggest how to pull in several bytes at a time, but is that the way to scale up to reading a whole file?

I thought to create a function like:

from numpy import *
import os
import struct

def readmyfile(filename, bytes=2, endian='>h'):
    totalBytes = os.path.getsize(filename)
    values = empty(totalBytes // bytes)
    with open(filename, 'rb') as f:
        for i in range(len(values)):
            values[i] = struct.unpack(endian, f.read(bytes))[0]
    return values

filecontents = readmyfile('filename')

But this is quite slow (the file is 165924350 bytes). Is there a better way?

Tags: python numpy
5 answers
Explosion°爆炸
#2 · 2020-06-16 03:17

You're reading and unpacking 2 bytes at a time:

values[i] = struct.unpack(endian, f.read(bytes))[0]

Why don't you read, say, 1024 bytes at a time?
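
A minimal sketch of that idea (read_in_chunks is just an illustrative helper; the chunk size is arbitrary, and it assumes the file length is an even number of bytes, which holds for a file of 16-bit samples):

import struct

def read_in_chunks(filename, chunk_size=1024):
    values = []
    with open(filename, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            count = len(chunk) // 2                      # number of 16-bit values in this chunk
            values.extend(struct.unpack('>%dh' % count, chunk))
    return values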

Melony?
#3 · 2020-06-16 03:21

I would read directly until EOF (that is, keep reading until you get back an empty string), which removes the need for range() and getsize().
Alternatively, using xrange (instead of range) should improve things, especially for memory usage.
Moreover, as Falmarri suggested, reading more data at a time would improve performance quite a lot.

That said, I would not expect miracles, also because I am not sure a list is the most efficient way to store that amount of data.
What about using a NumPy array and its facilities to read/write binary files? In the linked page there is a section about reading raw binary files, using numpyio.fread. I believe this should be exactly what you need.
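
(numpyio.fread belonged to older SciPy releases; current NumPy provides the same facility as numpy.fromfile. A one-line sketch, assuming the whole file is nothing but big-endian signed 16-bit values:)

import numpy as np

data = np.fromfile('filename', dtype='>i2')   # '>i2' = big-endian signed 16-bit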

Note: personally, I have never used NumPy; however, its main raison d'être is exactly the handling of big sets of data, and that is what you are doing in your question.

Emotional °昔
#4 · 2020-06-16 03:27

I have had the same kind of problem, although in my case I had to convert a very strange binary format: a 500 MB file with interlaced blocks of 166 elements that were 3-byte signed integers. So I also had the problem of converting from 24-bit to 32-bit signed integers, which slowed things down a little.

I resolved it using NumPy's memmap (which is just a handy way of using Python's mmap) and struct.unpack on large chunks of the file.

With this solution I'm able to convert (read, do stuff, and write to disk) the entire file in about 90 seconds (timed with time.clock()).

I could upload part of the code.
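
In the meantime, for the OP's much simpler layout (a flat stream of big-endian 16-bit values), a minimal sketch of the memmap idea might look like this (this is not the original code):

import numpy as np

# Map the file without loading it all into RAM; the shape is inferred from the file size.
data = np.memmap('filename', dtype='>i2', mode='r')

chunk = np.array(data[0:100000])   # copy a slice into ordinary memory when needed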

来,给爷笑一个
#5 · 2020-06-16 03:29

Use numpy.fromfile.
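
For the OP's format that is essentially a one-liner; the dtype string mirrors the '>h' struct format (a sketch, assuming the file holds only the raw samples):

import numpy as np

data = np.fromfile('filename', dtype='>h')   # big-endian signed 16-bit integers

# Optionally convert to the machine's native byte order for faster arithmetic:
data = data.astype(np.int16)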

乱世女痞
#6 · 2020-06-16 03:32

I think the bottleneck you have here is twofold.

Depending on your OS and disc controller, calls to f.read(2) on a biggish file are usually buffered efficiently. In other words, the OS will read one or two sectors (disc sectors are usually several KB) off disc into memory, because that is not much more expensive than reading 2 bytes from the file, and the extra bytes are cached in memory ready for the next read of that file. Don't rely on that behaviour -- it might be your bottleneck -- but I think there are other issues here.

I am more concerned about the one-value-at-a-time conversions to a short and the one-value-at-a-time assignments into numpy. These are not cached at all. You can keep all the shorts in a Python list of ints and convert the whole list to numpy when (and if) needed. You can also make a single call to struct.unpack_from to convert everything in a buffer rather than one short at a time.

Consider:

#!/usr/bin/python

import random
import os
import struct
import numpy

def read_wopper(filename, bytes=2, endian='>h'):
    buf_size = 1024 * 2
    new_buf = []

    with open(filename, 'rb') as f:
        while True:
            st = f.read(buf_size)
            l = len(st)
            if l == 0:
                break
            # Unpack the whole chunk in one call, e.g. '>1024h'
            fmt = endian[0] + str(l // bytes) + endian[1]
            new_buf += struct.unpack_from(fmt, st)

    na = numpy.array(new_buf)
    return na

fn = 'bigintfile'

def createmyfile(filename):
    bytes = 165924350
    endian = '>h'
    f = open(filename, "wb")
    count = 0

    try:
        for i in range(0, bytes // 2):
            # The first 32,767 values are [0,1,2..0x7FFF]
            # to allow testing the read values with new_buf[value<0x7FFF]
            value = count if count < 0x7FFF else random.randint(-32767, 32767)
            count += 1
            f.write(struct.pack(endian, value & 0x7FFF))

    except IOError:
        print "file error"

    finally:
        f.close()

if not os.path.exists(fn):
    print "creating file, don't count this..."
    createmyfile(fn)
else:    
    read_wopper(fn)
    print "Done!"

I created a file of random signed shorts, 165,924,350 bytes (158.24 MB), which corresponds to 82,962,175 signed 2-byte shorts. With this file, the read_wopper function above ran in:

real        0m15.846s
user        0m12.416s
sys         0m3.426s

If you don't need the shorts to be a numpy array, this function runs in 6 seconds. All this on OS X, Python 2.6.1 64-bit, 2.93 GHz Core i7, 8 GB RAM. If you change buf_size=1024*2 in read_wopper to buf_size=2**16, the run time is:

real        0m10.810s
user        0m10.156s
sys         0m0.651s

So your main bottleneck, I think, is the per-value calls to unpack, not your 2-byte reads from disc. You might want to make sure that your data files are not fragmented and, if you are using OS X, that your free disc space is not fragmented.

Edit: I posted the full code to create and then read a binary file of ints. On my iMac, I consistently get under 15 seconds to read the file of random ints. It takes about 1:23 to create, since the creation is done one short at a time.
