I'm trying to copy a chunk from one binary file into a new file. I have the byte offset and length of the chunk I want to grab.
I have tried using the dd utility, but it seems to read and discard the data up to the offset rather than just seeking there (I guess because dd is designed for copying/converting blocks of data). This makes it quite slow, and slower the higher the offset is. This is the command I tried:
dd if=inputfile ibs=1 skip=$offset count=$datalength of=outputfile
I guess I could write a small perl/python/whatever script to open the file, seek to the offset, then read and write the required amount of data in chunks.
Is there a utility that supports something like this?
You can use tail -c+N to trim the leading N bytes from the input, then head -cM to output only the first M bytes of its input. So using your variables, it would probably be:
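tail -c+$(($offset+1)) inputfile | head -c$datalength > outputfile

(A sketch: tail -c+N starts output at byte N counting from 1, hence the +1. Note that this pipeline still reads through the leading bytes rather than seeking.)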
Ah, didn't see it had to seek. Leaving this as CW.
Yes, it's awkward to do this with dd today. We're considering adding skip_bytes and count_bytes parameters to dd in coreutils to help. The following should work, though:
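A sketch of that approach (untested here; assumes GNU dd and the $offset/$datalength variables from the question): run three dd invocations in a subshell so they share one input file descriptor. The first seeks to the offset without copying anything, the second copies the bulk in large blocks, and the third copies the remaining partial block.

(
  dd bs=1 skip=$offset count=0                # seek only: count=0 copies nothing
  dd bs=64k count=$(($datalength / 65536))    # copy the bulk in 64 KiB blocks
  dd bs=$(($datalength % 65536)) count=1      # copy the remainder
) < inputfile > outputfile

The 64 KiB block size is arbitrary; drop the last dd if $datalength is an exact multiple of it, since bs=0 is an error.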
According to man dd on FreeBSD, the skip operand is implemented with an lseek(2) on input that supports seeks; otherwise the input data is read and discarded. Using dtruss I verified that dd does use lseek() on an input file on Mac OS X. If you just think that it is slow, then I agree with the comment that this would be due to the 1-byte block size.

You can try the hexdump command:
Example: read 100 bytes from 'mycorefile' starting at offset 100.
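A sketch (flags as in BSD hexdump: -s skips input bytes, -n limits how many bytes are read, -v prints all lines, and the -e format emits one hex byte per line):

hexdump -v -e '1/1 "%02x\n"' -s 100 -n 100 mycorefile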
Then, using another script, join all the lines of the output into a single line if you want.
If you simply want to see the contents:
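For example (-C selects hexdump's canonical hex+ASCII display):

hexdump -C -s 100 -n 100 mycorefile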
Thanks for the other answers. Unfortunately, I'm not in a position to install additional software, so the ddrescue option is out. The head/tail solution is interesting (I didn't realise you could supply + to tail), but scanning through the data makes it quite slow.
I ended up writing a small python script to do what I wanted. The buffer size should probably be tuned to match some external buffer setting, but the value below is performant enough on my system.
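A minimal sketch of such a script (the exact buffer value and argument handling here are illustrative):

#!/usr/bin/env python
# Seek to the offset, then copy the requested number of bytes in
# fixed-size chunks. Buffer size and CLI are illustrative choices.
import sys

BUFFER_SIZE = 64 * 1024  # tune to match your system's I/O buffering

def copy_chunk(inpath, outpath, offset, length):
    with open(inpath, 'rb') as fin, open(outpath, 'wb') as fout:
        fin.seek(offset)              # jump straight to the chunk
        remaining = length
        while remaining > 0:
            data = fin.read(min(BUFFER_SIZE, remaining))
            if not data:              # EOF before the full length was read
                break
            fout.write(data)
            remaining -= len(data)

if __name__ == '__main__':
    # usage: extract.py inputfile outputfile offset length
    copy_chunk(sys.argv[1], sys.argv[2], int(sys.argv[3]), int(sys.argv[4]))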
You can use the -i (input position) and -s (size) options of ddrescue.
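For example (a sketch: -i sets the byte offset in the input, -s the number of bytes to copy, and -o 0 places the data at the start of the output file, since by default ddrescue writes at the same offset it read from):

ddrescue -i $offset -s $datalength -o 0 inputfile outputfile

ddrescue seeks on a regular file, so it doesn't read through the leading data.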