C#'s BinaryReader has a method that, according to MSDN, reads an integer encoded as a "seven-bit integer" and then reads a string with that length.

Is there clear documentation for the seven-bit integer format? I have a rough understanding that the MSB or the LSB marks whether there are more bytes to read and that the remaining bits are the data, but I'd be glad for something more exact.

Even better, is there a C implementation for reading and writing numbers in this format?
The documentation for the BinaryWriter.Write7BitEncodedInt method contains the description: the lowest 7 bits of each byte encode the next 7 bits of the number, and the highest bit is set when there is another byte following.
The format is described here: http://msdn.microsoft.com/en-us/library/system.io.binarywriter.write7bitencodedint.aspx
I had to explore this 7-bit format, too. In one of my projects I pack some data into files using C#'s BinaryWriter and then unpack it again with BinaryReader, which works nicely.

Later I needed to implement a reader for this project's packed files in Java, too. Java has a class named DataInputStream (in the java.io package), which has some similar methods. Unfortunately, DataInputStream interprets data very differently from C#'s BinaryReader.

To solve my problem I ported C#'s BinaryReader to Java myself by writing a class that extends java.io.DataInputStream, with a method that does exactly the same as C#'s BinaryReader.ReadString().
Well, the documentation for BinaryReader.Read7BitEncodedInt already says that it expects the value to be written with BinaryWriter.Write7BitEncodedInt, and that method's documentation details the format:
So the integer 1259551277, in binary 1001011000100110011101000101101, will be converted into that 7-bit format as follows. Split the bits into groups of 7, starting at the least-significant end: 0000100 1011000 1001100 1110100 0101101. The groups are then written out from the lowest group to the highest, with the high bit set on every byte except the last one: 10101101 11110100 11001100 11011000 00000100, i.e. the bytes AD F4 CC D8 04 in hex.
I'm not confident enough in my C skills right now to provide a working implementation, though. But it's not very hard to do, based on that description.
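For what it's worth, here is a sketch of what such a C implementation could look like, based on that description. The function names and the buffer-based interface are my own invention, not from any library:

```c
#include <stddef.h>
#include <stdint.h>

/* Writes value to buf in 7-bit-encoded form and returns the number of
   bytes written (1..5); buf must have room for 5 bytes. As in .NET, a
   negative int32 is treated as its unsigned two's-complement bit
   pattern, so it always takes 5 bytes. */
size_t write_7bit_encoded_int(uint8_t *buf, int32_t value)
{
    uint32_t v = (uint32_t)value;
    size_t n = 0;
    while (v >= 0x80) {
        buf[n++] = (uint8_t)(v | 0x80); /* low 7 bits + continuation bit */
        v >>= 7;
    }
    buf[n++] = (uint8_t)v; /* last byte: high bit clear */
    return n;
}

/* Reads a 7-bit-encoded int from at most len bytes of buf. Returns the
   number of bytes consumed, or 0 on error (truncated input, or more
   than 5 bytes, which is not a valid encoded int32). */
size_t read_7bit_encoded_int(const uint8_t *buf, size_t len, int32_t *out)
{
    uint32_t v = 0;
    size_t n = 0;
    int shift = 0;
    for (;;) {
        if (n == len || n == 5)
            return 0;
        uint8_t b = buf[n++];
        v |= (uint32_t)(b & 0x7F) << shift;
        if ((b & 0x80) == 0)
            break;
        shift += 7;
    }
    *out = (int32_t)v;
    return n;
}
```

With this, writing 1259551277 produces the five bytes AD F4 CC D8 04 from the example above, and 127 fits into the single byte 7F.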
Basically, the idea behind a 7-bit encoded Int32 is to reduce the number of bytes required for small values. It works like this: the value is written out 7 bits at a time, starting with the least-significant bits. Whenever the remaining value does not fit into those 7 bits, the highest bit of the byte is set to signal that another byte follows, and the value is shifted right by 7 bits. At most 5 bytes are ever needed (Int32.MaxValue would not require more than 5 bytes when only 1 bit is stolen from each byte). If the highest bit of the 5th byte is still set, you've read something that isn't a 7-bit encoded Int32.

Note that since it is written byte-by-byte, endianness doesn't matter at all for these values. The following number of bytes are required for a given range of values:

- 1 byte: 0 to 127
- 2 bytes: 128 to 16,383
- 3 bytes: 16,384 to 2,097,151
- 4 bytes: 2,097,152 to 268,435,455
- 5 bytes: 268,435,456 to 2,147,483,647 (Int32.MaxValue) and -2,147,483,648 (Int32.MinValue) to -1

As you can see, the implementation is kinda dumb and always requires 5 bytes for negative values, as the sign bit is the 32nd bit of the original value, always ending up in the 5th byte.
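A tiny C helper (my own naming, only a sketch) makes those byte counts easy to check: the count depends only on the position of the highest set bit of the unsigned bit pattern.

```c
#include <stdint.h>

/* Number of bytes the 7-bit encoding needs for a value: the unsigned
   two's-complement bit pattern is emitted 7 bits at a time, so one more
   byte is needed for each additional 7 bits above the first group. */
int encoded_length(int32_t value)
{
    uint32_t v = (uint32_t)value;
    int n = 1;
    while (v >= 0x80) {
        v >>= 7;
        n++;
    }
    return n;
}
```

For example, encoded_length(127) is 1, encoded_length(128) is 2, and encoded_length(-1) is 5, matching the ranges above.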
Thus, I do not recommend it for negative values or values bigger than ~250,000,000. I've only seen it used internally for the string length prefix of .NET strings (those you can read/write with BinaryReader.ReadString and BinaryWriter.Write(String)), where it describes the length in bytes of the encoded string that follows, which is only ever positive.

While you can look up the original .NET source, I use different implementations in my BinaryData library.
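Since the encoding mostly shows up as that string length prefix, here is how a C reader for such length-prefixed strings might look. This is only a sketch with my own naming; note that the prefix counts the bytes of the encoded string data, not characters:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Reads a .NET-style length-prefixed string from buf: a 7-bit encoded
   length in bytes, followed by that many bytes of UTF-8 data (this is
   the layout BinaryWriter.Write(String) produces). Returns a newly
   allocated NUL-terminated copy, or NULL on malformed input. */
char *read_dotnet_string(const uint8_t *buf, size_t len)
{
    uint32_t strlen_bytes = 0;
    size_t n = 0;
    int shift = 0;
    uint8_t b;
    do {
        if (n == len || n == 5)
            return NULL; /* truncated or invalid length prefix */
        b = buf[n++];
        strlen_bytes |= (uint32_t)(b & 0x7F) << shift;
        shift += 7;
    } while (b & 0x80);

    if (strlen_bytes > len - n)
        return NULL; /* prefix promises more data than the buffer holds */
    char *s = malloc(strlen_bytes + 1);
    if (!s)
        return NULL;
    memcpy(s, buf + n, strlen_bytes);
    s[strlen_bytes] = '\0';
    return s;
}
```

For short strings the prefix is a single byte, e.g. the bytes 05 48 65 6C 6C 6F decode to "Hello"; the caller frees the returned string.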