I have a chunk of data (void*) which is 2 ch, 44100 Hz, 'lpcm' 8.24-bit little-endian signed integer, deinterleaved. I need to record that chunk to a file as 2 ch, 44100 Hz, 'lpcm' 16-bit little-endian signed integer.
How do I convert the data? I imagine I need to do something like this:
size_t dataByteSize = sizeof(SInt16) * samplesCount;
UInt32 *source = ...;
SInt16 *dest = (SInt16 *)malloc(dataByteSize);
for (int i = 0; i < samplesCount; ++i) {
    UInt32 sourceSample = source[i];
    dest[i] = (SInt16)(sourceSample >> 24); // keep only the top bits?
}
But how do I convert deinterleaved to interleaved?
I tested the popular method of shifting right by 9 bits, and for some reason it did not work for me: I further use the result to encode to Ogg, and the resulting Ogg was very noisy. What did work is a function based on a method I found in audiograph (https://github.com/tkzic/audiograph).
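The function itself isn't quoted here, so the following is only a hypothetical sketch of a conversion that goes through float instead of bit-shifting; the name Fixed824ToSInt16, the clipping, and the scaling constant are my assumptions, not code from audiograph:

#include <stdint.h>

/* Hypothetical sketch: convert one 8.24 fixed-point sample (SInt32) to
 * a 16-bit sample (SInt16) via an intermediate float. The name and the
 * clipping are illustrative assumptions, not audiograph's actual code. */
static inline int16_t Fixed824ToSInt16(int32_t sample) {
    float f = (float)sample / (float)(1 << 24); /* 8.24 -> float in (-1, 1) */
    if (f >  1.0f) f =  1.0f;                   /* clip, just in case */
    if (f < -1.0f) f = -1.0f;
    return (int16_t)(f * 32767.0f);
}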
Ok, I've spent some time investigating the issue and realized that the question contained too little information to be answered =) So here's the deal:
First, about non-interleaved data: I initially thought it would look like this: l1 l2 l3 l4 ... ln r1 r2 r3 r4 ... rn. But it turned out that in my data the right channel was simply missing. It wasn't non-interleaved data at all; it was plain mono data. Genuinely non-interleaved data should always come in multiple buffers, one per channel. Interleaved data looks like: l1 r1 l2 r2 l3 r3 l4 r4 ...
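Interleaving itself is then just zipping the per-channel buffers together frame by frame; a minimal sketch (the function name is mine):

#include <stdint.h>
#include <stddef.h>

/* Merge two deinterleaved (one buffer per channel) streams into one
 * interleaved buffer laid out as l1 r1 l2 r2 ... */
static void InterleaveStereo(const int16_t *left, const int16_t *right,
                             int16_t *out, size_t frames) {
    for (size_t i = 0; i < frames; ++i) {
        out[2 * i]     = left[i];  /* left sample of frame i  */
        out[2 * i + 1] = right[i]; /* right sample of frame i */
    }
}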
Second, about the actual transformation: it all depends on the range of the samples. In my case (and, if I'm correct, in any case where Core Audio is involved) fixed-point 8.24 values range within (-1, 1), while 16-bit signed values range within [-32768, 32767]. So an 8.24 value will always have its first 8 bits set either to 0 (when it is positive) or to 1 (when it is negative). Those first 8 bits should be removed (preserving the sign, of course). You can also drop as many trailing bits as you like; that merely reduces the quality of the sound, it won't ruin it. When converting to the 16-bit signed format, bits 8-22 (counting from the most significant bit, 15 bits in total) contain the data we need for the SInt16, and bit 7 serves as the sign bit. So to convert 8.24 to SInt16 you just need to shift 9 bits right (9 so that the sign is preserved) and cast the result to SInt16.
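In code the whole conversion is one arithmetic shift per sample; a minimal sketch (the function name is mine; the shift has to happen on a signed 32-bit value, and on common compilers >> on a signed int is an arithmetic shift that fills with the sign bit):

#include <stdint.h>

/* 8.24 fixed point -> SInt16: arithmetic right shift by 9, then cast.
 * The sample must be treated as signed (SInt32/int32_t) so the shift
 * preserves the sign. */
static inline int16_t Sample824ToSInt16(int32_t sample) {
    return (int16_t)(sample >> 9);
}

For example, on concrete bit patterns (the 16 bits in parentheses are the resulting SInt16):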
11111111 10110110 11101110 10000011 -> 11111111 11111111 (11011011 01110111)
00000000 01101111 00000000 11000001 -> 00000000 00000000 (00110111 10000000)
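A quick way to check those two rows (the same bit patterns, written in hex, run through the shift sketch above):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The two example patterns above, written as hex. */
    int32_t a = (int32_t)0xFFB6EE83; /* negative 8.24 sample */
    int32_t b = (int32_t)0x006F00C1; /* positive 8.24 sample */
    printf("%d\n", (int16_t)(a >> 9)); /* 0xDB77 as SInt16: prints -9353 */
    printf("%d\n", (int16_t)(b >> 9)); /* 0x3780 as SInt16: prints 14208 */
    return 0;
}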
That's it. Nothing more than iterating through the array and shifting bits right. Hope that's going to save someone a couple of hours.
The best description can be found at http://lists.apple.com/archives/coreaudio-api/2011/Feb/msg00083.html