I am trying to create a WAV file from the sound input I get from the default input device of my MacBook (the built-in mic). However, when I import the resulting file into Audacity as raw data, it is complete garbage.
First I initialize the audio file reference so I can later write to it in the audio unit input callback.
// struct contains audioFileID as member
MyAUGraphPlayer player = {0};
player.startingByte = 0;

// describe a PCM format for the audio file
AudioStreamBasicDescription format = { 0 };
format.mBytesPerFrame = 2;
format.mBytesPerPacket = 2;
format.mChannelsPerFrame = 1;
format.mBitsPerChannel = 16;
format.mFramesPerPacket = 1;
format.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsFloat;
format.mFormatID = kAudioFormatLinearPCM;

CFURLRef myFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFSTR("./test.wav"), kCFURLPOSIXPathStyle, false);
//CFShow(myFileURL);

CheckError(AudioFileCreateWithURL(myFileURL,
                                  kAudioFileWAVEType,
                                  &format,
                                  kAudioFileFlags_EraseFile,
                                  &player.recordFile), "AudioFileCreateWithURL failed");
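For reference, the relevant fields of my MyAUGraphPlayer struct look roughly like this (a sketch listing only the members the snippets in this question touch; the real struct has more, and the exact types of my own fields may differ slightly):

typedef struct MyAUGraphPlayer {
    AudioUnit inputUnit;                       // AUHAL unit attached to the default input device
    AudioStreamBasicDescription streamFormat;  // format the AUHAL delivers in the input callback
    AudioBufferList *inputBuffer;              // scratch buffers that AudioUnitRender fills
    AudioFileID recordFile;                    // the WAV file created above
    SInt64 startingByte;                       // running byte offset for AudioFileWriteBytes
} MyAUGraphPlayer;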
Here I malloc some buffers to hold the audio data coming in from the AUHAL unit.
UInt32 bufferSizeFrames = 0;
propertySize = sizeof(UInt32);
CheckError(AudioUnitGetProperty(player->inputUnit,
                                kAudioDevicePropertyBufferFrameSize,
                                kAudioUnitScope_Global,
                                0,
                                &bufferSizeFrames,
                                &propertySize), "Couldn't get buffer frame size from input unit");
UInt32 bufferSizeBytes = bufferSizeFrames * sizeof(Float32);
printf("buffer num of frames %u\n", bufferSizeFrames);

if (player->streamFormat.mFormatFlags & kAudioFormatFlagIsNonInterleaved) {
    int offset = offsetof(AudioBufferList, mBuffers[0]);
    int sizeOfAB = sizeof(AudioBuffer);
    int chNum = player->streamFormat.mChannelsPerFrame;
    int inputBufferSize = offset + sizeOfAB * chNum;

    // malloc one buffer list with a separate buffer per (non-interleaved) channel
    player->inputBuffer = (AudioBufferList *)malloc(inputBufferSize);
    player->inputBuffer->mNumberBuffers = chNum;

    for (UInt32 i = 0; i < chNum; i++) {
        player->inputBuffer->mBuffers[i].mNumberChannels = 1;
        player->inputBuffer->mBuffers[i].mDataByteSize = bufferSizeBytes;
        player->inputBuffer->mBuffers[i].mData = malloc(bufferSizeBytes);
    }
}
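I haven't shown it above, but player->streamFormat is meant to mirror whatever the AUHAL unit actually delivers. The standard way to get that (and roughly what I do) is to query the unit's output scope on element 1, which is where the input-device data comes out:

// sketch: ask the AUHAL which format it will hand the input callback
UInt32 propSize = sizeof(AudioStreamBasicDescription);
CheckError(AudioUnitGetProperty(player->inputUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output,
                                1,
                                &player->streamFormat,
                                &propSize), "Couldn't get AUHAL stream format");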
To check that the data is actually sensible, I render the audio unit and then log the first 4 bytes (one Float32 sample) of each set of 4096 frames in each callback, to verify that the values track what is going into the mic. As I talk into the mic, the logged values at that memory location do correspond to the input, so things seem to be working in that regard:
// render the input into our buffer
OSStatus inputProcErr = noErr;
inputProcErr = AudioUnitRender(player->inputUnit,
                               ioActionFlags,
                               inTimeStamp,
                               inBusNumber,
                               inNumberFrames,
                               player->inputBuffer);

// log the first Float32 sample of the rendered buffer as a rough sanity check
Float32 someDataL = *(Float32 *)(player->inputBuffer->mBuffers[0].mData);
printf("L2 input: % 1.7f \n", someDataL);
And finally, in the input callback I write the audio bytes to the file.
UInt32 numOfBytes = 4096 * player->streamFormat.mBytesPerFrame;
AudioFileWriteBytes(player->recordFile,
                    FALSE,
                    player->startingByte,
                    &numOfBytes,
                    &ioData[0].mBuffers[0].mData);
player->startingByte += numOfBytes;
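For context, all of the callback snippets above sit inside the input callback registered on the AUHAL unit (via kAudioOutputUnitProperty_SetInputCallback). The function name below is made up, but the signature is the standard AURenderCallback shape:

// sketch of the callback shell; the render, logging and write calls shown above go in the body
static OSStatus InputRenderProc(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData)
{
    MyAUGraphPlayer *player = (MyAUGraphPlayer *)inRefCon;
    // ... AudioUnitRender into player->inputBuffer, sanity logging,
    //     and AudioFileWriteBytes as shown above ...
    return noErr;
}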
So I have not figured out why the data comes out sounding glitchy, distorted, or not there at all. One thing that does seem right is that the resulting audio file is about as long as I actually recorded for (hitting return stops the audio units and closes the audio file).
I'm not sure what to look at next. Has anyone attempted writing to an audio file from the AUHAL callback and run into similar results?