Why do we use only the first buffer in aurioTouch

Published 2019-05-10 11:00

Question:

I'm investigating the aurioTouch2 sample code.
I noticed that when we analyze the audio data, we use only the first buffer of that data and never the other buffers. In the function void FFTBufferManager::GrabAudioData(AudioBufferList *inBL):

    UInt32 bytesToCopy = min(inBL->mBuffers[0].mDataByteSize, mAudioBufferSize - mAudioBufferCurrentIndex * sizeof(Float32));
    memcpy(mAudioBuffer+mAudioBufferCurrentIndex, inBL->mBuffers[0].mData, bytesToCopy);

and in the function

    static OSStatus PerformThru(
                                void                        *inRefCon,
                                AudioUnitRenderActionFlags  *ioActionFlags,
                                const AudioTimeStamp        *inTimeStamp,
                                UInt32                      inBusNumber,
                                UInt32                      inNumberFrames,
                                AudioBufferList             *ioData)

we again read only the first buffer:

    if (THIS->displayMode == aurioTouchDisplayModeOscilloscopeWaveform)
    {
        AudioConverterConvertComplexBuffer(THIS->audioConverter, inNumberFrames, ioData, THIS->drawABL);
        SInt8 *data_ptr = (SInt8 *)(THIS->drawABL->mBuffers[0].mData);
    }

The question is: why do we ignore the data in inBL->mBuffers[1].mData?
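
For context, here is a minimal sketch, not taken from the sample, of how one could walk every buffer in the AudioBufferList if all channels were needed. It assumes non-interleaved Float32 samples, which is what the sizeof(Float32) arithmetic in GrabAudioData suggests; InspectBuffers is a hypothetical helper, and you would not normally do I/O like printf on the real-time render thread:

    #include <AudioToolbox/AudioToolbox.h>
    #include <stdio.h>

    // Walk every channel buffer delivered to the render callback.
    static void InspectBuffers(const AudioBufferList *inBL, UInt32 inNumberFrames)
    {
        for (UInt32 i = 0; i < inBL->mNumberBuffers; ++i) {
            const Float32 *samples = (const Float32 *)inBL->mBuffers[i].mData;
            // mDataByteSize is the number of valid bytes in this buffer; for a
            // non-interleaved Float32 stream it equals inNumberFrames * sizeof(Float32).
            printf("buffer %u: %u bytes, first sample %f\n",
                   (unsigned)i, (unsigned)inBL->mBuffers[i].mDataByteSize,
                   inNumberFrames > 0 ? samples[0] : 0.0f);
        }
    }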

Answer 1:

Since there's only one mic on your iPhone, the samples in the two stereo channel buffers (L and R) are identical. Because the second buffer is just redundant (or, in some configurations, empty), the data there doesn't need to be analyzed (again).
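
A quick way to check that claim for yourself is to compare the two channel buffers inside the render callback. This is a minimal sketch, not part of the sample; it assumes a non-interleaved stream with at least two buffers, and ChannelsAreIdentical is a hypothetical helper:

    #include <AudioToolbox/AudioToolbox.h>
    #include <stdbool.h>
    #include <string.h>

    // Returns true if the first two channel buffers carry byte-identical samples.
    static bool ChannelsAreIdentical(const AudioBufferList *inBL)
    {
        if (inBL->mNumberBuffers < 2)
            return true;  // mono stream: nothing to compare
        const AudioBuffer *left  = &inBL->mBuffers[0];
        const AudioBuffer *right = &inBL->mBuffers[1];
        if (left->mDataByteSize != right->mDataByteSize)
            return false;
        return memcmp(left->mData, right->mData, left->mDataByteSize) == 0;
    }

With the built-in mic this should return true on every callback; if it ever returns false, the second buffer actually carries different data and would be worth analyzing.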



Answer 2:

Maybe I'm wrong, but there is no difference in which buffer you use. You have only two buffers: mBuffers[0] and mBuffers[1]. I tried using buffer 0 for the spectrogram and then buffer 1 (producing the same sounds). The left part of the image was made using buffer 0; the right part using buffer 1 (the right peak was made while taking the screenshot).

So, as you can see, there is no difference.
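
If you want to be safe in configurations where the channels might actually differ (for example, with an external stereo input), one option is to average all channel buffers into a mono scratch buffer before feeding the FFT. Below is a rough sketch, not part of aurioTouch; it assumes non-interleaved Float32 buffers, and DownmixToMono and monoOut are hypothetical names:

    #include <AudioToolbox/AudioToolbox.h>

    // Average all channel buffers into a caller-provided mono buffer.
    // Assumes each buffer holds inNumberFrames non-interleaved Float32 samples.
    static void DownmixToMono(const AudioBufferList *inBL, UInt32 inNumberFrames,
                              Float32 *monoOut)
    {
        for (UInt32 frame = 0; frame < inNumberFrames; ++frame) {
            Float32 sum = 0.0f;
            for (UInt32 ch = 0; ch < inBL->mNumberBuffers; ++ch) {
                const Float32 *samples = (const Float32 *)inBL->mBuffers[ch].mData;
                sum += samples[frame];
            }
            monoOut[frame] = sum / (Float32)inBL->mNumberBuffers;
        }
    }

For the built-in mic, where both channels hold the same samples, this produces the same result as reading mBuffers[0] directly, which is why the sample code can get away with the simpler approach.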