Issues with CAPlayThrough Example

Published 2019-08-12 04:36

Question:

I am trying to learn Xcode Core Audio and stumbled upon this example:

https://developer.apple.com/library/mac/samplecode/CAPlayThrough/Introduction/Intro.html#//apple_ref/doc/uid/DTS10004443

My intention is to capture the raw audio. Every time I hit a breakpoint, I lose the audio, since the example uses a CARingBuffer.

  1. How would you remove the time factor? I don't need real-time audio.
  2. Since it uses a CARingBuffer, it should keep writing to the same memory location, so why don't I hear the audio when I stop at a breakpoint?

I am reading the Learning Core Audio book, but so far I cannot figure out this part of the code:

CARingBufferError CARingBuffer::Store(const AudioBufferList *abl, UInt32 framesToWrite, SampleTime startWrite)
{
    if (framesToWrite == 0)
        return kCARingBufferError_OK;

    if (framesToWrite > mCapacityFrames)
        return kCARingBufferError_TooMuch;      // too big!

    SampleTime endWrite = startWrite + framesToWrite;

    if (startWrite < EndTime()) {
        // going backwards, throw everything out
        SetTimeBounds(startWrite, startWrite);
    } else if (endWrite - StartTime() <= mCapacityFrames) {
        // the buffer has not yet wrapped and will not need to
    } else {
        // advance the start time past the region we are about to overwrite
        SampleTime newStart = endWrite - mCapacityFrames;   // one buffer of time behind where we're writing
        SampleTime newEnd = std::max(newStart, EndTime());
        SetTimeBounds(newStart, newEnd);
    }

    // write the new frames
    Byte **buffers = mBuffers;
    int nchannels = mNumberChannels;
    int offset0, offset1, nbytes;
    SampleTime curEnd = EndTime();

    if (startWrite > curEnd) {
        // we are skipping some samples, so zero the range we are skipping
        offset0 = FrameOffset(curEnd);
        offset1 = FrameOffset(startWrite);
        if (offset0 < offset1)
            ZeroRange(buffers, nchannels, offset0, offset1 - offset0);
        else {
            ZeroRange(buffers, nchannels, offset0, mCapacityBytes - offset0);
            ZeroRange(buffers, nchannels, 0, offset1);
        }
        offset0 = offset1;
    } else {
        offset0 = FrameOffset(startWrite);
    }

    offset1 = FrameOffset(endWrite);
    if (offset0 < offset1)
        StoreABL(buffers, offset0, abl, 0, offset1 - offset0);
    else {
        nbytes = mCapacityBytes - offset0;
        StoreABL(buffers, offset0, abl, 0, nbytes);
        StoreABL(buffers, 0, abl, nbytes, offset1);
    }

    // now update the end time
    SetTimeBounds(StartTime(), endWrite);

    return kCARingBufferError_OK;   // success
}
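For context, the offset arithmetic that trips most readers up is the split write near the end of Store(): a logical sample position is mapped into the circular byte array, and a write that runs past the end is performed as two copies. The sketch below reduces that pattern to a standalone function; the names (CircularWrite, startByte) are illustrative and not part of the Apple sample.

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Sketch of the split-write pattern used by CARingBuffer::Store():
// map a monotonically increasing logical position into the circular
// array, and split a write that crosses the end into two memcpy calls
// (mirroring the StoreABL pair in the wrapped branch above).
void CircularWrite(std::vector<unsigned char>& ring,
                   std::size_t startByte,            // logical start, never wraps
                   const unsigned char* src, std::size_t nbytes) {
    if (nbytes == 0)
        return;                                      // like the framesToWrite == 0 guard
    const std::size_t cap = ring.size();
    const std::size_t offset0 = startByte % cap;            // like FrameOffset(startWrite)
    const std::size_t offset1 = (startByte + nbytes) % cap; // like FrameOffset(endWrite)
    if (offset0 < offset1) {
        // contiguous region: a single copy suffices
        std::memcpy(ring.data() + offset0, src, nbytes);
    } else {
        // wrapped: fill to the end of the array, then continue at the front
        const std::size_t tail = cap - offset0;
        std::memcpy(ring.data() + offset0, src, tail);
        std::memcpy(ring.data(), src + tail, offset1);
    }
}
```

For example, writing 5 bytes at logical position 6 into an 8-byte ring lands the first 2 bytes at indices 6 and 7 and the remaining 3 at indices 0 through 2, overwriting whatever was there, which is exactly why paused consumers lose data.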

Thanks!

Answer 1:

If I understood the question correctly, the signal is lost while the input unit (the producer) is halted at a breakpoint. I presume this is the expected behavior: Core Audio is a pull-model engine running on a real-time thread. When your producer hits a breakpoint, the ring buffer empties; the output unit (the consumer) keeps running but gets nothing from the buffer while the playthrough chain is interrupted, hence the silence.

Perhaps this code from the example is not really the simplest one: AFAICT it also zeroes the audio buffers when the ring buffer overruns or underruns. The term "raw audio" in the question is also not self-explanatory; I'm not sure what it means. I would suggest learning async I/O with simpler circular buffers first. There are a few of them (without obligatory time values) on GitHub.
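To make the "simpler circular buffer" suggestion concrete, here is a minimal single-producer/single-consumer ring buffer sketch with no time stamps. Unlike CARingBuffer, it never overwrites unread data: Write() reports how many samples actually fit, so if the consumer is paused at a breakpoint the producer simply stops storing instead of trampling the region being read. All names here are my own, not from any library.

```cpp
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <vector>

// Minimal SPSC ring buffer for float samples, with no time values.
// Head and tail are monotonically increasing sample counters; the
// difference between them is the number of unread samples.
class SimpleRingBuffer {
public:
    explicit SimpleRingBuffer(std::size_t capacity)
        : mData(capacity), mHead(0), mTail(0) {}

    // Producer side: returns the number of samples actually written
    // (less than count when the buffer is nearly full).
    std::size_t Write(const float* src, std::size_t count) {
        const std::size_t head = mHead.load(std::memory_order_relaxed);
        const std::size_t tail = mTail.load(std::memory_order_acquire);
        const std::size_t freeSpace = mData.size() - (head - tail);
        const std::size_t n = std::min(count, freeSpace);
        for (std::size_t i = 0; i < n; ++i)
            mData[(head + i) % mData.size()] = src[i];
        mHead.store(head + n, std::memory_order_release);
        return n;
    }

    // Consumer side: returns the number of samples actually read.
    std::size_t Read(float* dst, std::size_t count) {
        const std::size_t tail = mTail.load(std::memory_order_relaxed);
        const std::size_t head = mHead.load(std::memory_order_acquire);
        const std::size_t avail = head - tail;
        const std::size_t n = std::min(count, avail);
        for (std::size_t i = 0; i < n; ++i)
            dst[i] = mData[(tail + i) % mData.size()];
        mTail.store(tail + n, std::memory_order_release);
        return n;
    }

private:
    std::vector<float> mData;
    std::atomic<std::size_t> mHead;  // total samples ever written
    std::atomic<std::size_t> mTail;  // total samples ever read
};
```

This also answers the "remove the time factor" part of the question in spirit: if you just want to capture raw samples rather than play them through in real time, you can let Write() back-pressure the producer (or copy the samples to a file/growable buffer) instead of stamping them with sample times the way CARingBuffer does.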

Please also be so kind as to format the source code for easier reading.