Playing generated audio on an iPhone


Question:

As a throwaway project for the iPhone to get me up to speed with Objective-C and the iPhone libraries, I've been trying to create an app that will play different kinds of random noise.

I've been constructing the noise as an array of floats normalized to [-1, 1].
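Concretely, the generation is along these lines (a simplified sketch, not my exact code):

// Sketch: fill a buffer with uniform white noise in [-1, 1].
#include <stdlib.h>

void fillWithWhiteNoise(float *samples, int count) {
    for (int i = 0; i < count; ++i)
        samples[i] = 2.0f * ((float)rand() / (float)RAND_MAX) - 1.0f;
}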

Where I'm stuck is in playing that generated data. It seems like this should be fairly simple, but I've looked into using AudioUnit and AVAudioPlayer, and neither of these seem optimal.

AudioUnit apparently requires a few hundred lines of code even for this simple task, and AVAudioPlayer seems to require converting the audio into something Core Audio can understand (as best I can tell, that means LPCM wrapped in a WAV file).

Am I overlooking something, or are these really the best ways to play some sound data stored in array form?

Answer 1:

Here's some code that uses Audio Queue Services, adapted from the SpeakHere example. I've only pasted the relevant parts, so there may be something dangling here or there, but it should be a good start if you want to take this approach:

AudioStreamBasicDescription format;
memset(&format, 0, sizeof(format));
format.mSampleRate = 44100;
format.mFormatID = kAudioFormatLinearPCM;
format.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
format.mChannelsPerFrame = 1;
format.mBitsPerChannel = 16;
format.mBytesPerFrame = (format.mBitsPerChannel / 8) * format.mChannelsPerFrame;
format.mFramesPerPacket = 1;
format.mBytesPerPacket = format.mBytesPerFrame * format.mFramesPerPacket;

// kNumberBuffers and mBuffers live on the surrounding class in SpeakHere;
// declaring them here keeps the snippet self-contained.
const int kNumberBuffers = 3;
AudioQueueBufferRef mBuffers[kNumberBuffers];

AudioQueueRef queue;

AudioQueueNewOutput(&format,
            OutputCallback,   // defined below
            this,             // opaque reference to whatever you like
            CFRunLoopGetCurrent(),
            kCFRunLoopCommonModes,
            0,
            &queue);

const int bufferSize = 0xA000;  // 40 KB - around 1/2 sec of 44.1kHz 16-bit mono PCM
for (int i = 0; i < kNumberBuffers; ++i)
    AudioQueueAllocateBufferWithPacketDescriptions(queue, bufferSize, 0, &mBuffers[i]);

AudioQueueSetParameter(queue, kAudioQueueParam_Volume, 1.0);

UInt32 category = kAudioSessionCategory_MediaPlayback;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);

AudioSessionSetActive(true);

// prime the queue with some data before starting
for (int i = 0; i < kNumberBuffers; ++i)
    OutputCallback(this, queue, mBuffers[i]);

AudioQueueStart(queue, NULL);

The code above refers to this output callback. Each time this callback executes, fill the buffer passed in with your generated audio. Here, I'm filling it with random noise.

void OutputCallback(void* inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer) {
    // Fill the buffer that just finished playing with fresh samples and re-enqueue it.
    //AQPlayer* that = (AQPlayer*) inUserData;
    SInt16* samples = (SInt16*) inCompleteAQBuffer->mAudioData;
    int sampleCount = inCompleteAQBuffer->mAudioDataBytesCapacity / sizeof(SInt16);
    for (int i = 0; i < sampleCount; ++i)
        samples[i] = (SInt16) rand();    // random noise
    inCompleteAQBuffer->mAudioDataByteSize = sampleCount * sizeof(SInt16);
    AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);
}
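
If your samples start out as floats in [-1, 1], as described in the question, you'd convert each one to a signed 16-bit value before copying it into the buffer. A rough sketch (the helper name is mine, not part of SpeakHere or Core Audio):

// Sketch: convert floats in [-1, 1] into a queue buffer's 16-bit samples.
static void copyFloatsToBuffer(const float *source, AudioQueueBufferRef buffer, int count) {
    SInt16 *dest = (SInt16 *) buffer->mAudioData;
    for (int i = 0; i < count; ++i) {
        float s = source[i];
        if (s > 1.0f) s = 1.0f;           // clamp to avoid overflow
        if (s < -1.0f) s = -1.0f;
        dest[i] = (SInt16)(s * 32767.0f);
    }
    buffer->mAudioDataByteSize = count * sizeof(SInt16);
}

When you're finished, AudioQueueStop() and AudioQueueDispose() will tear the queue down.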


Answer 2:

It sounds like you're coming from a platform that had a simple built-in tone generator. The iPhone doesn't have anything like that. It's easier to play simple sounds from sound files. AudioUnit is for actually processing and generating real music.

So, yes, to play a sound simply, you do need an audio file.
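
If you go that route, one option is to wrap your float samples in a minimal WAV file yourself and hand it to AVAudioPlayer. The sketch below is only an illustration of that idea; writeWavFile and writeLE are made-up helpers, not Apple APIs, and samples/sampleCount stand in for your generated noise:

#include <stdio.h>
#include <stdint.h>
#import <AVFoundation/AVFoundation.h>

// Write a little-endian value byte by byte, so host endianness doesn't matter.
static void writeLE(FILE *f, uint32_t value, int bytes) {
    for (int i = 0; i < bytes; ++i)
        fputc((value >> (8 * i)) & 0xFF, f);
}

// Convert floats in [-1, 1] to 16-bit mono PCM and wrap them in a minimal WAV file.
static int writeWavFile(const char *path, const float *samples, uint32_t count, uint32_t sampleRate) {
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    uint32_t dataSize = count * 2;            // 16-bit mono

    fwrite("RIFF", 1, 4, f);
    writeLE(f, 36 + dataSize, 4);
    fwrite("WAVE", 1, 4, f);

    fwrite("fmt ", 1, 4, f);
    writeLE(f, 16, 4);                        // fmt chunk size
    writeLE(f, 1, 2);                         // PCM
    writeLE(f, 1, 2);                         // mono
    writeLE(f, sampleRate, 4);
    writeLE(f, sampleRate * 2, 4);            // byte rate
    writeLE(f, 2, 2);                         // block align
    writeLE(f, 16, 2);                        // bits per sample

    fwrite("data", 1, 4, f);
    writeLE(f, dataSize, 4);
    for (uint32_t i = 0; i < count; ++i) {
        float s = samples[i];
        if (s > 1.0f) s = 1.0f;
        if (s < -1.0f) s = -1.0f;
        writeLE(f, (uint16_t)(int16_t)(s * 32767.0f), 2);
    }
    fclose(f);
    return 0;
}

// Then play the file with AVAudioPlayer:
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"noise.wav"];
writeWavFile([path fileSystemRepresentation], samples, sampleCount, 44100);

NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc]
    initWithContentsOfURL:[NSURL fileURLWithPath:path] error:&error];
[player play];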