How to play audio backwards?

Posted 2019-09-21 08:49

It was suggested to read the audio data in from the end, create a copy written from start to finish, and then simply play back that reversed audio data.

Is there an existing iOS example of how to do this?

I found the MixerHost sample project, which at some point uses an AudioUnitSampleType to hold the audio data that has been read from a file, and assigns it to a buffer.

This is defined as:

typedef SInt32 AudioUnitSampleType;
#define kAudioUnitSampleFractionBits 24

And according to Apple:

The canonical audio sample type for audio units and other audio processing in iPhone OS is linear PCM with 8.24-bit fixed-point, non-interleaved samples.

So, in other words, it holds non-interleaved linear PCM audio data.

But I can't figure out where this data is being read in, and where it is stored. Here is the code that loads the audio data and buffers it:

- (void) readAudioFilesIntoMemory {

    for (int audioFile = 0; audioFile < NUM_FILES; ++audioFile)  {

        NSLog (@"readAudioFilesIntoMemory - file %i", audioFile);

        // Instantiate an extended audio file object.
        ExtAudioFileRef audioFileObject = 0;

        // Open an audio file and associate it with the extended audio file object.
        OSStatus result = ExtAudioFileOpenURL (sourceURLArray[audioFile], &audioFileObject);

        if (noErr != result || NULL == audioFileObject) {[self printErrorMessage: @"ExtAudioFileOpenURL" withStatus: result]; return;}

        // Get the audio file's length in frames.
        UInt64 totalFramesInFile = 0;
        UInt32 frameLengthPropertySize = sizeof (totalFramesInFile);

        result =    ExtAudioFileGetProperty (
                        audioFileObject,
                        kExtAudioFileProperty_FileLengthFrames,
                        &frameLengthPropertySize,
                        &totalFramesInFile
                    );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileGetProperty (audio file length in frames)" withStatus: result]; return;}

        // Assign the frame count to the soundStructArray instance variable
        soundStructArray[audioFile].frameCount = totalFramesInFile;

        // Get the audio file's number of channels.
        AudioStreamBasicDescription fileAudioFormat = {0};
        UInt32 formatPropertySize = sizeof (fileAudioFormat);

        result =    ExtAudioFileGetProperty (
                        audioFileObject,
                        kExtAudioFileProperty_FileDataFormat,
                        &formatPropertySize,
                        &fileAudioFormat
                    );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileGetProperty (file audio format)" withStatus: result]; return;}

        UInt32 channelCount = fileAudioFormat.mChannelsPerFrame;

        // Allocate memory in the soundStructArray instance variable to hold the left channel, 
        //    or mono, audio data
        soundStructArray[audioFile].audioDataLeft =
            (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

        AudioStreamBasicDescription importFormat = {0};
        if (2 == channelCount) {

            soundStructArray[audioFile].isStereo = YES;
            // Sound is stereo, so allocate memory in the soundStructArray instance variable to  
            //    hold the right channel audio data
            soundStructArray[audioFile].audioDataRight =
                (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
            importFormat = stereoStreamFormat;

        } else if (1 == channelCount) {

            soundStructArray[audioFile].isStereo = NO;
            importFormat = monoStreamFormat;

        } else {

            NSLog (@"*** WARNING: File format not supported - wrong number of channels");
            ExtAudioFileDispose (audioFileObject);
            return;
        }

        // Assign the appropriate mixer input bus stream data format to the extended audio 
        //        file object. This is the format used for the audio data placed into the audio 
        //        buffer in the SoundStruct data structure, which is in turn used in the 
        //        inputRenderCallback callback function.

        result =    ExtAudioFileSetProperty (
                        audioFileObject,
                        kExtAudioFileProperty_ClientDataFormat,
                        sizeof (importFormat),
                        &importFormat
                    );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileSetProperty (client data format)" withStatus: result]; return;}

        // Set up an AudioBufferList struct, which has two roles:
        //
        //        1. It gives the ExtAudioFileRead function the configuration it 
        //            needs to correctly provide the data to the buffer.
        //
        //        2. It points to the soundStructArray[audioFile].audioDataLeft buffer, so 
        //            that audio data obtained from disk using the ExtAudioFileRead function
        //            goes to that buffer

        // Allocate memory for the buffer list struct according to the number of 
        //    channels it represents.
        AudioBufferList *bufferList;

        bufferList = (AudioBufferList *) malloc (
            sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1)
        );

        if (NULL == bufferList) {NSLog (@"*** malloc failure for allocating bufferList memory"); return;}

        // initialize the mNumberBuffers member
        bufferList->mNumberBuffers = channelCount;

        // initialize the mBuffers member to 0
        AudioBuffer emptyBuffer = {0};
        size_t arrayIndex;
        for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
            bufferList->mBuffers[arrayIndex] = emptyBuffer;
        }

        // set up the AudioBuffer structs in the buffer list
        bufferList->mBuffers[0].mNumberChannels  = 1;
        bufferList->mBuffers[0].mDataByteSize    = totalFramesInFile * sizeof (AudioUnitSampleType);
        bufferList->mBuffers[0].mData            = soundStructArray[audioFile].audioDataLeft;

        if (2 == channelCount) {
            bufferList->mBuffers[1].mNumberChannels  = 1;
            bufferList->mBuffers[1].mDataByteSize    = totalFramesInFile * sizeof (AudioUnitSampleType);
            bufferList->mBuffers[1].mData            = soundStructArray[audioFile].audioDataRight;
        }

        // Perform a synchronous, sequential read of the audio data out of the file and
        //    into the soundStructArray[audioFile].audioDataLeft and (if stereo) .audioDataRight members.
        UInt32 numberOfPacketsToRead = (UInt32) totalFramesInFile;

        result = ExtAudioFileRead (
                     audioFileObject,
                     &numberOfPacketsToRead,
                     bufferList
                 );

        free (bufferList);

        if (noErr != result) {

            [self printErrorMessage: @"ExtAudioFileRead failure - " withStatus: result];

            // If reading from the file failed, then free the memory for the sound buffer.
            free (soundStructArray[audioFile].audioDataLeft);
            soundStructArray[audioFile].audioDataLeft = 0;

            if (2 == channelCount) {
                free (soundStructArray[audioFile].audioDataRight);
                soundStructArray[audioFile].audioDataRight = 0;
            }

            ExtAudioFileDispose (audioFileObject);            
            return;
        }

        NSLog (@"Finished reading file %i into memory", audioFile);

        // Set the sample index to zero, so that playback starts at the 
        //    beginning of the sound.
        soundStructArray[audioFile].sampleNumber = 0;

        // Dispose of the extended audio file object, which also
        //    closes the associated file.
        ExtAudioFileDispose (audioFileObject);
    }
}

Which part holds the array of audio samples that has to be reversed? Is it the AudioUnitSampleType in:

bufferList->mBuffers[0].mData = soundStructArray[audioFile].audioDataLeft;

Note: audioDataLeft is defined as an AudioUnitSampleType, which is an SInt32, but not an array.

I found a clue on the Core Audio mailing list:

Well, nothing to do with iPh*n* as far as I know (unless some audio API has been omitted — I'm not a member of that program). AFAIR, AudioFile.h and ExtendedAudioFile.h should provide what you need to read or write a caf and access its streams/channels. Basically, you want to read each channel/stream backwards, so, if you don't need the properties of the audio file, it is pretty straightforward once you have a handle on that channel's data, assuming it is not in a compressed format. Considering the number of formats a caf can represent, this could take a few more lines of code than you're thinking. Once you have a handle on uncompressed data, it should be about as simple as reversing a string. Then you would of course replace the file's data with the reversed data, or you could just feed the audio output (or wherever you're sending the reversed signal) by reading whatever stream you have backwards.

This is what I have tried, but when I assign my reversed buffer to the mData of both channels, I hear nothing:

AudioUnitSampleType *leftData = soundStructArray[audioFile].audioDataLeft;
AudioUnitSampleType *reversedData = (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
UInt64 j = 0;
// The loop index must be signed: with a UInt64 i, the condition i > -1
// compares i against (UInt64)-1, so the loop body never executes.
for (SInt64 i = (totalFramesInFile - 1); i > -1; i--) {
    reversedData[j] = leftData[i];
    j++;
}

Answer 1:

Normally, when an ASBD is being used, the fields describe the complete layout of the sample data in the buffers that are represented by this description — where typically those buffers are represented by an AudioBuffer that is contained in an AudioBufferList.

However, when an ASBD has the kAudioFormatFlagIsNonInterleaved flag, the AudioBufferList has a different structure and semantics. In this case, the ASBD fields will describe the format of ONE of the AudioBuffers that are contained in the list, and each AudioBuffer in the list is determined to have a single (mono) channel of audio data. Then, the ASBD's mChannelsPerFrame will indicate the total number of AudioBuffers that are contained within the AudioBufferList — where each buffer contains one channel. This is used primarily with the AudioUnit (and AudioConverter) representations of this list — and it is not found in this structure's AudioHardware usage.



Answer 2:

I have a working sample app that records what the user says and plays it back backwards. I have used Core Audio to achieve this. Link to the app code.

/* Since the size of each sample is 16 bits (2 bytes) (mono channel),
   you can load one sample at a time by copying it from the end of the
   recording to the start of a different buffer, reading backwards.
   When you reach the start of the data, the data has been reversed
   and playback will be reversed. */

// set up output file
AudioFileID outputAudioFile;

AudioStreamBasicDescription myPCMFormat;
myPCMFormat.mSampleRate = 16000.00;
myPCMFormat.mFormatID = kAudioFormatLinearPCM ;
myPCMFormat.mFormatFlags =  kAudioFormatFlagsCanonical;
myPCMFormat.mChannelsPerFrame = 1;
myPCMFormat.mFramesPerPacket = 1;
myPCMFormat.mBitsPerChannel = 16;
myPCMFormat.mBytesPerPacket = 2;
myPCMFormat.mBytesPerFrame = 2;


AudioFileCreateWithURL((__bridge CFURLRef)self.flippedAudioUrl,
                       kAudioFileCAFType,
                       &myPCMFormat,
                       kAudioFileFlags_EraseFile,
                       &outputAudioFile);
// set up input file
AudioFileID inputAudioFile;
OSStatus theErr = noErr;
UInt64 fileDataSize = 0;

AudioStreamBasicDescription theFileFormat;
UInt32 thePropertySize = sizeof(theFileFormat);

theErr = AudioFileOpenURL((__bridge CFURLRef)self.recordedAudioUrl, kAudioFileReadPermission, 0, &inputAudioFile);

thePropertySize = sizeof(fileDataSize);
theErr = AudioFileGetProperty(inputAudioFile, kAudioFilePropertyAudioDataByteCount, &thePropertySize, &fileDataSize);

UInt32 dataSize = fileDataSize;
void* theData = malloc(dataSize);

//Read data into buffer
UInt32 readPoint  = dataSize;
UInt32 writePoint = 0;
while( readPoint > 0 )
{
    UInt32 bytesToRead = 2;

    // Step back one 16-bit sample before reading, so the first read
    // fetches the last sample rather than starting one byte past EOF.
    readPoint -= 2;

    AudioFileReadBytes( inputAudioFile, false, readPoint, &bytesToRead, theData );
    AudioFileWriteBytes( outputAudioFile, false, writePoint, &bytesToRead, theData );

    writePoint += 2;
}

free(theData);
AudioFileClose(inputAudioFile);
AudioFileClose(outputAudioFile);

Hope this helps.



Answer 3:

You don't have to allocate a separate buffer to hold the reversed data, which can take a fair amount of CPU time, depending on the length of the sound. To play a sound backwards, just start the sampleNumber counter at totalFramesInFile - 1.

You can modify MixerHost in this way to achieve the desired effect.

Replace:

soundStructArray[audioFile].sampleNumber = 0;

with:

soundStructArray[audioFile].sampleNumber = totalFramesInFile - 1;

Make sampleNumber an SInt32 rather than a UInt32.

Replace the loop in which you write out the samples with this one:

for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {
    outSamplesChannelLeft[frameNumber] = dataInLeft[sampleNumber];
    if (isStereo) outSamplesChannelRight[frameNumber] = dataInRight[sampleNumber];
    if (--sampleNumber < 0) sampleNumber = frameTotalForSound - 1;
}

This effectively makes it play backwards. Hmm. It had been a while since I'd heard the MixerHost music. I must admit I find it quite pleasing.



Source: How to play audio backwards?