It has been suggested to me to read the audio data from the end to the beginning, create a copy written from start to end, and then simply play back that reversed audio data.
Are there any existing examples for iOS of how this is done?
I found the MixerHost sample project, which at some point uses an AudioUnitSampleType to hold the audio data that has been read from file and assigns it to a buffer.
This is defined as:
typedef SInt32 AudioUnitSampleType;
#define kAudioUnitSampleFractionBits 24
And according to Apple:
The canonical audio sample type for audio units and other audio processing in iPhone OS is noninterleaved linear PCM with 8.24-bit fixed-point samples.
So in other words it holds non-interleaved linear PCM audio data.
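To make that concrete, here is a minimal sketch of what the 8.24 fixed-point representation means, assuming float samples in the range [-1.0, 1.0] (the two helper functions are only for illustration and are not part of MixerHost):

// Illustration only: converting between a float sample in [-1.0, 1.0]
// and the 8.24 fixed-point AudioUnitSampleType defined above.
static AudioUnitSampleType floatToFixed824 (float sample) {
    return (AudioUnitSampleType) (sample * (1 << kAudioUnitSampleFractionBits));
}

static float fixed824ToFloat (AudioUnitSampleType sample) {
    return (float) sample / (1 << kAudioUnitSampleFractionBits);
}

Note that reversing the audio should not require any such conversion: the SInt32 sample values can simply be copied in reverse order.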
But I can't figure out where this data is being read in, and where it is stored. Here is the code that loads the audio data and buffers it:
- (void) readAudioFilesIntoMemory {

    for (int audioFile = 0; audioFile < NUM_FILES; ++audioFile) {

        NSLog (@"readAudioFilesIntoMemory - file %i", audioFile);

        // Instantiate an extended audio file object.
        ExtAudioFileRef audioFileObject = 0;

        // Open an audio file and associate it with the extended audio file object.
        OSStatus result = ExtAudioFileOpenURL (sourceURLArray[audioFile], &audioFileObject);

        if (noErr != result || NULL == audioFileObject) {[self printErrorMessage: @"ExtAudioFileOpenURL" withStatus: result]; return;}

        // Get the audio file's length in frames.
        UInt64 totalFramesInFile = 0;
        UInt32 frameLengthPropertySize = sizeof (totalFramesInFile);

        result = ExtAudioFileGetProperty (
                     audioFileObject,
                     kExtAudioFileProperty_FileLengthFrames,
                     &frameLengthPropertySize,
                     &totalFramesInFile
                 );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileGetProperty (audio file length in frames)" withStatus: result]; return;}

        // Assign the frame count to the soundStructArray instance variable
        soundStructArray[audioFile].frameCount = totalFramesInFile;

        // Get the audio file's number of channels.
        AudioStreamBasicDescription fileAudioFormat = {0};
        UInt32 formatPropertySize = sizeof (fileAudioFormat);

        result = ExtAudioFileGetProperty (
                     audioFileObject,
                     kExtAudioFileProperty_FileDataFormat,
                     &formatPropertySize,
                     &fileAudioFormat
                 );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileGetProperty (file audio format)" withStatus: result]; return;}

        UInt32 channelCount = fileAudioFormat.mChannelsPerFrame;

        // Allocate memory in the soundStructArray instance variable to hold the left channel,
        // or mono, audio data
        soundStructArray[audioFile].audioDataLeft =
            (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

        AudioStreamBasicDescription importFormat = {0};
        if (2 == channelCount) {

            soundStructArray[audioFile].isStereo = YES;
            // Sound is stereo, so allocate memory in the soundStructArray instance variable to
            // hold the right channel audio data
            soundStructArray[audioFile].audioDataRight =
                (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
            importFormat = stereoStreamFormat;

        } else if (1 == channelCount) {

            soundStructArray[audioFile].isStereo = NO;
            importFormat = monoStreamFormat;

        } else {

            NSLog (@"*** WARNING: File format not supported - wrong number of channels");
            ExtAudioFileDispose (audioFileObject);
            return;
        }

        // Assign the appropriate mixer input bus stream data format to the extended audio
        // file object. This is the format used for the audio data placed into the audio
        // buffer in the SoundStruct data structure, which is in turn used in the
        // inputRenderCallback callback function.
        result = ExtAudioFileSetProperty (
                     audioFileObject,
                     kExtAudioFileProperty_ClientDataFormat,
                     sizeof (importFormat),
                     &importFormat
                 );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileSetProperty (client data format)" withStatus: result]; return;}

        // Set up an AudioBufferList struct, which has two roles:
        //
        //   1. It gives the ExtAudioFileRead function the configuration it
        //      needs to correctly provide the data to the buffer.
        //
        //   2. It points to the soundStructArray[audioFile].audioDataLeft buffer, so
        //      that audio data obtained from disk using the ExtAudioFileRead function
        //      goes to that buffer

        // Allocate memory for the buffer list struct according to the number of
        // channels it represents.
        AudioBufferList *bufferList;

        bufferList = (AudioBufferList *) malloc (
                         sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1)
                     );

        if (NULL == bufferList) {NSLog (@"*** malloc failure for allocating bufferList memory"); return;}

        // initialize the mNumberBuffers member
        bufferList->mNumberBuffers = channelCount;

        // initialize the mBuffers member to 0
        AudioBuffer emptyBuffer = {0};
        size_t arrayIndex;
        for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
            bufferList->mBuffers[arrayIndex] = emptyBuffer;
        }

        // set up the AudioBuffer structs in the buffer list
        bufferList->mBuffers[0].mNumberChannels  = 1;
        bufferList->mBuffers[0].mDataByteSize    = totalFramesInFile * sizeof (AudioUnitSampleType);
        bufferList->mBuffers[0].mData            = soundStructArray[audioFile].audioDataLeft;

        if (2 == channelCount) {
            bufferList->mBuffers[1].mNumberChannels  = 1;
            bufferList->mBuffers[1].mDataByteSize    = totalFramesInFile * sizeof (AudioUnitSampleType);
            bufferList->mBuffers[1].mData            = soundStructArray[audioFile].audioDataRight;
        }

        // Perform a synchronous, sequential read of the audio data out of the file and
        // into the soundStructArray[audioFile].audioDataLeft and (if stereo) .audioDataRight members.
        UInt32 numberOfPacketsToRead = (UInt32) totalFramesInFile;

        result = ExtAudioFileRead (
                     audioFileObject,
                     &numberOfPacketsToRead,
                     bufferList
                 );

        free (bufferList);

        if (noErr != result) {

            [self printErrorMessage: @"ExtAudioFileRead failure - " withStatus: result];

            // If reading from the file failed, then free the memory for the sound buffer.
            free (soundStructArray[audioFile].audioDataLeft);
            soundStructArray[audioFile].audioDataLeft = 0;

            if (2 == channelCount) {
                free (soundStructArray[audioFile].audioDataRight);
                soundStructArray[audioFile].audioDataRight = 0;
            }

            ExtAudioFileDispose (audioFileObject);
            return;
        }

        NSLog (@"Finished reading file %i into memory", audioFile);

        // Set the sample index to zero, so that playback starts at the
        // beginning of the sound.
        soundStructArray[audioFile].sampleNumber = 0;

        // Dispose of the extended audio file object, which also
        // closes the associated file.
        ExtAudioFileDispose (audioFileObject);
    }
}
Which part contains the array of audio samples that has to be reversed? Is it the AudioUnitSampleType here?
bufferList->mBuffers[0].mData = soundStructArray[audioFile].audioDataLeft;
Note: audioDataLeft is defined as an AudioUnitSampleType, which is an SInt32 but not an array.
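For reference, reconstructing the soundStruct from the members used in the code above (the exact declaration in MixerHost may differ):

typedef struct {
    BOOL                 isStereo;        // YES if audioDataRight is also populated
    UInt64               frameCount;      // number of frames in each channel buffer
    UInt32               sampleNumber;    // current playback position
    AudioUnitSampleType *audioDataLeft;   // left (or mono) channel samples
    AudioUnitSampleType *audioDataRight;  // right channel samples, stereo only
} soundStruct;

So audioDataLeft is a pointer to the block that calloc allocates for totalFramesInFile samples, and indexing it (audioDataLeft[i]) reads the i-th sample of the left channel.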
I found a hint on the Core Audio mailing list:
Well, nothing to do with the iPh*n* as far as I know (unless some audio API has been omitted - I'm not a member of that program). AFAIR, AudioFile.h and ExtendedAudioFile.h should provide you with what you need to read or write a caf and access its streams/channels. Basically, you want to read each channel/stream backwards, so, if you don't need properties of the audio file, it is pretty simple once you have a handle on that channel's data, assuming it is not in a compressed format. Considering the number of formats a caf can represent, this could take a few more lines of code than you're thinking. Once you have a handle on the uncompressed data, it should be about as simple as reversing a string. Then of course you would replace the file's data with the reversed data, or you could just feed the audio output (or wherever you're sending the reversed signal) by reading whatever stream you have backwards.
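The last suggestion (feeding the audio output while reading the stream backwards) could look roughly like this inside a MixerHost-style inputRenderCallback. This is only a sketch (mono case, no error handling), assuming inRefCon points at the array of soundStruct values sketched above, as it does in MixerHost:

static OSStatus inputRenderCallback (
    void                        *inRefCon,
    AudioUnitRenderActionFlags  *ioActionFlags,
    const AudioTimeStamp        *inTimeStamp,
    UInt32                      inBusNumber,
    UInt32                      inNumberFrames,
    AudioBufferList             *ioData
) {
    soundStruct *soundStructPointerArray = (soundStruct *) inRefCon;
    UInt64 frameTotal   = soundStructPointerArray[inBusNumber].frameCount;
    UInt32 sampleNumber = soundStructPointerArray[inBusNumber].sampleNumber;

    AudioUnitSampleType *dataInLeft = soundStructPointerArray[inBusNumber].audioDataLeft;
    AudioUnitSampleType *outLeft    = (AudioUnitSampleType *) ioData->mBuffers[0].mData;

    for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {
        // Read the source buffer back to front instead of front to back.
        outLeft[frameNumber] = dataInLeft[(frameTotal - 1) - sampleNumber];
        sampleNumber++;
        if (sampleNumber >= frameTotal) sampleNumber = 0;   // wrap around and loop
    }

    soundStructPointerArray[inBusNumber].sampleNumber = sampleNumber;
    return noErr;
}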
This is what I have tried, but when I assign my reversed buffer to the mData of both channels, I hear nothing:
AudioUnitSampleType *leftData = soundStructArray[audioFile].audioDataLeft;
AudioUnitSampleType *reversedData = (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
UInt64 j = 0;
for (UInt64 i = (totalFramesInFile - 1); i > -1; i--) {
    reversedData[j] = leftData[i];
    j++;
}
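For comparison, here is a sketch of the same copy with the index counting upwards: with an unsigned UInt64 loop variable the condition i > -1 can never be true (the -1 is converted to a huge unsigned value), so the loop body above never runs and reversedData stays all zeros from calloc:

AudioUnitSampleType *leftData = soundStructArray[audioFile].audioDataLeft;
UInt64 frameCount = soundStructArray[audioFile].frameCount;
AudioUnitSampleType *reversedData =
    (AudioUnitSampleType *) calloc (frameCount, sizeof (AudioUnitSampleType));

for (UInt64 j = 0; j < frameCount; ++j) {
    reversedData[j] = leftData[(frameCount - 1) - j];   // read the source backwards
}

// The playback buffer would then have to point at the reversed copy, e.g.
// free (soundStructArray[audioFile].audioDataLeft);
// soundStructArray[audioFile].audioDataLeft = reversedData;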