I'm trying to reverse audio in iOS with AVAsset and AVAssetWriter. The following code works, but the output file is shorter than the input. For example, the input file has a duration of 1:59, but the output is 1:50 with the same audio content.
- (void)reverse:(AVAsset *)asset
{
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:nil];
    AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

    NSMutableDictionary *audioReadSettings = [NSMutableDictionary dictionary];
    [audioReadSettings setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM]
                         forKey:AVFormatIDKey];

    AVAssetReaderTrackOutput *readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack
                                                                                        outputSettings:audioReadSettings];
    [reader addOutput:readerOutput];
    [reader startReading];

    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                    [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                                    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                    [NSNumber numberWithInt:128000], AVEncoderBitRateKey,
                                    [NSData data], AVChannelLayoutKey,
                                    nil];

    AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
                                                                     outputSettings:outputSettings];

    NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"out.m4a"];
    NSURL *exportURL = [NSURL fileURLWithPath:exportPath];

    NSError *writerError = nil;
    AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:exportURL
                                                      fileType:AVFileTypeAppleM4A
                                                         error:&writerError];
    [writerInput setExpectsMediaDataInRealTime:NO];
    [writer addInput:writerInput];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];
    NSMutableArray *samples = [[NSMutableArray alloc] init];
    while (sample != NULL) {
        sample = [readerOutput copyNextSampleBuffer];
        if (sample == NULL)
            continue;
        [samples addObject:(__bridge id)(sample)];
        CFRelease(sample);
    }

    NSArray *reversedSamples = [[samples reverseObjectEnumerator] allObjects];
    for (id reversedSample in reversedSamples) {
        if (writerInput.readyForMoreMediaData) {
            [writerInput appendSampleBuffer:(__bridge CMSampleBufferRef)(reversedSample)];
        }
        else {
            [NSThread sleepForTimeInterval:0.05];
        }
    }

    [writerInput markAsFinished];

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
    dispatch_async(queue, ^{
        [writer finishWriting];
    });
}
UPDATE:
If I write the samples directly in the first while loop, everything is OK (even with the writerInput.readyForMoreMediaData check). In that case the result file has exactly the same duration as the original. But if I write the same samples from the reversed NSArray, the result is shorter.
It is not sufficient to write the audio samples in the reverse order. The sample data needs to be reversed itself, and its timing information needs to be properly set.
In Swift, we create an extension to AVAsset.
The samples must be processed as decompressed samples. To that end create audio reader settings with kAudioFormatLinearPCM:
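A sketch of such settings; only kAudioFormatLinearPCM comes from the text above, the remaining PCM keys are illustrative assumptions you should match to your source:

```swift
import AVFoundation

// Decompressed-audio reader settings; the specific PCM parameters
// beyond AVFormatIDKey are assumptions.
let audioReaderSettings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatLinearPCM),
    AVLinearPCMBitDepthKey: 16,
    AVLinearPCMIsBigEndianKey: false,
    AVLinearPCMIsFloatKey: false,
    AVLinearPCMIsNonInterleaved: false
]
```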
Use our AVAsset extension method audioReader: to create an audioReader (AVAssetReader) and an audioReaderOutput (AVAssetReaderTrackOutput) for reading the audio samples.
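The original extension's exact signature isn't shown here, so the following is a hypothetical sketch of such a helper:

```swift
import AVFoundation

extension AVAsset {
    // Hypothetical helper matching the description above; names are illustrative.
    // Returns a reader and a track output for the first audio track.
    func audioReader(outputSettings: [String: Any]?)
        -> (audioReader: AVAssetReader?, audioReaderOutput: AVAssetReaderTrackOutput?)
    {
        guard let audioTrack = self.tracks(withMediaType: .audio).first,
              let audioReader = try? AVAssetReader(asset: self) else {
            return (nil, nil)
        }
        let audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack,
                                                         outputSettings: outputSettings)
        audioReader.add(audioReaderOutput)
        return (audioReader, audioReaderOutput)
    }
}
```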
We need to keep track of the audio sample and the new timing information:
Now start reading samples. For each audio sample, obtain its timing information and produce new timing information that is relative to the end of the audio track (because we will be writing it back in reverse order). In other words, we adjust the presentation times of the samples.
So to “process sample” we use CMSampleBufferGetSampleTimingInfoArray to get the timingInfo (CMSampleTimingInfo):
Get the presentation time and duration:
Calculate the end time for the sample:
And now calculate the new presentation time relative to the end of the track:
And use it to set the timingInfo:
Finally, save the audio sample buffer and its timing info; we need them later when we create the reversed sample:
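Putting these steps together, a sketch of the reading loop. It assumes an audioReader, audioReaderOutput, and audioTrack set up as described above; variable names are illustrative:

```swift
import AVFoundation

var samples: [CMSampleBuffer] = []
var timingInfos: [CMSampleTimingInfo] = []

// Total track duration, used to mirror each sample's presentation time.
let trackDuration = audioTrack.timeRange.duration

audioReader.startReading()
while let sample = audioReaderOutput.copyNextSampleBuffer() {
    // "Process sample": fetch the current timing info.
    var timingInfo = CMSampleTimingInfo()
    CMSampleBufferGetSampleTimingInfoArray(sample, entryCount: 1,
                                           arrayToFill: &timingInfo,
                                           entriesNeededOut: nil)

    // Presentation time and duration of this sample.
    let presentationTime = timingInfo.presentationTimeStamp
    let duration = CMSampleBufferGetDuration(sample)

    // End time of the sample, then the new presentation time
    // measured back from the end of the track.
    let endTime = CMTimeAdd(presentationTime, duration)
    let newPresentationTime = CMTimeSubtract(trackDuration, endTime)

    timingInfo.presentationTimeStamp = newPresentationTime

    // Save the buffer and its adjusted timing for the writing pass.
    samples.append(sample)
    timingInfos.append(timingInfo)
}
```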
We need an AVAssetWriter:
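A minimal sketch of creating it (the output path is hypothetical; adjust for your app):

```swift
import AVFoundation

// Hypothetical destination; remove any stale file first, since
// AVAssetWriter will not overwrite an existing one.
let writeURL = URL(fileURLWithPath: NSTemporaryDirectory())
    .appendingPathComponent("reversed.m4a")
try? FileManager.default.removeItem(at: writeURL)

guard let assetWriter = try? AVAssetWriter(outputURL: writeURL, fileType: .m4a) else {
    return // inside your processing function
}
```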
When writing the samples back in reverse order with the assetWriter, they also need to be compressed, so we need settings for that. We also need a ‘source format hint’, which we can acquire from an uncompressed sample buffer:
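A sketch of the compression settings plus the format hint, taken from one of the uncompressed buffers saved during reading (the exact setting values are assumptions):

```swift
import AVFoundation

// AAC settings for the writer input; the specific values are illustrative.
let audioCompressionSettings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey: 1,
    AVSampleRateKey: 44100,
    AVEncoderBitRateKey: 128000
]

// 'Source format hint' from an uncompressed sample buffer read earlier.
let sourceFormatHint = CMSampleBufferGetFormatDescription(samples[0])
```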
Now we can create the AVAssetWriterInput, add it to the writer and start writing:
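A sketch of that setup, continuing with the names used above:

```swift
import AVFoundation

let audioWriterInput = AVAssetWriterInput(mediaType: .audio,
                                          outputSettings: audioCompressionSettings,
                                          sourceFormatHint: sourceFormatHint)
audioWriterInput.expectsMediaDataInRealTime = false

assetWriter.add(audioWriterInput)
assetWriter.startWriting()
assetWriter.startSession(atSourceTime: CMTime.zero)
```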
Now iterate through the samples in reverse order, and for each one reverse the sample data itself.
We have an extension for CMSampleBuffer that does just that, called ‘reverse’.
In lieu of using requestMediaDataWhenReady we do this as follows:
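A sketch of that write loop: when the input is not ready we wait rather than move on, so no buffer gets dropped. The reverse(timingInfo:) call stands in for the CMSampleBuffer extension method described below:

```swift
import AVFoundation

let count = samples.count
var index = 0
while index < count {
    if audioWriterInput.isReadyForMoreMediaData {
        // Walk the saved buffers from last to first, pairing each with
        // the mirrored timing info saved for it during reading.
        let sampleIndex = count - 1 - index
        var timingInfo = timingInfos[sampleIndex]
        if let reversedSample = samples[sampleIndex].reverse(timingInfo: &timingInfo) {
            audioWriterInput.append(reversedSample)
        }
        index += 1
    } else {
        // Not ready: wait instead of skipping, so nothing is lost.
        Thread.sleep(forTimeInterval: 0.01)
    }
}

audioWriterInput.markAsFinished()
assetWriter.finishWriting { }
```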
So the last thing to explain is how do you reverse the audio sample in the ‘reverse’ method?
We create an extension to CMSampleBuffer that takes a sample buffer and returns the properly timed, reversed sample buffer:
The data that has to be reversed needs to be obtained using the method CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer.
The CMSampleBuffer header file describes this method as follows:
“Creates an AudioBufferList containing the data from the CMSampleBuffer, and a CMBlockBuffer which references (and manages the lifetime of) the data in that AudioBufferList.”
Call it as follows, where ‘self’ refers to the CMSampleBuffer we are reversing since this is an extension:
Now you can access the raw data as:
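Inside the hypothetical reverse method, where ‘self’ is the CMSampleBuffer, the call and the data access might look like this:

```swift
import AVFoundation

var blockBuffer: CMBlockBuffer? = nil
let audioBufferList = AudioBufferList.allocate(maximumBuffers: 1)

// Creates an AudioBufferList over this buffer's data, plus a CMBlockBuffer
// that manages the lifetime of that data.
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
    self,
    bufferListSizeNeededOut: nil,
    bufferListOut: audioBufferList.unsafeMutablePointer,
    bufferListSize: AudioBufferList.sizeInBytes(maximumBuffers: 1),
    blockBufferAllocator: nil,
    blockBufferMemoryAllocator: nil,
    flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
    blockBufferOut: &blockBuffer)

// The raw audio bytes:
let data = audioBufferList.unsafePointer.pointee.mBuffers.mData
```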
To reverse the data we need to access it as an array of ‘samples’, called sampleArray; in Swift that is done as follows:
Now reverse the array sampleArray:
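A sketch of both steps, assuming 16-bit PCM (an assumption; match this to your reader settings) and using the raw pointer obtained from CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer:

```swift
import AVFoundation

// View the raw bytes as Int16 samples and copy them into an array.
let sampleCount = CMSampleBufferGetNumSamples(self)
var sampleArray = UnsafeMutableBufferPointer<Int16>(
    start: data?.assumingMemoryBound(to: Int16.self),
    count: sampleCount).map { $0 }

// Reverse the samples themselves. (For interleaved stereo you would
// reverse frame-by-frame rather than sample-by-sample, or the
// channels get swapped.)
sampleArray.reverse()
```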
Using the reversed samples we need to create a new CMSampleBuffer that contains the reversed samples and the new timing info which we generated previously while we read the audio samples from the source file.
Now we replace the data in the CMBlockBuffer we previously obtained with CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer:
First reassign ‘samples’ using the reversed array:
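One way to sketch this step is to copy the reversed bytes over the block buffer's existing data with CMBlockBufferReplaceDataBytes:

```swift
import AVFoundation

// Overwrite the block buffer's bytes with the reversed samples.
sampleArray.withUnsafeBytes { bytes in
    guard let base = bytes.baseAddress, let blockBuffer = blockBuffer else { return }
    CMBlockBufferReplaceDataBytes(with: base,
                                  blockBuffer: blockBuffer,
                                  offsetIntoDestination: 0,
                                  dataLength: bytes.count)
}
```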
Finally create the new sample buffer using CMSampleBufferCreate. This function needs two arguments we can get from the original sample buffer, namely the formatDescription and numberOfSamples:
Now create the new sample buffer with the reversed blockBuffer and most notably the new timing information that was passed as an argument to the function ‘reverse’ that we are defining:
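A sketch of this final step, where timingInfo is the parameter passed into reverse:

```swift
import AVFoundation

// Both arguments come from the original sample buffer.
let formatDescription = CMSampleBufferGetFormatDescription(self)
let numberOfSamples = CMSampleBufferGetNumSamples(self)

var newBuffer: CMSampleBuffer? = nil

// Build the reversed sample buffer from the modified block buffer and
// the mirrored timing info computed during the reading pass.
CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                     dataBuffer: blockBuffer,
                     dataReady: true,
                     makeDataReadyCallback: nil,
                     refcon: nil,
                     formatDescription: formatDescription,
                     sampleCount: numberOfSamples,
                     sampleTimingEntryCount: 1,
                     sampleTimingArray: &timingInfo,
                     sampleSizeEntryCount: 0,
                     sampleSizeArray: nil,
                     sampleBufferOut: &newBuffer)

return newBuffer
```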
And that’s all there is to it!
As a final note the Core Audio and AVFoundation headers provide a lot of useful information, such as CoreAudioTypes.h, CMSampleBuffer.h, and many more.
A complete example that reverses video and audio into the same output asset using Swift 5, with the audio processed per the above recommendations:
Happy coding!
Print out the size of each buffer in number of samples (in the "reading" readerOutput while loop), then repeat in the "writing" writerInput for-loop. That way you can see all the buffer sizes and check whether they add up.
For example, are you missing or skipping a buffer? When

if (writerInput.readyForMoreMediaData)

is false, you "sleep", but then proceed to the next reversedSample in reversedSamples, so that buffer effectively gets dropped from the writerInput.

UPDATE (based on comments): I found two problems in the code.

First, the channel count in the output settings does not match the source; it should be:

[NSNumber numberWithInt:1], AVNumberOfChannelsKey

Look at the info on the output and input files.

Second, print the number of samples in each buffer:

size_t sampleSize = CMSampleBufferGetNumSamples(sample);
The output looks like:
This shows that you're reversing the order of each buffer of 8192 samples, but within each buffer the audio is still "facing forward". We can see this in a screen shot I took of a correctly reversed (sample-by-sample) result versus your buffer-level reversal:
I think your current scheme can work if you also reverse the samples within each 8192-sample buffer. I personally would not recommend using NSArray enumerators for signal processing, but it can work if you operate at the sample level.