Setting the scene
I am working on a video processing app that runs from the command line to read in, process and then export video. I'm working with 4 tracks:
- Lots of clips that I append into a single track to make one video. Let's call this the ugcVideoComposition.
- Clips with alpha, which are positioned on a second track and, using layer instructions, composited on export so they play back over the top of the ugcVideoComposition.
- A music audio track.
- An audio track for the ugcVideoComposition containing the audio from the clips appended into the single track.
I have this all working, and can composite and export it correctly using AVAssetExportSession.
The problem
What I now want to do is apply filters and gradients to the ugcVideoComposition.
My research so far suggests that this is done by using an AVAssetReader and AVAssetWriter, extracting a CIImage, manipulating it with filters and then writing that out.
I haven't yet got all the functionality I had above working, but I have managed to get the ugcVideoComposition read in and written back out to disk using the AssetReader and AssetWriter.
    BOOL done = NO;
    while (!done)
    {
        while ([assetWriterVideoInput isReadyForMoreMediaData] && !done)
        {
            CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
            if (sampleBuffer)
            {
                // Let's try to create an image....
                CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer];

                // < Apply filters and transformations to the CIImage here >
                // < HOW TO GET THE TRANSFORMED IMAGE BACK INTO THE SAMPLE BUFFER??? >

                // Write things back out.
                [assetWriterVideoInput appendSampleBuffer:sampleBuffer];

                CFRelease(sampleBuffer);
                sampleBuffer = NULL;
            }
            else
            {
                // Find out why we couldn't get another sample buffer....
                if (assetReader.status == AVAssetReaderStatusFailed)
                {
                    NSError *failureError = assetReader.error;
                    // Do something with this error.
                }
                else
                {
                    // The reader has delivered all of its samples.
                    done = YES;
                    [assetWriterVideoInput markAsFinished];
                    [assetWriter finishWriting];
                }
            }
        }
    }
As you can see, I can even get the CIImage from the CMSampleBuffer, and I'm confident I can work out how to manipulate the image and apply any effects I need. What I don't know how to do is put the resulting manipulated image BACK into the sample buffer so I can write it out again.
The question
Given a CIImage, how can I put that into a sampleBuffer to append it with the assetWriter?
Any help appreciated - the AVFoundation documentation is terrible: it either misses crucial points (like how to put an image back after you've extracted it) or is focused on rendering images to the iPhone screen, which is not what I want to do.
Much appreciated and thanks!
Try using:
SDAVAssetExportSession
SDAVAssetExportSession on GitHub
and then implement a delegate to process the pixels.
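A minimal sketch of that delegate approach, assuming SDAVAssetExportSession's per-frame hook as described in that project's README (verify the exact signature against the version you pull in; self.ciContext is assumed to be configured elsewhere, and CISepiaTone is just a stand-in filter):

```objectivec
- (void)exportSession:(SDAVAssetExportSession *)exportSession
          renderFrame:(CVPixelBufferRef)pixelBuffer
 withPresentationTime:(CMTime)presentationTime
             toBuffer:(CVPixelBufferRef)renderBuffer
{
    // Wrap the decoded frame, filter it, and render the result into the
    // buffer the session will encode.
    CIImage *frame = [CIImage imageWithCVImageBuffer:pixelBuffer];
    CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [sepia setValue:frame forKey:kCIInputImageKey];
    [sepia setValue:@0.8 forKey:kCIInputIntensityKey];
    [self.ciContext render:sepia.outputImage toCVPixelBuffer:renderBuffer];
}
```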
I eventually found a solution by digging through a lot of half-complete samples and Apple's poor AVFoundation documentation.
The biggest confusion is that while AVFoundation is "reasonably" consistent between iOS and OSX at a high level, the lower-level items behave differently, have different methods and use different techniques. This solution is for OSX.
Setting up your AssetWriter
The first thing is to make sure that when you set up the asset writer, you add a pixel buffer adaptor so you can append CVPixelBuffers. These buffers will contain the modified frames.
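A sketch of that setup, assuming an existing assetWriter and a videoSize variable (names are placeholders; the pixel format and codec settings are choices, not requirements):

```objectivec
NSDictionary *videoSettings = @{
    AVVideoCodecKey  : AVVideoCodecH264,
    AVVideoWidthKey  : @(videoSize.width),
    AVVideoHeightKey : @(videoSize.height)
};
AVAssetWriterInput *writerInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];

// The adaptor is what lets us append CVPixelBuffers (the filtered frames)
// instead of the reader's untouched CMSampleBuffers.
NSDictionary *pixelBufferAttributes = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32ARGB),
    (id)kCVPixelBufferWidthKey           : @(videoSize.width),
    (id)kCVPixelBufferHeightKey          : @(videoSize.height)
};
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                   sourcePixelBufferAttributes:pixelBufferAttributes];

[assetWriter addInput:writerInput];
```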
Reading and Writing
The challenge here is that I couldn't find directly comparable methods between iOS and OSX - iOS has the ability to render a context directly to a PixelBuffer, whereas OSX does NOT support that option. The context is also configured differently between iOS and OSX.
Note that you should add QuartzCore.framework to your Xcode project as well.
Creating the context on OSX.
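On OSX, Core Image renders through a CGContext here (on iOS you could simply use [CIContext contextWithOptions:nil] and render straight to a pixel buffer). A sketch, with width and height as placeholders for your frame dimensions:

```objectivec
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef cgContext = CGBitmapContextCreate(NULL,
                                               width, height,
                                               8,            // bits per component
                                               0,            // let CG pick bytes per row
                                               colorSpace,
                                               (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
CIContext *ciContext = [CIContext contextWithCGContext:cgContext options:nil];
CGColorSpaceRelease(colorSpace);
```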
Now you want to loop through, reading off the AssetReader and writing to the AssetWriter... but note that you are writing via the adaptor created previously, not with the SampleBuffer.
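That loop can be sketched as follows, reusing the variable names from earlier; createPixelBufferFromCIImage: is a hypothetical helper covered in the next section:

```objectivec
while ([writerInput isReadyForMoreMediaData] && !done)
{
    CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
    if (sampleBuffer)
    {
        // Keep the original timestamp so the output stays in sync.
        CMTime presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer];

        // < Apply filters and transformations to inputImage here >
        CIImage *outputImage = inputImage;

        // Write through the adaptor, NOT the sample buffer.
        CVPixelBufferRef pixelBuffer = [self createPixelBufferFromCIImage:outputImage];
        [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
        CVPixelBufferRelease(pixelBuffer);

        CFRelease(sampleBuffer);
    }
    else
    {
        // Check assetReader.status for failure, otherwise finish up as before.
        done = YES;
    }
}
```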
Creating the PixelBuffer
There MUST be an easier way; however, for now, this works and is the only way I found to get directly from a CIImage to a PixelBuffer (via a CGImage) on OSX. The following code is cut and pasted from AVFoundation + AssetWriter: Generate Movie With Images and Audio.
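A sketch of that conversion, adapted from the widely copied pixelBufferFromCGImage: sample; the helper name is my own, and ciContext is the context created earlier:

```objectivec
- (CVPixelBufferRef)createPixelBufferFromCIImage:(CIImage *)ciImage
{
    // Render the CIImage to a CGImage first -- OSX can't go straight
    // from Core Image to a pixel buffer.
    CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    NSDictionary *options = @{
        (id)kCVPixelBufferCGImageCompatibilityKey         : @YES,
        (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES
    };
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32ARGB,
                        (__bridge CFDictionaryRef)options,
                        &pixelBuffer);

    // Draw the CGImage into the pixel buffer's backing memory.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress,
                                                 width, height,
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CGImageRelease(cgImage);

    return pixelBuffer;  // caller releases
}
```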