Is it possible to apply a filter to an AVLayer and add it to a view with addSublayer? I want to change the colors and add some noise to the video from the camera using Swift, and I don't know how.
I thought it might be possible to add a filterLayer and a previewLayer like this:
self.view.layer.addSublayer(previewLayer)
self.view.layer.addSublayer(filterLayer)
and that this might produce a video with my custom filter, but I think it can be done more effectively using AVComposition.
So what I need to know:
- What is the simplest way to apply a filter to the camera's video output in real time?
- Is it possible to merge an AVCaptureVideoPreviewLayer and a CALayer?

Thanks for any suggestions.
If you're using an AVPlayerViewController, you can set the compositingFilter property of the view's layer.
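For example (a sketch; playerController here stands for whatever AVPlayerViewController instance you already have):

playerController.view.layer.compositingFilter = "multiplyBlendMode"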
See here for the compositing filter options you can use, e.g. "multiplyBlendMode", "screenBlendMode", etc.
Example of doing this in a UIViewController:
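This is a sketch rather than a verbatim working project; the class name, movie file, and chosen blend mode are all placeholders.

import AVFoundation
import AVKit
import UIKit

class PlayerViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // Load the movie from the app bundle.
        guard let path = Bundle.main.path(forResource: "my_movie", ofType: "mp4") else { return }
        let player = AVPlayer(url: URL(fileURLWithPath: path))

        // Embed an AVPlayerViewController as a child view controller.
        let playerController = AVPlayerViewController()
        playerController.player = player
        addChild(playerController)
        view.addSubview(playerController.view)
        playerController.view.frame = view.bounds
        playerController.didMove(toParent: self)

        // Blend the player's view with whatever is rendered behind it.
        playerController.view.layer.compositingFilter = "multiplyBlendMode"

        player.play()
    }
}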
For let path = Bundle.main.path(forResource: "my_movie", ofType: "mp4"), make sure you add the .mp4 file to Build Phases > Copy Bundle Resources in your Xcode project, or check the 'add to target' boxes when you import the file.

There's another alternative: use an AVCaptureSession to create instances of CIImage to which you can apply CIFilters (of which there are loads, from blurs to color correction to VFX).
Here's an example using the ComicBook effect. In a nutshell, create an AVCaptureSession:
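Something along these lines (a sketch in current Swift; the original code was written against Swift 2):

import AVFoundation

// Keep the session as a property so it isn't deallocated while running.
let captureSession = AVCaptureSession()
captureSession.sessionPreset = .photo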
Create an AVCaptureDevice to represent the camera, here I'm setting the back camera:
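For instance, using the current default(_:for:position:) lookup rather than the Swift 2 era API:

// The built-in wide-angle camera on the back of the device.
guard let backCamera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .back) else {
    return // no back camera available (e.g. in the simulator)
}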
Then create a concrete implementation of the device and attach it to the session. In Swift 2, instantiating AVCaptureDeviceInput can throw an error, so we need to catch that:
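Roughly like this (error handling kept minimal for the sketch):

do {
    // Creating the input can throw, e.g. if camera access has been denied.
    let input = try AVCaptureDeviceInput(device: backCamera)
    captureSession.addInput(input)
} catch {
    print("can't access the camera: \(error)")
    return
}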
Now, here's a little 'gotcha': although we don't actually use an AVCaptureVideoPreviewLayer, it's required to get the sample delegate working, so we create one of those:
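So, even though it's never used directly, something like:

// The layer itself is never drawn into by us; it just needs to exist
// for the sample buffer delegate to start receiving frames.
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
view.layer.addSublayer(previewLayer)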
Next, we create a video output, AVCaptureVideoDataOutput which we'll use to access the video feed:
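That can be a single line:

let videoOutput = AVCaptureVideoDataOutput()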
Ensuring that self implements AVCaptureVideoDataOutputSampleBufferDelegate, we can set the sample buffer delegate on the video output:
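For example, delivering frames on a private queue (the queue label is arbitrary):

videoOutput.setSampleBufferDelegate(self,
                                    queue: DispatchQueue(label: "sample buffer delegate"))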
The video output is then attached to the capture session:
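Guarding with canAddOutput, as usual:

if captureSession.canAddOutput(videoOutput) {
    captureSession.addOutput(videoOutput)
}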
...and, finally, we start the capture session:
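Which is just:

// startRunning() is blocking, so in a real app you'd call it off the main thread.
captureSession.startRunning()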
Because we've set the delegate, captureOutput will be invoked with each frame capture. captureOutput is passed a sample buffer of type CMSampleBuffer and it just takes two lines of code to convert that data to a CIImage for Core Image to handle:
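With the current delegate signature, that looks roughly like this (the Swift 2 original used a different method name):

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    // Pull the pixel buffer out of the sample buffer...
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    // ...and wrap it in a CIImage for Core Image to work with.
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)

    // The filtering itself is shown in the next snippet.
}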
...and that image data is passed to our Comic Book effect which, in turn, is used to populate an image view:
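Continuing inside captureOutput; the imageView property is an assumption on my part (use whatever outlet you have):

// Run the frame through the CIComicEffect filter.
let comicEffect = CIFilter(name: "CIComicEffect")
comicEffect?.setValue(cameraImage, forKey: kCIInputImageKey)

guard let output = comicEffect?.outputImage else { return }
let filteredImage = UIImage(ciImage: output)

// We're on the sample buffer queue here, so hop back to the main
// queue before touching UIKit.
DispatchQueue.main.async {
    self.imageView.image = filteredImage
}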
I have the source code for this project available in my GitHub repo here.