Within our iOS app, we use custom CIFilter subclasses backed by Metal (CIKernel/CIColorKernel wrappers).
Assume we have a 4K video and a custom video composition with a 1080p output size that applies an advanced filter to the video buffers.
Obviously, we don't need to filter the video at its original size; doing so will probably get the app terminated with a memory warning (true story).
This is the video-filtering pipeline:

Get the buffer in 4K (as a CIImage) --> apply the filter to the CIImage --> the filter runs its CIKernel Metal function on the CIImage --> return the filtered CIImage to the composition.
The only two places I can think of applying the resize are before the image is sent into the filter process, or within the Metal function itself.
public class VHSFilter: CIFilter {

    @objc dynamic var inputImage: CIImage?

    public override var outputImage: CIImage? {
        // inputImage arrives at its full 4K size
        guard let inputImage = self.inputImage else { return nil }

        // Manipulate (resize?) the image here...
        // The ROI callback tells Core Image which region of the input
        // is needed to render a given output region; here, all of it.
        let roiCallback: CIKernelROICallback = { _, _ in
            inputImage.extent
        }

        // ...or resize inside the kernel's Metal function?
        // `kernel` is the compiled Metal CIKernel (setup omitted).
        let outputImage = self.kernel.apply(extent: inputImage.extent,
                                            roiCallback: roiCallback,
                                            arguments: [inputImage])
        return outputImage
    }
}
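For the first option (resizing before the kernel runs), the downscale could happen at the top of outputImage. A minimal sketch, assuming a hypothetical targetSize and using Core Image's built-in CILanczosScaleTransform filter:

```swift
import CoreImage

// Sketch: downscale `image` so its height matches `targetSize.height`,
// preserving aspect ratio. `targetSize` is an assumed parameter.
func downscaled(_ image: CIImage, to targetSize: CGSize) -> CIImage {
    let scale = targetSize.height / image.extent.height
    guard let filter = CIFilter(name: "CILanczosScaleTransform") else {
        return image
    }
    filter.setValue(image, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    filter.setValue(1.0, forKey: kCIInputAspectRatioKey)
    return filter.outputImage ?? image
}
```

Note that this only shrinks the image inside the Core Image graph; as discussed below in the question's context, the source buffers handed to the filter may still be allocated at full resolution.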
I'm sure I'm not the first one to encounter this issue.
What does one do when the incoming video buffers are too large (memory-wise) to filter and need to be resized on the fly, efficiently, without re-encoding the video first?
As warrenm says, you could use a CILanczosScaleTransform filter to downsample the video frames before processing. However, this would still cause AVFoundation to allocate buffers in full resolution.

I assume you use an AVMutableVideoComposition to do the filtering? In that case you can simply set the renderSize of the composition to the target size. This tells AVFoundation to resample the frames (efficiently, fast) before handing them to your filter pipeline.