This may be an obscure question, but I see lots of very cool samples online of how people are using the new ARKit people occlusion technology in ARKit 3 to effectively "separate" the people from the background, and apply some sort of filtering to the "people" (see here).
Looking at Apple's provided source code and documentation, I see that I can retrieve the segmentationBuffer from an ARFrame, which I've done, like so:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let image = frame.capturedImage
    if let segmentationBuffer = frame.segmentationBuffer {
        // Get the segmentation buffer's width.
        let segmentationWidth = CVPixelBufferGetWidth(segmentationBuffer)
        // Create the mask image from that pixel buffer.
        let segmentationMaskImage = CIImage(cvPixelBuffer: segmentationBuffer, options: [:])
        // Smooth edges to create an alpha matte, then upscale it to the RGB resolution.
        let alphaUpscaleFactor = Float(CVPixelBufferGetWidth(image)) / Float(segmentationWidth)
        let alphaMatte = segmentationMaskImage.clampedToExtent()
            .applyingFilter("CIGaussianBlur", parameters: ["inputRadius": 2.0])
            .cropped(to: segmentationMaskImage.extent)
            .applyingFilter("CIBicubicScaleTransform", parameters: ["inputScale": alphaUpscaleFactor])
        // Unknown...
    }
}
In the "unknown" section, I am trying to determine how I would render my new "blurred" person on top of the original camera feed. There does not seem to be any methods to draw the new CIImage on "top" of the original camera feed, as the ARView has no way of being manually updated.