I am using the iPhone camera to capture live video and feeding the pixel buffer to a network that does some object recognition. Here is the relevant code (I won't post the code for setting up the AVCaptureSession etc., as this is pretty standard):
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
    int doReverseChannels;
    if (kCVPixelFormatType_32ARGB == sourcePixelFormat) {
        doReverseChannels = 1;
    } else if (kCVPixelFormatType_32BGRA == sourcePixelFormat) {
        doReverseChannels = 0;
    } else {
        assert(false); // unsupported pixel format
    }
    const int sourceRowBytes = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);
    const int width = (int)CVPixelBufferGetWidth(pixelBuffer);
    const int fullHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    unsigned char *sourceBaseAddr = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);
    int height;
    unsigned char *sourceStartAddr;
    if (fullHeight <= width) {
        height = fullHeight;
        sourceStartAddr = sourceBaseAddr;
    } else {
        // Crop vertically to a centered square when the frame is taller than wide.
        height = width;
        const int marginY = ((fullHeight - width) / 2);
        sourceStartAddr = (sourceBaseAddr + (marginY * sourceRowBytes));
    }
    // ... the network consumes the buffer here; unlock when done:
    // CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
The network then takes sourceStartAddr, width, height, sourceRowBytes and doReverseChannels as inputs.
My question is the following: What would be the simplest and/or most efficient way to replace or delete a part of the image data with all white pixels? Is it possible to directly overwrite a portion of the pixel buffer data, and if yes, how?
I only have a very rudimentary understanding of how this pixel buffer works, so I apologize if I'm missing something very basic here. The question most closely related to mine that I found on Stack Overflow is this one, where an EAGLContext is used to add text to a video frame. While this would actually work for my objective, which only needs this replacement for single images, I assume this step would kill performance if applied to every video frame, and I would like to find out if there is another method. Any help here would be appreciated.
Here is an easy way to manipulate a CVPixelBufferRef without using other libraries like Core Graphics or OpenGL: lock the buffer's base address and write the new values directly into its memory. This overwrites the top-left patch of 100 x 100 pixels in the image with white pixels. I found this solution in the Apple Developer example called RosyWriter. Kind of amazed I didn't get any answers here considering how easy this turned out to be. Hope this helps someone.

Update, with a Swift implementation: since baseAddress gives an UnsafeMutableRawPointer, which does not support subscripting, you have to use storeBytes instead. That is basically the only key difference from the Objective-C version above.
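To make the direct-overwrite idea concrete, here is a minimal Swift sketch of the pattern described above. It is an illustration, not the original answer's code: the function name is mine, and it assumes a 4-bytes-per-pixel format such as kCVPixelFormatType_32BGRA, where writing 0xFF into every byte of a pixel yields opaque white regardless of channel order.

```swift
import CoreVideo

// Sketch only: overwrites the top-left 100 x 100 patch of `pixelBuffer`
// with white pixels. Assumes a 4-bytes-per-pixel format (e.g. 32BGRA).
func whiteOutTopLeftPatch(of pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let patchWidth  = min(100, CVPixelBufferGetWidth(pixelBuffer))
    let patchHeight = min(100, CVPixelBufferGetHeight(pixelBuffer))
    let bytesPerPixel = 4

    var row = 0
    while row < patchHeight {
        var column = 0
        while column < patchWidth {
            let offset = row * bytesPerRow + column * bytesPerPixel
            // UnsafeMutableRawPointer has no subscript, so write via storeBytes.
            // Storing a UInt32 assumes 4-byte alignment, which CoreVideo's
            // row strides satisfy in practice.
            baseAddress.storeBytes(of: 0xFFFFFFFF as UInt32,
                                   toByteOffset: offset,
                                   as: UInt32.self)
            column += 1
        }
        row += 1
    }
}
```

You would call this from captureOutput after obtaining the pixel buffer from the sample buffer, before handing it to the network.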
I had to process frames from the iPhone camera using captureOutput and CVPixelBuffer. I used your code (thanks!) to loop over about 200k pixels in the pixel buffer at 15 frames per second, but I constantly had issues with dropped frames. It turned out that in Swift a while loop is about 10x faster than a for ... in loop: the for ... in version took about 0.09 sec per frame, while the equivalent while version took about 0.01 sec.
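As an illustration, the two loop styles look like this (pixelCount and process are placeholders, not from the original code; the timings are the commenter's own measurements, and the gap may be smaller in optimized release builds):

```swift
// Hypothetical per-pixel work; names are illustrative.
var checksum = 0

func process(pixelAt index: Int) {
    checksum &+= index
}

let pixelCount = 200_000

// Reported slower (~0.09 sec for ~200k pixels): iterating a Range with for ... in
for i in 0..<pixelCount {
    process(pixelAt: i)
}

// Reported faster (~0.01 sec): a plain while loop with a manual counter
var i = 0
while i < pixelCount {
    process(pixelAt: i)
    i += 1
}
```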