Replace Part of Pixel Buffer with White Pixels in iOS

Posted 2020-06-05 05:48

I am using the iPhone camera to capture live video and feeding the pixel buffer to a network that does some object recognition. Here is the relevant code: (I won't post the code for setting up the AVCaptureSession etc. as this is pretty standard.)

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType( pixelBuffer );
    int doReverseChannels;
    if ( kCVPixelFormatType_32ARGB == sourcePixelFormat ) {
        doReverseChannels = 1;
    } else if ( kCVPixelFormatType_32BGRA == sourcePixelFormat ) {
        doReverseChannels = 0;
    } else {
        assert(false);
    }

    const int sourceRowBytes = (int)CVPixelBufferGetBytesPerRow( pixelBuffer );
    const int width = (int)CVPixelBufferGetWidth( pixelBuffer );
    const int fullHeight = (int)CVPixelBufferGetHeight( pixelBuffer );
    CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
    unsigned char* sourceBaseAddr = (unsigned char*)CVPixelBufferGetBaseAddress( pixelBuffer );
    int height;
    unsigned char* sourceStartAddr;
    if (fullHeight <= width) {
        height = fullHeight;
        sourceStartAddr = sourceBaseAddr;
    } else {
        // Crop to a vertically centered square by skipping the top margin rows.
        height = width;
        const int marginY = ((fullHeight - width) / 2);
        sourceStartAddr = (sourceBaseAddr + (marginY * sourceRowBytes));
    }
}

The network then takes sourceStartAddr, width, height, sourceRowBytes & doReverseChannels as inputs.

My question is the following: what would be the simplest and/or most efficient way to replace or delete part of the image data with all-white pixels? Is it possible to directly overwrite a portion of the pixel buffer data, and if so, how?

I only have a very rudimentary understanding of how this pixel buffer works, so I apologize if I'm missing something very basic here. The question most closely related to mine that I found on Stack Overflow was this one, where an EAGLContext is used to add text to a video frame. While this would actually work for my objective, which only needs this replacement for single images, I assume this step would kill performance if applied to every video frame, and I would like to find out if there is another method. Any help here would be appreciated.

3 Answers
女痞
#2 · 2020-06-05 06:22

Updating this with a Swift implementation:

        CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        defer {
            CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        }

        let bufferWidth = CVPixelBufferGetWidth(pixelBuffer)
        let bufferHeight = CVPixelBufferGetHeight(pixelBuffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)

        guard let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer) else {
            return
        }

        for row in 0..<bufferHeight {
            // Start of this row; each pixel is 4 bytes in BGRA order.
            var pixel = baseAddress + row * bytesPerRow
            for _ in 0..<bufferWidth {
                pixel.storeBytes(of: 255, as: UInt8.self)        // blue
                (pixel + 1).storeBytes(of: 255, as: UInt8.self)  // green
                (pixel + 2).storeBytes(of: 255, as: UInt8.self)  // red
                (pixel + 3).storeBytes(of: 255, as: UInt8.self)  // alpha
                pixel += 4
            }
        }

Since CVPixelBufferGetBaseAddress returns an UnsafeMutableRawPointer, which does not support subscripting, you have to use storeBytes(of:as:) instead. That is basically the only key difference from the Objective-C version in the answer below.
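If you prefer subscripting, here is a minimal sketch of an alternative (not from the original answer): assumingMemoryBound(to:) turns the raw pointer into a typed UnsafeMutablePointer<UInt8>, which does support subscripting:

    // Sketch: bind the raw base address to UInt8 so plain subscripting works,
    // mirroring the pointer arithmetic of the Objective-C answer below.
    let bytes = baseAddress.assumingMemoryBound(to: UInt8.self)
    for row in 0..<bufferHeight {
        let rowStart = row * bytesPerRow
        for col in 0..<bufferWidth {
            let p = rowStart + col * 4  // 4 bytes per BGRA pixel
            bytes[p]     = 255  // blue
            bytes[p + 1] = 255  // green
            bytes[p + 2] = 255  // red
            bytes[p + 3] = 255  // alpha
        }
    }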

神经病院院长
#3 · 2020-06-05 06:26

Here is an easy way to manipulate a CVPixelBufferRef without using other libraries like Core Graphics or OpenGL:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    const int kBytesPerPixel = 4;
    CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
    int bufferWidth = (int)CVPixelBufferGetWidth( pixelBuffer );
    int bufferHeight = (int)CVPixelBufferGetHeight( pixelBuffer );
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow( pixelBuffer );
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress( pixelBuffer );

    for ( int row = 0; row < bufferHeight; row++ )
    {
        uint8_t *pixel = baseAddress + row * bytesPerRow;
        for ( int column = 0; column < bufferWidth; column++ )
        {
            if ((row < 100) && (column < 100)) {
                pixel[0] = 255; // BGRA, Blue value
                pixel[1] = 255; // Green value
                pixel[2] = 255; // Red value
            }
            pixel += kBytesPerPixel;
        }
    }

    CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );

    // Do whatever needs to be done with the pixel buffer
}

This overwrites the top left patch of 100 x 100 pixels in the image with white pixels.
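A side note beyond the original answer: white in BGRA is 0xFF in all four bytes (alpha included), so a solid rectangle like this can also be filled one row at a time with memset instead of a per-pixel loop. A minimal Swift sketch, assuming the baseAddress, bytesPerRow, bufferWidth and bufferHeight variables from the Swift answer above:

    // Sketch: fill a 100 x 100 top-left patch with 0xFF, one row per memset call.
    let patchWidth = 100
    let patchHeight = 100
    for row in 0..<min(patchHeight, bufferHeight) {
        memset(baseAddress + row * bytesPerRow, 0xFF, min(patchWidth, bufferWidth) * 4)
    }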

I found this solution in Apple's developer sample code called RosyWriter.

Kind of amazed I didn't get any answers here considering how easy this turned out to be. Hope this helps someone.

ゆ 、 Hurt°
#4 · 2020-06-05 06:37

I had to process frames from the iPhone camera using captureOutput and CVPixelBuffer. I used your code (thanks!) to loop over about 200k pixels in the pixel buffer at 15 frames per second, but I constantly had issues with dropped frames. It turned out that in Swift a while loop is about 10x faster than a for ... in loop.

Like:

0.09 sec:

    for row in 0..<bufferHeight {
        for col in 0..<bufferWidth {
            // process pixels
        }
    }

0.01 sec:

    var x = 0
    var y = 0
    while y < bufferHeight {
        x = 0
        while x < bufferWidth {
            // process pixels
            x += 1
        }
        y += 1
    }
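For concreteness, here is a sketch of the whitening loop from the answers above rewritten in the faster while form (same BGRA and locked-buffer assumptions):

    var y = 0
    while y < bufferHeight {
        // One BGRA pixel is 4 bytes; walk the row with raw-pointer arithmetic.
        var pixel = baseAddress + y * bytesPerRow
        var x = 0
        while x < bufferWidth {
            pixel.storeBytes(of: 255, as: UInt8.self)        // blue
            (pixel + 1).storeBytes(of: 255, as: UInt8.self)  // green
            (pixel + 2).storeBytes(of: 255, as: UInt8.self)  // red
            (pixel + 3).storeBytes(of: 255, as: UInt8.self)  // alpha
            pixel += 4
            x += 1
        }
        y += 1
    }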