Capturing a still image from the camera into an OpenCV Mat

Posted 2019-04-11 21:02

I am developing an iOS application and trying to get a still image snapshot from the camera using a capture session, but I'm unable to convert it successfully to an OpenCV Mat.

The still image output is created using this code:

- (void)createStillImageOutput;
{
    // setup still image output with jpeg codec
    self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [self.stillImageOutput setOutputSettings:outputSettings];
    [self.captureSession addOutput:self.stillImageOutput];

    for (AVCaptureConnection *connection in self.stillImageOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([port.mediaType isEqual:AVMediaTypeVideo]) {
                self.videoCaptureConnection = connection;
                break;
            }
        }
        if (self.videoCaptureConnection) {
            break;
        }
    }
    NSLog(@"[Camera] still image output created");
}

And then attempting to capture a still image using this code:

[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:self.videoCaptureConnection
                                                   completionHandler:
 ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
 {
     if (error == nil && imageSampleBuffer != NULL)
     {
         NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
     }
 }];

I need a way to create an OpenCV Mat from the pixel data in the buffer. I have tried creating a Mat using this code, which I took from the OpenCV CvVideoCamera class:

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void* bufferAddress;
    size_t width;
    size_t height;
    size_t bytesPerRow;

    int format_opencv;

    OSType format = CVPixelBufferGetPixelFormatType(imageBuffer);
    if (format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {

        format_opencv = CV_8UC1;

        bufferAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
        height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
        bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);

    } else { // expect kCVPixelFormatType_32BGRA

        format_opencv = CV_8UC4;

        bufferAddress = CVPixelBufferGetBaseAddress(imageBuffer);
        width = CVPixelBufferGetWidth(imageBuffer);
        height = CVPixelBufferGetHeight(imageBuffer);
        bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    }

    // Note: the Mat wraps (does not copy) the buffer's memory; call
    // CVPixelBufferUnlockBaseAddress(imageBuffer, 0) once done with it.
    cv::Mat image(height, width, format_opencv, bufferAddress, bytesPerRow);

but the CVPixelBufferGetWidth and CVPixelBufferGetHeight calls fail to return the actual width and height of the image, and so the creation of the Mat fails.
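As an aside not stated in the original thread: with JPEG output settings the sample buffer carries compressed data rather than raw pixels, which would explain the failure above. A hedged alternative sketch is to request uncompressed 32BGRA output instead, so that the wrapping code above receives a real pixel buffer:

    // Sketch: ask AVCaptureStillImageOutput for uncompressed BGRA frames
    // instead of JPEG, so CMSampleBufferGetImageBuffer returns a
    // CVPixelBufferRef that the CV_8UC4 branch above can wrap directly.
    NSDictionary *outputSettings = @{
        (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
    };
    [self.stillImageOutput setOutputSettings:outputSettings];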

I'm aware that I can create a UIImage based on the pixel data using this code:

    UIImage* newImage = [UIImage imageWithData:jpegData];

But I would rather construct a cv::Mat directly, as the OpenCV CvVideoCamera class does, because I only need to handle the image in OpenCV and I don't want to spend time converting again, risk losing quality, or run into orientation issues. (In any case, the UIImage-to-Mat conversion function provided by OpenCV is causing me memory leaks and not freeing the memory.)
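One way to go from the captured JPEG NSData straight to a cv::Mat without a UIImage round trip, sketched here under the assumption that OpenCV's imgcodecs module is linked, is cv::imdecode:

    // Wrap the NSData bytes (no copy), then decode the JPEG into BGR pixels.
    cv::Mat jpegBuffer(1, (int)jpegData.length, CV_8UC1, (void *)jpegData.bytes);
    cv::Mat image = cv::imdecode(jpegBuffer, cv::IMREAD_COLOR);
    if (image.empty()) {
        NSLog(@"[Camera] JPEG decode failed");
    }

cv::imdecode copies the decoded pixels into the Mat, so jpegData can be released afterwards.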

Please advise how I can get the image as an OpenCV Mat. Thanks in advance.

1 Answer

Fickle 薄情 · 2019-04-11 21:10

I have found the solution to my problem.

The solution is to override the "createVideoPreviewLayer" method in the OpenCV camera class so that it looks like this:

- (void)createVideoPreviewLayer;
{

    self.parentView.layer.sublayers = nil;
    if (captureVideoPreviewLayer == nil) {
        captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc]
        initWithSession:self.captureSession];
    }

    if (self.parentView != nil) {
        captureVideoPreviewLayer.frame = self.parentView.bounds;
        captureVideoPreviewLayer.videoGravity =
        AVLayerVideoGravityResizeAspectFill;
        [self.parentView.layer addSublayer:captureVideoPreviewLayer];
    }
    NSLog(@"[Camera] created AVCaptureVideoPreviewLayer");
}

Adding this line to the "createVideoPreviewLayer" method is what solves the problem:

    self.parentView.layer.sublayers = nil;

You also need to use the pause method instead of the stop method.
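As a minimal illustration of that last point (a sketch; `videoCamera` is a hypothetical reference to the OpenCV camera instance, and the pause method is the one the answer refers to):

    // Pause instead of stop when leaving the view, so the capture
    // session and the preview layer set up above are not torn down:
    [self.videoCamera pause];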
