Raw image data from camera like “645 PRO”

Posted 2019-01-07 14:42

Question:

A while ago I asked this question, and I also received a good answer:

I've been searching this forum up and down, but I couldn't find what I really need. I want to get the raw image data from the camera. So far I've tried to get the data out of the imageDataSampleBuffer from the method captureStillImageAsynchronouslyFromConnection:completionHandler: and write it to an NSData object, but that didn't work. Maybe I'm on the wrong track, or maybe I'm just doing it wrong. What I don't want is for the image to be compressed in any way.

The easy way is to use jpegStillImageNSDataRepresentation: from AVCaptureStillImageOutput, but as I said, I don't want it to be compressed.
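For illustration, the compressed path I'm talking about would look something like this inside the completion handler (stillImageOutput and connection stand in for your own configured objects):

[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    // jpegStillImageNSDataRepresentation: hands back already-compressed
    // JPEG data, which is exactly what I want to avoid
    NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
}];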

Thanks!

Raw image data from camera

I thought I could work with this, but I eventually realized that I need to get the raw image data more directly, similar to how it is done in "645 PRO".

645 PRO: RAW Redux

The pictures on that site show that they get the raw data before any JPEG compression is done. That is what I want to do. My guess is that I need to transform the imageDataSampleBuffer, but I don't see a way to do it completely without compression. "645 PRO" also saves its pictures as TIFF, so I think it uses at least one additional library.

I don't want to make a photo app, but I need the best quality I can get to check for certain features in a picture.

Thanks!

Edit 1: After trying and searching in different directions for a while, I decided to give a status update.

The final goal of this project is to check for certain features in a picture, which will happen with the help of OpenCV. But until the app is able to do that on the phone, I'm trying to get mostly uncompressed pictures out of the phone to analyse them on the computer.

Therefore I want to save the "NSData instance containing the uncompressed BGRA bytes returned from the camera" that I'm able to get with Brad Larson's code as a BMP or TIFF file. As I said in a comment, I tried using OpenCV for this (it will be needed anyway). But the best I could do was turn it into a UIImage with a function from Computer Vision Talks:

// imageBuffer is the CVImageBufferRef from the sample buffer; its base
// address has to be locked before reading it
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
cv::Mat frame(height, width, CV_8UC4, baseAddress); // height/width via CVPixelBufferGetHeight/CVPixelBufferGetWidth
UIImage *testImage = [UIImage imageWithMat:frame andImageOrientation:UIImageOrientationUp];
// imageWithMat:andImageOrientation: is the function from Computer Vision Talks, which I can post if someone wants to see it
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

ImageMagick approach

Another thing I tried was using ImageMagick, as suggested in another post. But I couldn't find a way to do it without using something like UIImagePNGRepresentation or UIImageJPEGRepresentation.
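In principle the MagickWand C API looks like it could take the raw bytes directly, with no UIImage round-trip. An untested sketch of that idea (it assumes the buffer has no row padding, i.e. bytesPerRow == width * 4, with width, height, rawImageBytes and filePath as in the code above):

#include <wand/MagickWand.h>

// Untested sketch: feed the raw BGRA bytes straight into a MagickWand
MagickWandGenesis();
MagickWand *wand = NewMagickWand();
MagickConstituteImage(wand, width, height, "BGRA", CharPixel, rawImageBytes);
MagickSetImageCompression(wand, NoCompression);
MagickWriteImage(wand, [filePath UTF8String]); // format inferred from the extension
DestroyMagickWand(wand);
MagickWandTerminus();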

For now I'm trying to do something with libtiff using this tutorial.
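The rough shape of the libtiff route, going by that tutorial (only a sketch: writeTiff and its parameter names are mine, and the BGRA channel order would still need swapping to get correct colours in the TIFF):

#include "tiffio.h"

// Sketch: write one uncompressed 8-bit, 4-sample scanline per image row
static void writeTiff(const char *path, const unsigned char *bgra, int width, int height, size_t bytesPerRow)
{
    TIFF *tiff = TIFFOpen(path, "w");
    if (!tiff) return;
    TIFFSetField(tiff, TIFFTAG_IMAGEWIDTH, width);
    TIFFSetField(tiff, TIFFTAG_IMAGELENGTH, height);
    TIFFSetField(tiff, TIFFTAG_SAMPLESPERPIXEL, 4);
    TIFFSetField(tiff, TIFFTAG_BITSPERSAMPLE, 8);
    TIFFSetField(tiff, TIFFTAG_ORIENTATION, ORIENTATION_TOPLEFT);
    TIFFSetField(tiff, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
    TIFFSetField(tiff, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_RGB);
    TIFFSetField(tiff, TIFFTAG_COMPRESSION, COMPRESSION_NONE);
    for (int row = 0; row < height; row++) {
        // bytesPerRow accounts for any row padding in the source buffer
        TIFFWriteScanline(tiff, (void *)(bgra + row * bytesPerRow), row, 0);
    }
    TIFFClose(tiff);
}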

Maybe someone has an idea or knows a much easier way to convert my buffer object into an uncompressed picture. Thanks in advance again!

Edit 2:

I found something! And I must say I was very blind.

void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
cv::Mat frame(height, width, CV_8UC4, baseAddress);

// Build a path in the app's Documents directory; imwrite infers the
// output format from the file extension
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *filePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"ocv%d.TIFF", picNum]];

cv::imwrite([filePath UTF8String], frame);

I just have to use the imwrite function from OpenCV. This way I get TIFF files of around 30 MB, straight from the data after the Bayer filter!

Answer 1:

Wow, that blog post was something special. A whole lot of words just to state that they get the sample buffer bytes that Apple hands you back for a still image. There's nothing particularly innovative about their approach, and I know a number of camera applications that do this.

You can get at the raw bytes returned from a photo taken with an AVCaptureStillImageOutput using code like the following:

[photoOutput captureStillImageAsynchronouslyFromConnection:[[photoOutput connections] objectAtIndex:0] completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
    // The buffer's memory must be locked before it is read
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    GLubyte *rawImageBytes = (GLubyte *)CVPixelBufferGetBaseAddress(cameraFrame);
    // Note: bytesPerRow can be larger than width * 4 because of row padding
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
    NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];
    // Do whatever you want with your bytes here

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}];

This will give you an NSData instance containing the uncompressed BGRA bytes returned from the camera. You can save these to disk or do whatever you want with them. If you really need to process the bytes themselves, I'd avoid the overhead of the NSData creation and just work with the byte array from the pixel buffer.
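If the goal is just to analyse the bytes on a computer, you can also dump that NSData to disk as-is, inside the completion handler above (a minimal sketch; the file name is illustrative, and you'll need to record width, height and bytesPerRow separately to reinterpret the dump later):

// Sketch: persist the raw BGRA bytes for offline analysis
NSString *documents = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *rawPath = [documents stringByAppendingPathComponent:@"frame.bgra"]; // illustrative name
[dataForRawBytes writeToFile:rawPath atomically:YES];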



Answer 2:

I was able to solve it with OpenCV. Thanks to everyone who helped me.

void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
cv::Mat frame(height, width, CV_8UC4, baseAddress);

// Build a path in the app's Documents directory; imwrite infers the
// output format from the file extension
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *filePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"ocv%d.BMP", picNum]];

cv::imwrite([filePath UTF8String], frame);

I just have to use the imwrite function from OpenCV. This way I get BMP files of around 24 MB, straight from the data after the Bayer filter!



Answer 3:

While the core of the answer comes from Brad at iOS: Get pixel-by-pixel data from camera, a key element is left completely unclear by Brad's reply: it's hidden in "once you have your capture session configured...".

You need to set the correct outputSettings for your AVCaptureStillImageOutput.

For example, setting kCVPixelBufferPixelFormatTypeKey to kCVPixelFormatType_420YpCbCr8BiPlanarFullRange will give you a YCbCr imageDataSampleBuffer in captureStillImageAsynchronouslyFromConnection:completionHandler:, which you can then manipulate to your heart's content.
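A minimal sketch of that configuration (stillImageOutput and captureSession stand in for your own instances):

AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
// Ask for uncompressed YCbCr buffers instead of the default JPEG output;
// use @(kCVPixelFormatType_32BGRA) here if you want BGRA instead
stillImageOutput.outputSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
};
[captureSession addOutput:stillImageOutput];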



Answer 4:

As @Wildaker mentioned, for specific code to work you have to be sure which pixel format the camera is sending you. The code from @thomketler will work if it's set to the 32-bit BGRA format.
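One way to check at runtime which format you actually received (a small sketch; cameraFrame is the CVImageBufferRef pulled out of the sample buffer):

OSType format = CVPixelBufferGetPixelFormatType(cameraFrame);
if (format == kCVPixelFormatType_32BGRA) {
    // 32-bit BGRA: the CV_8UC4 approach from the other answers applies
} else if (format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
    // bi-planar YUV: use the conversion below
}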

Here is code for the camera's YUV default, using OpenCV:

cv::Mat convertImage(CMSampleBufferRef sampleBuffer)
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);

    int w = (int)CVPixelBufferGetWidth(cameraFrame);
    int h = (int)CVPixelBufferGetHeight(cameraFrame);
    void *baseAddress = CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0);

    // Wrap the bi-planar buffer as a single (h + h/2)-row, one-channel Mat.
    // This assumes the chroma plane follows the luma plane contiguously
    // and that the rows have no padding.
    cv::Mat img_buffer(h + h/2, w, CV_8UC1, (uchar *)baseAddress);
    cv::Mat cam_frame;
    // iOS's 420YpCbCr8BiPlanar formats interleave chroma as CbCr (NV12),
    // so the NV12 conversion is the right one here, not NV21
    cv::cvtColor(img_buffer, cam_frame, cv::COLOR_YUV2BGR_NV12);
    cam_frame = cam_frame.t(); // transpose to fix the orientation

    // End processing; cam_frame owns its own copy of the data, so it is
    // safe to unlock the pixel buffer before returning
    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

    return cam_frame;
}

cam_frame should contain the full BGR frame.
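For example, a hypothetical call site (imageSampleBuffer comes from the capture completion handler, and filePath is a path you choose):

cv::Mat bgr = convertImage(imageSampleBuffer);
cv::imwrite([filePath UTF8String], bgr); // e.g. save the converted frame

I hope that helps.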