Raw image data from camera like "645 PRO"

Published 2019-06-17 13:05

A while ago I asked this question, and I also got a good answer:

I've been searching this forum up and down but I couldn't find what I really need. I want to get raw image data from the camera. Up till now I tried to get the data out of the imageDataSampleBuffer from that method captureStillImageAsynchronouslyFromConnection:completionHandler: and to write it to an NSData object, but that didn't work. Maybe I'm on the wrong track or maybe I'm just doing it wrong. What I don't want is for the image to be compressed in any way.

The easy way is to use jpegStillImageNSDataRepresentation: from AVCaptureStillImageOutput, but like I said I don't want it to be compressed.

Thanks!

Raw image data from camera

I thought I could work with this, but I finally noticed that I need to get raw image data more directly in a similar way as it is done in "645 PRO".

645 PRO: RAW Redux

The pictures on that site show that they get the raw data before any jpeg compression is done. That is what I want to do. My guess is that I need to transform imageDataSampleBuffer but I don't see a way to do it completely without compression. "645 PRO" also saves its pictures in TIFF so I think it uses at least one additional library.

I don't want to make a photo app, but I need the best quality I can get to check for certain features in a picture.

Thanks!

Edit 1: After trying and searching in different directions for a while, I decided to give a status update.

The final goal of this project is to check for certain features in a picture, which will be done with the help of OpenCV. But until the app is able to do that on the phone, I'm trying to get mostly uncompressed pictures out of the phone to analyse them on the computer.

Therefore I want to save the "NSData instance containing the uncompressed BGRA bytes returned from the camera" that I'm able to get with Brad Larson's code as a BMP or TIFF file. As I said in a comment, I tried using OpenCV for this (it will be needed anyway), but the best I could do was turn it into a UIImage with a function from Computer Vision Talks:

// imageBuffer is the CVImageBufferRef from the sample buffer;
// width and height come from CVPixelBufferGetWidth/CVPixelBufferGetHeight
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
cv::Mat frame(height, width, CV_8UC4, (void *)baseAddress);
UIImage *testImage = [UIImage imageWithMat:frame andImageOrientation:UIImageOrientationUp];
// imageWithMat:andImageOrientation: is the function from Computer Vision Talks,
// which I can post if someone wants to see it

ImageMagick approach

Another thing I tried was using ImageMagick, as suggested in another post. But I couldn't find a way to do it without using something like UIImagePNGRepresentation or UIImageJPEGRepresentation.

For now I'm trying to do something with libtiff using this tutorial.

Maybe someone has an idea or knows a much easier way to convert my buffer object into an uncompressed picture. Thanks in advance again!

Edit 2:

I found something! And I must say I was very blind.

void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
cv::Mat frame(height, width, CV_8UC4, (void *)baseAddress);

NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *filePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"ocv%d.TIFF", picNum]];
const char *cPath = [filePath cStringUsingEncoding:NSMacOSRomanStringEncoding];

cv::imwrite(cv::string(cPath), frame);

I just have to use the imwrite function from OpenCV. This way I get TIFF files of around 30 MB directly after the Bayer filtering!

Answer 1:

Wow, that blog post is something special. A whole lot of words just to state that they get the bytes of the sample buffer that Apple hands you back from a still image. There's nothing particularly innovative about their approach, and I know a number of camera applications that do this.

You can get the raw bytes back from a photo taken with an AVCaptureStillImageOutput using code like the following:

[photoOutput captureStillImageAsynchronouslyFromConnection:[[photoOutput connections] objectAtIndex:0] completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
    NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];
    // Do whatever with your bytes

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}];

This will give you an NSData instance containing the uncompressed BGRA bytes returned from the camera. You can save these to disk or do whatever you want with them. If you really need to process the bytes yourself, I'd avoid the overhead of the NSData creation and just work with the byte array from the pixel buffer.



Answer 2:

I was able to solve it with OpenCV. Thanks to everyone who helped me.

void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
cv::Mat frame(height, width, CV_8UC4, (void *)baseAddress);

NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *filePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"ocv%d.BMP", picNum]];
const char *cPath = [filePath cStringUsingEncoding:NSMacOSRomanStringEncoding];

cv::imwrite(cv::string(cPath), frame);

I just had to use the imwrite function from OpenCV. This way I get BMP files of around 24 MB directly after the Bayer filtering!



Answer 3:

While the core of the answer comes from Brad's iOS: Get pixel-by-pixel data from camera, a key element is completely unclear from Brad's reply. It's hidden in "once you have your capture session configured...".

You need to set the right outputSettings on your AVCaptureStillImageOutput.

For example, setting kCVPixelBufferPixelFormatTypeKey to kCVPixelFormatType_420YpCbCr8BiPlanarFullRange will give you a YCbCr imageDataSampleBuffer in captureStillImageAsynchronouslyFromConnection:completionHandler:, which you can then manipulate to your heart's content.



Answer 4:

As @Wildaker mentioned, for a specific piece of code to work you have to be sure which pixel format the camera is sending you. The code from @thomketler will work if the camera is set to the 32-bit RGBA format.

Here is the code for the camera's YUV default, using OpenCV:

cv::Mat convertImage(CMSampleBufferRef sampleBuffer)
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);

    int w = (int)CVPixelBufferGetWidth(cameraFrame);
    int h = (int)CVPixelBufferGetHeight(cameraFrame);
    void *baseAddress = CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0);

    cv::Mat img_buffer(h+h/2, w, CV_8UC1, (uchar *)baseAddress);
    cv::Mat cam_frame;
    cv::cvtColor(img_buffer, cam_frame, cv::COLOR_YUV2BGR_NV21);
    cam_frame = cam_frame.t();

    //End processing
    CVPixelBufferUnlockBaseAddress( cameraFrame, 0 );

    return cam_frame;
}

cam_frame should hold the full BGR frame. I hope that helps.



Source: Raw image data from camera like "645 PRO"