iOS6 : How to use the conversion feature of YUV to RGB

Posted 2019-03-27 00:31

Since iOS 6, Apple has provided support for creating a CIImage directly from native YUV data through this call:

initWithCVPixelBuffer:options:

The Core Image Programming Guide mentions this feature:

Take advantage of the support for YUV images in iOS 6.0 and later. Camera pixel buffers are natively YUV, but most image processing algorithms expect RGBA data. There is a cost to converting between the two. Core Image supports reading YUV from CVPixelBuffer objects and applying the appropriate color transform.

options = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };

But I am unable to use it properly. I have raw YUV data, so this is what I did:

void *YUV[3] = {data[0], data[1], data[2]};
size_t planeWidth[3] = {width, width/2, width/2};
size_t planeHeight[3] = {height, height/2, height/2};
size_t planeBytesPerRow[3] = {stride, stride/2, stride/2};

CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault,
                   width,
                   height,
                   kCVPixelFormatType_420YpCbCr8PlanarFullRange,
                   NULL,  // dataPtr: the planes are not one contiguous block
                   0,     // dataSize: unused when dataPtr is NULL
                   3,
                   YUV,
                   planeWidth,
                   planeHeight,
                   planeBytesPerRow,
                   NULL,  // releaseCallback
                   NULL,  // releaseRefCon
                   NULL,  // pixelBufferAttributes
                   &pixelBuffer);

NSDictionary *opt = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                           @(kCVPixelFormatType_420YpCbCr8PlanarFullRange) };

CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];

I am getting nil for image. Any idea what I am missing?

EDIT: I added locking and unlocking of the base address around the call. I also dumped the contents of the pixel buffer to ensure it properly holds the data, so it looks like something is wrong with the init call only. The CIImage object is still returning nil.

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

2 Answers
趁早两清 · 2019-03-27 00:51

I am working on a similar problem and kept finding that same quote from Apple without any further information on how to work in a YUV color space. I came upon the following:

By default, Core Image assumes that processing nodes are 128 bits-per-pixel, linear light, premultiplied RGBA floating-point values that use the GenericRGB color space. You can specify a different working color space by providing a Quartz 2D CGColorSpace object. Note that the working color space must be RGB-based. If you have YUV data as input (or other data that is not RGB-based), you can use ColorSync functions to convert to the working color space. (See Quartz 2D Programming Guide for information on creating and using CGColorSpace objects.) With 8-bit YUV 4:2:2 sources, Core Image can process 240 HD layers per gigabyte. Eight-bit YUV is the native color format for video sources such as DV, MPEG, uncompressed D1, and JPEG. You need to convert YUV color spaces to an RGB color space for Core Image.

I note that there are no YUV color spaces, only Gray and RGB (and their calibrated cousins). I'm not sure how to convert the color space yet, but will certainly report back here if I find out.
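In the meantime, converting the pixel data itself on the CPU is straightforward, since the YUV-to-RGB math is simple. Below is a minimal sketch of a full-range BT.601 conversion; it assumes 8-bit 4:2:0 planar input laid out like the data/stride variables in the question, and YUV420ToRGBA is just an illustrative name, not an existing API:

#include <stdint.h>
#include <stddef.h>

static inline uint8_t Clamp255(int v) { return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

// data[0] = Y plane (stride bytes per row); data[1] = Cb, data[2] = Cr
// (stride/2 bytes per row, as in the question). rgba must hold
// width * height * 4 bytes, caller-allocated.
static void YUV420ToRGBA(uint8_t *data[3], size_t width, size_t height,
                         size_t stride, uint8_t *rgba)
{
    for (size_t y = 0; y < height; y++) {
        const uint8_t *yRow  = data[0] + y * stride;
        const uint8_t *cbRow = data[1] + (y / 2) * (stride / 2);
        const uint8_t *crRow = data[2] + (y / 2) * (stride / 2);
        for (size_t x = 0; x < width; x++) {
            int Y  = yRow[x];
            int Cb = cbRow[x / 2] - 128;   // chroma is subsampled 2x2
            int Cr = crRow[x / 2] - 128;
            uint8_t *p = rgba + (y * width + x) * 4;
            p[0] = Clamp255(Y + (int)(1.402 * Cr));                    // R
            p[1] = Clamp255(Y - (int)(0.344136 * Cb + 0.714136 * Cr)); // G
            p[2] = Clamp255(Y + (int)(1.772 * Cb));                    // B
            p[3] = 255;                                                // A
        }
    }
}

The resulting buffer can then be wrapped in a CIImage with imageWithBitmapData:bytesPerRow:size:format:colorSpace:, passing kCIFormatRGBA8 and an RGB color space.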

劳资没心，怎么记你 · 2019-03-27 01:10

There should be an error message in the console: initWithCVPixelBuffer failed because the CVPixelBufferRef is not IOSurface backed. See Apple's Technical Q&A QA1781 for how to create an IOSurface-backed CVPixelBuffer:

Calling CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() will result in CVPixelBuffers that are not IOSurface-backed...

...To do that, you must specify kCVPixelBufferIOSurfacePropertiesKey in the pixelBufferAttributes dictionary when creating the pixel buffer using CVPixelBufferCreate().

NSDictionary *pixelBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSDictionary dictionary], (id)kCVPixelBufferIOSurfacePropertiesKey,
    nil];
// you may add other keys as appropriate, e.g. kCVPixelBufferPixelFormatTypeKey,
// kCVPixelBufferWidthKey, kCVPixelBufferHeightKey, etc.

CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(... (CFDictionaryRef)pixelBufferAttributes, &pixelBuffer);

Alternatively, you can make IOSurface-backed CVPixelBuffers using CVPixelBufferPoolCreatePixelBuffer() from an existing pixel buffer pool, if the pixelBufferAttributes dictionary provided to CVPixelBufferPoolCreate() includes kCVPixelBufferIOSurfacePropertiesKey.
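Putting the pieces together for the original question, here is a minimal sketch, assuming ARC and the same data/width/height/stride variables as in the question (CIImageFromYUVPlanes is just an illustrative name). It creates an IOSurface-backed buffer with CVPixelBufferCreate as QA1781 describes, copies the Y plane, and interleaves Cb/Cr into the second plane, since the bi-planar format is the one the Core Image guide's options snippet refers to:

#import <CoreImage/CoreImage.h>
#import <CoreVideo/CoreVideo.h>
#include <string.h>

// Illustrative helper (not an existing API). data[0..2] are the Y, Cb, Cr
// planes; stride is the Y plane's bytes per row, with stride/2 for chroma.
static CIImage *CIImageFromYUVPlanes(uint8_t *data[3], size_t width,
                                     size_t height, size_t stride)
{
    // An empty IOSurface properties dictionary makes the buffer IOSurface-backed.
    NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };

    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                       kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                       (__bridge CFDictionaryRef)attrs,
                                       &pixelBuffer);
    if (ret != kCVReturnSuccess)
        return nil;

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // Copy the Y plane row by row; the buffer's own stride may differ from ours.
    uint8_t *dstY = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    size_t dstYStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    for (size_t row = 0; row < height; row++)
        memcpy(dstY + row * dstYStride, data[0] + row * stride, width);

    // The bi-planar format stores chroma interleaved, so merge Cb and Cr.
    uint8_t *dstC = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    size_t dstCStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    for (size_t row = 0; row < height / 2; row++) {
        const uint8_t *cb = data[1] + row * (stride / 2);
        const uint8_t *cr = data[2] + row * (stride / 2);
        uint8_t *dst = dstC + row * dstCStride;
        for (size_t col = 0; col < width / 2; col++) {
            dst[2 * col]     = cb[col];
            dst[2 * col + 1] = cr[col];
        }
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    // CIImage retains the pixel buffer, so it is safe to release it here.
    CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];
    CVPixelBufferRelease(pixelBuffer);
    return image;
}

Copying into a buffer the system allocated costs an extra memcpy compared to CVPixelBufferCreateWithPlanarBytes, but it is what makes the buffer IOSurface-backed and therefore acceptable to initWithCVPixelBuffer:.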
