Create CVPixelBuffer from YUV with IOSurface backing

Posted 2019-02-01 19:07

So I am getting raw YUV data in 3 separate arrays from a network callback (VoIP app). From what I understand, you cannot create IOSurface-backed pixel buffers with CVPixelBufferCreateWithPlanarBytes, according to the documentation here:

Important: You cannot use CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() with kCVPixelBufferIOSurfacePropertiesKey. Calling CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() will result in CVPixelBuffers that are not IOSurface-backed
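
As an aside, one way to confirm whether a given buffer is IOSurface-backed is CVPixelBufferGetIOSurface(), which returns NULL when there is no IOSurface behind the buffer. A minimal sketch, assuming a pixelBuffer already exists:

#import <CoreVideo/CoreVideo.h>

// CVPixelBufferGetIOSurface() returns NULL for buffers that are not IOSurface-backed.
BOOL isIOSurfaceBacked = (CVPixelBufferGetIOSurface(pixelBuffer) != NULL);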

So you have to create it with CVPixelBufferCreate, but how do you transfer the data from the callback to the CVPixelBufferRef that you create?

- (void)videoCallbackWithYPlane:(uint8_t *)yPlane
                         uPlane:(uint8_t *)uPlane
                         vPlane:(uint8_t *)vPlane
                          width:(size_t)width
                         height:(size_t)height
                        yStride:(size_t)yStride
                        uStride:(size_t)uStride
                        vStride:(size_t)vStride
{
    NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          width,
                                          height,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                          (__bridge CFDictionaryRef)(pixelAttributes),
                                          &pixelBuffer);
    // ...
}

I am unsure what to do from here. Eventually I want to turn this into a CIImage, which I can then render with my GLKView. How do people "put" the data into the buffer once it is created?

2 Answers

放我归山 · 2019-02-01 19:41

I had a similar question, and here is what I have in Swift 2.0, with information I gathered from answers to other questions and from the links below.

func generatePixelBufferFromYUV2(inout yuvFrame: YUVFrame) -> CVPixelBufferRef?
{
    var uIndex: Int
    var vIndex: Int
    var uvDataIndex: Int
    var pixelBuffer: CVPixelBufferRef? = nil
    var err: CVReturn

    if (pixelBuffer == nil)
    {
        err = CVPixelBufferCreate(kCFAllocatorDefault, yuvFrame.width, yuvFrame.height, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, nil, &pixelBuffer)
        if (err != 0) {
            NSLog("Error at CVPixelBufferCreate %d", err)
            return nil
        }
    }

    if (pixelBuffer != nil)
    {
        CVPixelBufferLockBaseAddress(pixelBuffer!, 0)
        let yBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 0)
        if (yBaseAddress != nil)
        {
            let yData = UnsafeMutablePointer<UInt8>(yBaseAddress)
            let yDataPtr = UnsafePointer<UInt8>(yuvFrame.luma.bytes)

            // Y-plane data
            memcpy(yData, yDataPtr, yuvFrame.luma.length)
        }

        let uvBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 1)
        if (uvBaseAddress != nil)
        {
            let uvData = UnsafeMutablePointer<UInt8>(uvBaseAddress)
            let pUPointer = UnsafePointer<UInt8>(yuvFrame.chromaB.bytes)
            let pVPointer = UnsafePointer<UInt8>(yuvFrame.chromaR.bytes)

            // For the uv data, we need to interleave them as uvuvuvuv....
            // chromaB.length is (width/2)*(height/2), so length*2/width gives height/2 rows.
            let iuvRow = (yuvFrame.chromaB.length*2/yuvFrame.width)
            let iHalfWidth = yuvFrame.width/2

            for i in 0..<iuvRow
            {
                for j in 0..<(iHalfWidth)
                {
                    // UV data for the original frame. Just interleave them.
                    // (This assumes the destination plane's bytes-per-row equals
                    // the frame width; see CVPixelBufferGetBytesPerRowOfPlane.)
                    uvDataIndex = i*iHalfWidth+j
                    uIndex = (i*yuvFrame.width) + (j*2)
                    vIndex = uIndex + 1
                    uvData[uIndex] = pUPointer[uvDataIndex]
                    uvData[vIndex] = pVPointer[uvDataIndex]
                }
            }
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, 0)
    }

    return pixelBuffer
}

Note: yuvFrame is a structure with y, u, and v plane buffers plus width and height. Also, I have the CFDictionary? parameter of CVPixelBufferCreate(...) set to nil. If I give it the IOSurface attribute, it fails and complains that it's not IOSurface-backed, or with error -6683 (kCVReturnPixelBufferNotOpenGLCompatible).

Visit these links for more information. This one is about UV interleaving: How to convert from YUV to CIImage for iOS

and a related question: CVOpenGLESTextureCacheCreateTextureFromImage returns error 6683

孤傲高冷的网名 · 2019-02-01 19:52

I figured it out, and it was fairly trivial. Here is the full code below. The only issue is that I get a "BSXPCMessage received error for message: Connection interrupted", and it takes a while for the video to show.

NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      width,
                                      height,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                      (__bridge CFDictionaryRef)(pixelAttributes),
                                      &pixelBuffer);

if (result != kCVReturnSuccess) {
    DDLogWarn(@"Unable to create cvpixelbuffer %d", result);
    return;
}

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPlane, width * height);
uint8_t *uvDestPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
// numberOfElementsForChroma is the size of the interleaved CbCr data,
// i.e. width * height / 2 for 4:2:0.
memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer]; //success!
CVPixelBufferRelease(pixelBuffer);
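
One caveat with the memcpy calls above: CVPixelBufferCreate is free to pad each plane's rows, so a plane's bytes-per-row can be larger than width. A safer sketch, assuming the same tightly packed yPlane and interleaved uvPlane inputs, copies row by row:

CVPixelBufferLockBaseAddress(pixelBuffer, 0);

// Y plane: respect the destination's bytes-per-row, which may exceed width.
uint8_t *yDestPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t yDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
for (size_t row = 0; row < height; row++) {
    memcpy(yDestPlane + row * yDestStride, yPlane + row * width, width);
}

// Interleaved CbCr plane: half the height, width bytes per row (width/2 UV pairs).
uint8_t *uvDestPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t uvDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
for (size_t row = 0; row < height / 2; row++) {
    memcpy(uvDestPlane + row * uvDestStride, uvPlane + row * width, width);
}

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);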

I forgot to add the code to interleave the two U and V planes, but that shouldn't be too bad.
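
For reference, a minimal interleaving sketch, assuming tightly packed uPlane and vPlane inputs with width/2 bytes per row (as in the callback in the question):

// Interleave separate U and V planes into the buffer's CbCr plane (NV12 layout).
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *uvDestPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t uvDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
size_t halfWidth = width / 2;
for (size_t row = 0; row < height / 2; row++) {
    uint8_t *dstRow = uvDestPlane + row * uvDestStride;
    const uint8_t *uRow = uPlane + row * halfWidth;
    const uint8_t *vRow = vPlane + row * halfWidth;
    for (size_t col = 0; col < halfWidth; col++) {
        dstRow[col * 2]     = uRow[col]; // Cb (U)
        dstRow[col * 2 + 1] = vRow[col]; // Cr (V)
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);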
