FFMPEG: While decoding video, is it possible to generate the result into a user-provided buffer?

Posted 2019-02-19 17:25

Question:

In an FFmpeg video-decoding scenario (H264, for example), we typically allocate an AVFrame and decode the compressed data, then read the result from the AVFrame members data and linesize, as in the following code:

// input: data and size hold one H264 packet.
AVPacket avpkt;
av_init_packet(&avpkt);
avpkt.data = const_cast<uint8_t*>(data);
avpkt.size = size;

// decode video: H264 ---> YUV420
AVFrame *picture = avcodec_alloc_frame();
int got_picture = 0;
int len = avcodec_decode_video2(context, picture, &got_picture, &avpkt);
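
The context used above is an already-opened H264 decoder context; a minimal sketch of that setup, using the same legacy API as the snippet, could be:

// Hypothetical one-time setup for `context` (error handling mostly omitted).
AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
AVCodecContext *context = avcodec_alloc_context3(codec);
if (avcodec_open2(context, codec, NULL) < 0) {
    // handle error: decoder could not be opened
}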

We may then use the result for other tasks, for example rendering with DirectX9. That is, we prepare buffers (DirectX9 textures) and copy the decoded result into them:

D3DLOCKED_RECT lrY;
D3DLOCKED_RECT lrU;
D3DLOCKED_RECT lrV;
textureY->LockRect(0, &lrY, NULL, 0);
textureU->LockRect(0, &lrU, NULL, 0);
textureV->LockRect(0, &lrV, NULL, 0);

// copy YUV420: picture->data ---> lr.pBits.
my_copy_image_function(picture->data[0], picture->linesize[0], lrY.pBits, lrY.Pitch, width, height);
my_copy_image_function(picture->data[1], picture->linesize[1], lrU.pBits, lrU.Pitch, width / 2, height / 2);
my_copy_image_function(picture->data[2], picture->linesize[2], lrV.pBits, lrV.Pitch, width / 2, height / 2);
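
Here my_copy_image_function is not shown in the question; a minimal row-by-row plane copy matching the signature assumed above could look like this:

#include <cstring> // memcpy

// Assumed helper: copy one plane row by row, honoring source and destination pitches.
void my_copy_image_function(const uint8_t *src, int src_pitch,
                            void *dst, int dst_pitch,
                            int width, int height)
{
    uint8_t *d = static_cast<uint8_t*>(dst);
    for (int y = 0; y < height; ++y)
        memcpy(d + y * dst_pitch, src + y * src_pitch, width);
}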

This process involves two copies: FFmpeg copies the decoded result into picture->data, and then we copy picture->data into the DirectX9 textures.

My question is: is it possible to reduce this to a single copy? In other words, can we hand our own buffers (pBits, the DirectX9 texture buffers) to ffmpeg, so that the decode function writes its result directly into the DirectX9 texture memory instead of into the AVFrame's buffers?

Answer 1:

I found a way to do this.

AVCodecContext has a public member, get_buffer2, which is a callback function. When avcodec_decode_video2 needs a frame buffer it invokes this callback, and the callback is responsible for attaching buffers and related information to the AVFrame; avcodec_decode_video2 then writes the decoded result into those buffers.

By default, get_buffer2 is set to avcodec_default_get_buffer2, but we can override it with our own function. For example:

void our_buffer_default_free(void *opaque, uint8_t *data)
{
    // empty: the texture memory is owned by DirectX, so there is nothing to free here
}

int our_get_buffer(struct AVCodecContext *c, AVFrame *pic, int flags)
{
    assert(c->codec_type == AVMEDIA_TYPE_VIDEO);
    // lrY/lrU/lrV are the locked texture rects shown above (e.g. globals or class members).
    pic->data[0] = (uint8_t*)lrY.pBits;
    pic->data[1] = (uint8_t*)lrU.pBits;
    pic->data[2] = (uint8_t*)lrV.pBits;
    pic->linesize[0] = lrY.Pitch;
    pic->linesize[1] = lrU.Pitch;
    pic->linesize[2] = lrV.Pitch;
    pic->buf[0] = av_buffer_create(pic->data[0], pic->linesize[0] * pic->height, our_buffer_default_free, NULL, 0);
    pic->buf[1] = av_buffer_create(pic->data[1], pic->linesize[1] * pic->height / 2, our_buffer_default_free, NULL, 0);
    pic->buf[2] = av_buffer_create(pic->data[2], pic->linesize[2] * pic->height / 2, our_buffer_default_free, NULL, 0);
    return 0;
}

Before decoding, we override the callback function:

context->get_buffer2 = our_get_buffer;

Then avcodec_decode_video2 will write the decoded result directly into the buffers we provided.
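
Putting it together, one decode pass with a single copy could look like the following sketch; it assumes lrY/lrU/lrV are the same locked rects that our_get_buffer reads (e.g. globals or class members):

// Lock the textures so lrY/lrU/lrV are valid before the decoder asks for buffers.
textureY->LockRect(0, &lrY, NULL, 0);
textureU->LockRect(0, &lrU, NULL, 0);
textureV->LockRect(0, &lrV, NULL, 0);

context->get_buffer2 = our_get_buffer;

int got_picture = 0;
int len = avcodec_decode_video2(context, picture, &got_picture, &avpkt);
if (len >= 0 && got_picture) {
    // The decoded planes already live in the texture memory; no extra copy is needed.
}

textureY->UnlockRect(0);
textureU->UnlockRect(0);
textureV->UnlockRect(0);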

By the way, for C++ programs that implement this process inside a class, we can record the this pointer first:

context->opaque = this;

And define the callback as a static member function:

// Declared in the class as: static int my_get_buffer(struct AVCodecContext *c, AVFrame *pic, int flags);
int myclass::my_get_buffer(struct AVCodecContext *c, AVFrame *pic, int flags)
{
    auto this_pointer = static_cast<myclass*>(c->opaque);
    return this_pointer->my_get_buffer_real(c, pic, flags);
}

int myclass::my_get_buffer_real(struct AVCodecContext *c, AVFrame *pic, int flags)
{
    // ditto with our_get_buffer above, but using the class's own members.
    // ...
}
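
For reference, a minimal class skeleton for this pattern, with hypothetical member names, might be:

class myclass {
public:
    void setup(AVCodecContext *context) {
        context->opaque = this;                          // let the static callback find this object
        context->get_buffer2 = &myclass::my_get_buffer;  // a static member converts to an ordinary function pointer
    }
private:
    static int my_get_buffer(struct AVCodecContext *c, AVFrame *pic, int flags);
    int my_get_buffer_real(struct AVCodecContext *c, AVFrame *pic, int flags);
    D3DLOCKED_RECT lrY, lrU, lrV;                        // filled by LockRect before decoding
};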