I am referring to this source code. The snippets below are from lines 114-138 of that file, which uses the ffmpeg library. Can anyone explain why the following code is required in the program?
// Determine required buffer size and allocate buffer
numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
                            pCodecCtx->height);
buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
In a sense I understand that the following function associates the destination frame with the buffer, but why is that necessary?
avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
               pCodecCtx->width, pCodecCtx->height);
PS: I tried removing the buffer and recompiling the program. It compiles, but at run time it fails with the following error:
[swscaler @ 0xa06d0a0] bad dst image pointers
Segmentation fault (core dumped)
ffmpeg stores a frame's pixel data in a specific layout within the frame buffer; that layout depends on the picture format (YUV, RGB, ...).
avpicture_fill() takes the raw buffer and sets the various pointers of the AVPicture structure so that they point into it.
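For a packed format like PIX_FMT_RGB24 (a single plane) the effect is roughly the following. This is a simplified illustration, not the library's actual implementation, which also handles planar formats and per-plane strides:

/* avpicture_fill() copies no pixels; it only points the AVPicture's
   fields at the memory you handed it: */
pFrameRGB->data[0]     = buffer;                 /* start of the (only) RGB plane   */
pFrameRGB->linesize[0] = pCodecCtx->width * 3;   /* bytes per row, 3 bytes per pixel */
/* For a planar format such as YUV420P it would also set data[1] and data[2]
   (and their linesizes) to point at the U and V planes inside the same buffer. */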
I think what puzzles you is that there seem to be two allocations for the destination frame.
The first, done with avcodec_alloc_frame(), allocates the space for a generic frame and its metadata. At that point the memory required to hold the actual picture is still unknown. You specify it later, by passing the width, height and colour depth to avpicture_get_size() and allocating that many bytes with av_malloc(). At this point the frame and its content are still two separate objects (an AVFrame and a raw buffer). You put them together with avpicture_fill(), which is not actually a conversion at all.
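Assembled, the sequence is roughly this (a sketch based on your own snippets; pCodecCtx is assumed to be the codec context already opened elsewhere in the program):

/* 1. Allocate the frame structure itself: metadata only, no pixel memory yet. */
AVFrame *pFrameRGB = avcodec_alloc_frame();

/* 2. Compute how many bytes an RGB24 image of this size needs, and allocate them. */
int numBytes = avpicture_get_size(PIX_FMT_RGB24,
                                  pCodecCtx->width, pCodecCtx->height);
uint8_t *buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));

/* 3. Attach the buffer to the frame: data[] and linesize[] now point into 'buffer'. */
avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
               pCodecCtx->width, pCodecCtx->height);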
What the code above does is to "tell" pFrameRGB: "you are an RGB-24 frame, this wide, this tall, and the memory you need is in 'buffer'". Then, and only then, can you do whatever you want with pFrameRGB. Otherwise you try to paint on a frame without a canvas, and the paint splashes down -- you get a core dump.
Once you have the frame (the AVFrame) and the canvas (the buffer), you can use it:
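Here is roughly what the surrounding loop looks like in a tutorial-style program (a sketch; pFormatCtx, videoStream, packet, pFrame, frameFinished and sws_ctx are assumed to be set up elsewhere in the program):

while (av_read_frame(pFormatCtx, &packet) >= 0) {
    if (packet.stream_index == videoStream) {
        /* Decode the packet into pFrame (still in the codec's native pixel format). */
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

        if (frameFinished) {
            /* Convert the native-format frame into the RGB24 frame whose
               pixel memory is 'buffer' (set up by avpicture_fill above). */
            sws_scale(sws_ctx,
                      (uint8_t const * const *)pFrame->data, pFrame->linesize,
                      0, pCodecCtx->height,
                      pFrameRGB->data, pFrameRGB->linesize);
        }
    }
    av_free_packet(&packet);
}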
The code above extracts a video frame and decodes it into pFrame (in the codec's native format). We could save pFrame to disk at this stage; then we would not need buffer, and we would not use pFrameRGB at all. Instead we convert the frame to RGB-24 using sws_scale().
To convert a frame into another format, we copy the source into a different destination. This is both because the destination frame could be bigger than the source frame, and because some conversion algorithms need to operate on larger areas of the untransformed source, so converting the source in place would be awkward. Also, the source frame is managed by the library and might not be safe to write to.
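If you then want to do something with pFrameRGB, for example dump each converted frame to disk, a simple PPM writer in the style of the tutorial works; this is a sketch, and the SaveFrame name and the PPM format are just for illustration. You would call it right after sws_scale(), inside the if (frameFinished) block:

#include <stdio.h>
#include <libavcodec/avcodec.h>

/* Write one RGB24 frame as a binary PPM file named frameN.ppm. */
static void SaveFrame(AVFrame *pFrameRGB, int width, int height, int iFrame) {
    char szFilename[32];
    sprintf(szFilename, "frame%d.ppm", iFrame);

    FILE *pFile = fopen(szFilename, "wb");
    if (pFile == NULL)
        return;

    /* PPM header: magic number, dimensions, maximum colour value. */
    fprintf(pFile, "P6\n%d %d\n255\n", width, height);

    /* Write row by row: linesize[0] may include padding beyond width * 3. */
    for (int y = 0; y < height; y++)
        fwrite(pFrameRGB->data[0] + y * pFrameRGB->linesize[0], 1, width * 3, pFile);

    fclose(pFile);
}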
Update (comments)
What do the data[] pointers of pFrame/pFrameRGB point to? Initially, nothing: they are NULL, and that is why using an uninitialized AVFrame results in a core dump. You initialize them (and linesize[], etc.) either with avpicture_fill(), which fits them into an empty buffer given the image format and size, or with one of the decode functions, which do the same.
Why does pFrame not require a separate memory allocation? Good question. The answer is in the prototype and documentation of the decode function, where the picture parameter is described: you allocate the AVFrame itself with avcodec_alloc_frame(), and the codec allocates and manages the memory for the actual bitmap.
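You can see this directly (a small sketch; buffer and pCodecCtx as in the snippets above):

AVFrame *pFrameRGB = avcodec_alloc_frame();

/* Freshly allocated: no pixel memory is attached yet. */
printf("%p\n", (void *)pFrameRGB->data[0]);   /* typically prints (nil) or 0x0:
                                                 this is why sws_scale() reports
                                                 "bad dst image pointers"        */

avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
               pCodecCtx->width, pCodecCtx->height);

/* Now the frame points into 'buffer' and knows its row stride. */
printf("%p %d\n", (void *)pFrameRGB->data[0], pFrameRGB->linesize[0]);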
ffmpeg and similar libraries will not do an in-place buffer conversion. First, this avoids losing the original data, and working with separate source and destination buffers is faster. Second, if you do a lot of conversions, you can allocate the needed destination buffer once, beforehand, and reuse it.
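In this program that simply means doing the allocation once, before the packet-reading loop, and releasing it once afterwards (a sketch; the loop body is the decode-and-sws_scale() code shown earlier):

/* One-time setup, before the packet-reading loop. */
uint8_t *buffer = (uint8_t *)av_malloc(
        avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height));
avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
               pCodecCtx->width, pCodecCtx->height);

while (av_read_frame(pFormatCtx, &packet) >= 0) {
    /* ... decode and sws_scale() into pFrameRGB, as shown earlier;
       every iteration reuses the same destination buffer ... */
    av_free_packet(&packet);
}

/* One-time teardown, after the loop. */
av_free(buffer);
av_free(pFrameRGB);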