Huge memory leak when filtering video with libavfilter

Posted 2019-09-19 08:29

I have a relatively simple FFmpeg C program: a video frame is fed in, processed through a filter graph, and sent to a frame renderer.

Here are some code snippets:

/* Filter graph here */
char args[512];
enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_RGB32, AV_PIX_FMT_NONE }; /* list must be AV_PIX_FMT_NONE-terminated */
AVFilterGraph   *filter_graph;
avfilter_register_all();
AVFilter *buffersrc  = avfilter_get_by_name("buffer");
AVFilter *buffersink = avfilter_get_by_name("ffbuffersink");
AVBufferSinkParams *buffersink_params;
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs  = avfilter_inout_alloc();
filter_graph = avfilter_graph_alloc();

snprintf(args, sizeof(args),
        "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
        av->codec_ctx->width, av->codec_ctx->height, av->codec_ctx->pix_fmt,
        av->codec_ctx->time_base.num, av->codec_ctx->time_base.den,
        av->codec_ctx->sample_aspect_ratio.num, av->codec_ctx->sample_aspect_ratio.den);

if (avfilter_graph_create_filter(&av->buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph) < 0)
{
    fprintf(stderr, "Cannot create buffer source\n");
    return(0);
}

/* buffer video sink: to terminate the filter chain. */
buffersink_params = av_buffersink_params_alloc();
buffersink_params->pixel_fmts = pix_fmts;

if (avfilter_graph_create_filter(&av->buffersink_ctx, buffersink, "out", NULL, buffersink_params, filter_graph) < 0)
{
    printf("Cannot create buffer sink\n");
    return(HACKTV_ERROR);
}

/* Endpoints for the filter graph. */
outputs->name       = av_strdup("in");
outputs->filter_ctx = av->buffersrc_ctx;
outputs->pad_idx    = 0;
outputs->next       = NULL;

inputs->name        = av_strdup("out");
inputs->filter_ctx  = av->buffersink_ctx;
inputs->pad_idx     = 0;
inputs->next        = NULL;

const char *filter_descr = "vflip";

if (avfilter_graph_parse_ptr(filter_graph, filter_descr, &inputs, &outputs, NULL) < 0)
{
    printf("Cannot parse filter graph\n");
    return(0);
}

if (avfilter_graph_config(filter_graph, NULL) < 0)
{
    printf("Cannot configure filter graph\n");
    return(0);
}

av_free(buffersink_params);
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
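
For completeness, the graph itself is released only at shutdown; a minimal teardown sketch, assuming filter_graph stays reachable until then:

    /* frees the graph and every filter context created in it,
       including buffersrc_ctx and buffersink_ctx */
    avfilter_graph_free(&filter_graph);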

The above setup code is then used per frame elsewhere:

av->frame_in->pts = av_frame_get_best_effort_timestamp(av->frame_in);

/* push the decoded frame into the filtergraph */
if (av_buffersrc_add_frame(av->buffersrc_ctx, av->frame_in) < 0)
{
    printf("Error while feeding the filtergraph\n");
    break;
}

/* pull filtered pictures from the filtergraph */
if (av_buffersink_get_frame(av->buffersink_ctx, av->frame_out) < 0)
{
    printf("Error while sourcing the filtergraph\n");
    break;
}

/* do stuff with frame */
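
Note that av_buffersink_get_frame() can also return AVERROR(EAGAIN) when the graph simply needs more input, so a stricter pull step would distinguish that from a real error; a sketch, assuming the same av struct as above:

    /* drain every frame the graph has ready; EAGAIN and EOF are not errors */
    int ret;
    while ((ret = av_buffersink_get_frame(av->buffersink_ctx, av->frame_out)) >= 0)
    {
        /* do stuff with frame */
        av_frame_unref(av->frame_out); /* release the reference before reusing frame_out */
    }
    if (ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
    {
        printf("Error while sourcing the filtergraph\n");
        break;
    }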

Now, the code works absolutely fine and the video comes out the way I expect it to (vertically flipped for testing purposes).

The biggest issue I have is that there is a massive memory leak. A high-res video will consume 2 GB in a matter of seconds and crash the program. I traced the leak to this piece of code:

/* push the decoded frame into the filtergraph */
if (av_buffersrc_add_frame(av->buffersrc_ctx, av->frame_in) < 0)

If I bypass the filter by doing av->frame_out = av->frame_in; without pushing the frame into the graph (and obviously not pulling from it), there is no leak and memory usage is stable.

Now, I am very new to C, so be gentle, but it seems like I should be clearing out buffersrc_ctx somehow, and I have no idea how. I've looked through the official documentation but couldn't find anything.

Can someone advise?

1 Answer
走好不送
2019-09-19 09:18

Five minutes after posting: it turns out all I had to do was unreference both frames after each one is processed.

    av_frame_unref(av->frame_in);
    av_frame_unref(av->frame_out);
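
In context, the fixed per-frame loop then looks roughly like this (a sketch assembled from the snippets in the question; the buffersink documentation says the frame filled by av_buffersink_get_frame() must be freed with av_frame_unref(), so dropping both references at the end of each iteration lets the underlying buffers be recycled):

    /* push the decoded frame into the filtergraph */
    if (av_buffersrc_add_frame(av->buffersrc_ctx, av->frame_in) < 0)
    {
        printf("Error while feeding the filtergraph\n");
        break;
    }

    /* pull the filtered picture; frame_out is filled with a new reference */
    if (av_buffersink_get_frame(av->buffersink_ctx, av->frame_out) < 0)
    {
        printf("Error while sourcing the filtergraph\n");
        break;
    }

    /* do stuff with frame */

    /* drop both references so the buffers can be freed;
       without this, every filtered frame stayed allocated */
    av_frame_unref(av->frame_in);
    av_frame_unref(av->frame_out);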