How to write frames to a video file?


Question:

I am currently writing an application that reads frames from a camera, modifies them, and saves them into a video file. I'm planning to do it with ffmpeg, but there is hardly any documentation for it and I can't find a way. Does anyone know how to do this?

I need it to be done on unix, in C or C++. Can anyone provide some instructions?

Thanks.

EDIT:

Sorry, I didn't write clearly. I want developer APIs for writing frames to a video file: I open the camera stream, grab every single frame, and then save the frames to a video file using ffmpeg's public APIs, so the command-line tool doesn't actually help me. I've seen output_example.c under the ffmpeg source folder, and it's great that I can copy parts of that code directly without changes, but I'm still looking for an easier way. A sketch of what that API usage looks like follows.
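
For reference, the core of what output_example.c does boils down to a fairly short encode loop. Below is a minimal sketch against libavcodec's public API; note it uses the newer send/receive interface (which postdates output_example.c), and the codec choice, frame size, frame rate and output name are all assumptions to adapt. It also writes a raw elementary stream rather than a proper container; muxing into mp4/avi additionally requires libavformat.

#include <libavcodec/avcodec.h>
#include <stdio.h>

int main(void) {
    /* codec choice is an example; use avcodec_find_encoder_by_name() for others */
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    if (!codec || !ctx) return 1;
    ctx->width     = 640;
    ctx->height    = 480;
    ctx->time_base = (AVRational){1, 25};    /* 25 fps */
    ctx->framerate = (AVRational){25, 1};
    ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
    ctx->bit_rate  = 4000000;
    if (avcodec_open2(ctx, codec, NULL) < 0) return 1;

    FILE *out = fopen("out.m4v", "wb");      /* raw stream, no container */
    AVFrame *frame = av_frame_alloc();
    frame->format = ctx->pix_fmt;
    frame->width  = ctx->width;
    frame->height = ctx->height;
    av_frame_get_buffer(frame, 0);
    AVPacket *pkt = av_packet_alloc();

    for (int i = 0; i < 250; i++) {          /* 10 seconds of video */
        av_frame_make_writable(frame);
        /* fill frame->data[0..2] with one camera frame, converted to YUV420P */
        frame->pts = i;
        avcodec_send_frame(ctx, frame);
        while (avcodec_receive_packet(ctx, pkt) == 0) {
            fwrite(pkt->data, 1, pkt->size, out);
            av_packet_unref(pkt);
        }
    }
    avcodec_send_frame(ctx, NULL);           /* flush delayed packets */
    while (avcodec_receive_packet(ctx, pkt) == 0) {
        fwrite(pkt->data, 1, pkt->size, out);
        av_packet_unref(pkt);
    }

    fclose(out);
    av_packet_free(&pkt);
    av_frame_free(&frame);
    avcodec_free_context(&ctx);
    return 0;
}

Link against libavcodec and libavutil (e.g. via pkg-config).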

Also, I'm thinking of porting my app to the iPhone, and as far as I know only ffmpeg has been ported to the iPhone. GStreamer is based on glib, and it's all GNU stuff; I'm not sure I could get it to work on the iPhone. So ffmpeg is still the best choice for now.

Any comments are appreciated.

Answer 1:

This might help get you started. Documentation for ffmpeg is available, but newer features tend to be covered only in its man pages.

The frames need to be numbered sequentially.

ffmpeg -f image2 -framerate 25 -i frame_%d.jpg -c:v libx264 -crf 22 video.mp4
  • -f forces the input format; image2 is the image-sequence demuxer
  • -framerate defines the frame rate
  • -i defines the input file(s); %d matches sequentially numbered files, and adding zeros specifies padding, e.g. %05d for zero-padded five-digit numbers
  • -c:v selects the video codec (-vcodec is the older spelling of the same option)
  • -crf sets the Constant Rate Factor, x264's quality-based rate-control mode; lower values mean higher quality
  • video.mp4 is the output file

For more info, see the Slideshow guide.
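
If you take this route from a C program, the missing piece is dumping each camera frame as a numbered image first. A minimal sketch, assuming packed 24-bit RGB frames; PPM is used here because it needs no image library and ffmpeg's image2 demuxer reads it directly (substitute frame_%05d.ppm for frame_%d.jpg in the command above):

#include <stdio.h>

/* write one packed 24-bit RGB frame as a binary PPM file */
static void save_frame_ppm(const unsigned char *rgb, int w, int h, int index) {
    char name[64];
    snprintf(name, sizeof name, "frame_%05d.ppm", index);
    FILE *f = fopen(name, "wb");
    if (!f) return;
    fprintf(f, "P6\n%d %d\n255\n", w, h);  /* PPM header: magic, size, max value */
    fwrite(rgb, 1, (size_t)w * h * 3, f);
    fclose(f);
}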



Answer 2:

If solutions other than ffmpeg are feasible for you, you might want to look at GStreamer. I think it might be just the right thing for your case, and there is a fair amount of documentation out there.
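
For a sense of how little glue GStreamer needs, a single pipeline such as the one below captures from a V4L2 camera and writes H.264 into an MP4 file. This is only a rough sketch using GStreamer 1.x element names; the source element, frame count and output name are assumptions, and the same pipeline can be built from C with gst_parse_launch():

gst-launch-1.0 v4l2src num-buffers=250 ! videoconvert ! \
    x264enc ! h264parse ! mp4mux ! filesink location=video.mp4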



Answer 3:

You can do what you require without using a library, because on unix you can pipe raw RGBA data from one program into another.

In your program:

unsigned char myimage[640*480*4];  // one 640x480 RGBA frame
// read data into myimage
fwrite(myimage, 1, 640*480*4, stdout);  // note: fwrite, not fputs

And in a script that runs your program:

./myprogram | \
mencoder /dev/stdin -demuxer rawvideo -rawvideo w=640:h=480:fps=30:format=rgba \
-ovc lavc -lavcopts vcodec=mpeg4:vbitrate=9000000 \
-oac copy -o output.avi

I believe you can also use ffmpeg this way, or x264. You can also start the encoder from within your program and write to a pipe, making the whole process almost as simple as using a library; a sketch of that follows.
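
A rough sketch of that pipe-from-your-program approach with ffmpeg as the encoder (the rawvideo input flags are standard ffmpeg options; the frame size, frame rate, frame count and output name are assumptions):

#include <stdio.h>

int main(void) {
    /* launch ffmpeg as a child process; it reads raw RGBA frames from the pipe */
    FILE *enc = popen(
        "ffmpeg -f rawvideo -pixel_format rgba -video_size 640x480 "
        "-framerate 30 -i - -c:v libx264 -pix_fmt yuv420p output.mp4", "w");
    if (!enc) return 1;

    static unsigned char frame[640 * 480 * 4];
    for (int i = 0; i < 300; i++) {
        /* fill frame[] with one camera image here */
        fwrite(frame, 1, sizeof frame, enc);
    }
    pclose(enc);  /* closing the pipe lets ffmpeg flush and finish the file */
    return 0;
}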

While this is not quite what you asked for, and not suitable for iPhone development, it has the advantage that the encoder runs as a separate process, so unix can automatically schedule it on a second processor core.