I am currently developing an application that captures video from a webcam on Linux, using the Qt Designer tool together with the V4L2 and FFmpeg libraries under C++. Capturing images with libv4l2 works fine; as soon as a picture is ready I send it to an encoder based on the FFmpeg libraries. The encoder first creates a video file and then receives the images to encode into that file. My problem is the following: the encoding itself completes normally, but when I play back the recorded video file, it appears accelerated compared to normal speed. So the problem is clearly in the video encoding. My question is: is there a method or function that controls the rate at which FFmpeg encodes the pictures? Thank you for your help.
When creating a custom-encoded video with FFmpeg, you actually need to set the PTS on each AVPacket that gets written to the output file. Setting the time_base of your AVCodecContext only tells the container what to expect. The PTS (Presentation Time Stamp) tells the decoder (when you view your video) when to actually display that particular frame.
For example:
I have an AVFrame that I got from the V4L2 part of FFmpeg. To start, it's safer to make a copy of this image using av_picture_copy (so the encoder does not look at all the extra info in the AVFrame struct).
Now set the PTS based on the number of frames encoded so far, then encode the frame. Next, create an AVPacket and set its timestamps again, this time rescaled from the codec's time_base to the stream's time_base. Finally, you can write the packet out. A rough sketch of the whole sequence is shown below.
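Putting those steps together, here is a rough sketch of the encode path described above. It uses the older FFmpeg encode API this answer was written against (av_picture_copy / avcodec_encode_video2; current FFmpeg replaces these with av_frame_copy and avcodec_send_frame / avcodec_receive_packet). The context, stream, and frame variables are assumed to be set up elsewhere, and error handling is omitted:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    static int64_t frame_count = 0;   /* frames handed to the encoder so far */

    /* fmt_ctx, video_st, codec_ctx and both frames are assumed to be
     * allocated and configured elsewhere. */
    void encode_and_write(AVFormatContext *fmt_ctx, AVStream *video_st,
                          AVCodecContext *codec_ctx,
                          AVFrame *dst_frame, const AVFrame *src_frame)
    {
        /* 1. Copy the picture data out of the frame that came from V4L2,
         *    so the encoder only sees the plain image planes. */
        av_picture_copy((AVPicture *)dst_frame, (const AVPicture *)src_frame,
                        codec_ctx->pix_fmt, codec_ctx->width, codec_ctx->height);

        /* 2. Set the PTS from the number of frames encoded so far,
         *    expressed in the codec's time_base. */
        dst_frame->pts = frame_count++;

        /* 3. Encode the frame. */
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = NULL;          /* let the encoder allocate the buffer */
        pkt.size = 0;
        int got_packet = 0;
        avcodec_encode_video2(codec_ctx, &pkt, dst_frame, &got_packet);

        if (got_packet) {
            /* 4. Rescale the packet timestamps from the codec time_base
             *    to the stream time_base used by the container. */
            if (pkt.pts != AV_NOPTS_VALUE)
                pkt.pts = av_rescale_q(pkt.pts, codec_ctx->time_base,
                                       video_st->time_base);
            if (pkt.dts != AV_NOPTS_VALUE)
                pkt.dts = av_rescale_q(pkt.dts, codec_ctx->time_base,
                                       video_st->time_base);
            pkt.stream_index = video_st->index;

            /* 5. Write the packet to the output file. */
            av_interleaved_write_frame(fmt_ctx, &pkt);
        }
    }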
I know this works for H264 video encoding, but I'm not 100% sure it works for other types of video, since I was only concerned with H264 when I wrote this.
When you create the FFmpeg output file you have to specify the frame rate. Exactly where depends on which library you are using, but look for something like the sketch below.
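With the raw FFmpeg libraries, for example, the rate is carried by the encoder context's time_base; a minimal sketch (the helper name and the 15 fps figure are only illustrative):

    #include <libavcodec/avcodec.h>

    /* Tell the encoder the real capture rate (e.g. 15 fps from the webcam)
     * instead of leaving it at a default such as 25 or 30 fps. */
    static void set_capture_rate(AVCodecContext *codec_ctx, int fps)
    {
        codec_ctx->time_base.num = 1;
        codec_ctx->time_base.den = fps;   /* e.g. 1/15 of a second per frame */
    }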
Otherwise, if you are generating 10-15 fps from a webcam but the default in the file is 30 fps, it will play back too fast.
See http://code.google.com/p/qtffmpegwrapper/ for a Qt ffmpeg wrapper
I think you need to add time stamps to your pictures.
FFmpeg will do the encoding/decoding as fast as possible. You need to handle synchronization yourself. Usually in video decoding and playback you have timestamps attached to your frames, or at least you can create some from the audio clock and your frame rate.
But this depends heavily on how you want to sync and how you implemented it; a rough sketch of one option follows below.
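For example, one way to create such timestamps for webcam frames is to derive the PTS from a capture clock rather than from a fixed frame counter. A rough sketch (the helper name and the use of av_gettime() are my own illustration, not part of the original answer):

    #include <libavutil/mathematics.h>
    #include <libavutil/time.h>

    static int64_t start_time_us = 0;   /* wall-clock time of the first frame */

    /* Return a PTS, in units of the given time_base, for a frame that was
     * captured "now". */
    int64_t pts_from_capture_clock(AVRational time_base)
    {
        int64_t now_us = av_gettime();   /* current time in microseconds */
        if (start_time_us == 0)
            start_time_us = now_us;
        /* Convert elapsed microseconds into time_base units. */
        return av_rescale_q(now_us - start_time_us,
                            (AVRational){1, 1000000}, time_base);
    }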
Maybe the FFmpeg tutorial gives you some additional hints.