I am trying to use FFmpeg to take a stream of image files and turn them into a video. Now, I have successfully done this, but only after I have already captured all the images that I want. What I would like to do is turn the images into a video as they are saved to disk (a real-time video recorder). Currently, when I call FFmpeg while frames are still being grabbed, it only encodes the images that are present at the moment it is called. If FFmpeg is called every time an image is grabbed, it floods the CPU with a ton of processes. Ideally, FFmpeg would continue encoding until no more images are being captured (i.e., keep checking for new image files after it was first called). Is there an argument for FFmpeg that I'm missing, or is this not possible? Or is the only way to do this by messing around with the libraries?
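For context, a minimal sketch of the post-capture encode that already works for me (the frame naming img%04d.png, the 30 fps rate, and the output codec are just assumptions for illustration):

```python
import subprocess

# Encode all frames currently on disk into a single video.
# Assumes frames are saved as img0001.png, img0002.png, ...
subprocess.run([
    "ffmpeg",
    "-framerate", "30",      # input frame rate (assumed)
    "-i", "img%04d.png",     # numbered image sequence (assumed naming)
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",   # for broad player compatibility
    "out.mp4",
], check=True)
```

The problem is that this only sees the frames that exist when it starts, so running it mid-capture truncates the video.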
One solution, though not one I like: encode a base video as an MPEG-2 VOB. Every X frames, encode a new video containing just the last X frames. Since VOB files have no file header, you can then simply append the binary of the new file to the existing VOB, so FFmpeg only ever needs to run on a few frames at a time. Some other video formats might work too. A rough sketch of the idea follows.
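Here is one way that chunk-and-append loop might look in Python (the frame naming, the chunk size X, and the more_frames_expected readiness check are all assumptions, not part of any real API):

```python
import subprocess
from pathlib import Path

FPS = 25     # assumed capture rate
CHUNK = 100  # X: frames per incremental encode (assumption)

def encode_chunk(start_frame: int, out_path: str) -> None:
    """Encode CHUNK frames starting at start_frame into an MPEG-2 VOB."""
    subprocess.run([
        "ffmpeg",
        "-framerate", str(FPS),
        "-start_number", str(start_frame),  # first frame of this chunk
        "-i", "img%04d.png",                # assumed frame naming
        "-frames:v", str(CHUNK),            # stop after CHUNK frames
        "-c:v", "mpeg2video",
        "-f", "vob",                        # MPEG-2 program stream (VOB)
        out_path,
    ], check=True)

# Encode the base video from the first CHUNK frames.
encode_chunk(1, "recording.vob")

# As more frames arrive, encode each new batch and append its raw bytes;
# the point of using VOB is that the stream can be grown this way.
next_frame = 1 + CHUNK
while more_frames_expected(next_frame):  # hypothetical capture-status check
    encode_chunk(next_frame, "chunk.vob")
    with open("recording.vob", "ab") as base, open("chunk.vob", "rb") as chunk:
        base.write(chunk.read())
    Path("chunk.vob").unlink()
    next_frame += CHUNK
```

Each FFmpeg invocation only touches CHUNK frames, so the per-call cost stays bounded no matter how long the recording runs.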