What format/syntax is needed for ffmpeg to output the same input to several different "output" files? For instance, different formats or different bitrates? Does it support parallelism on the output?
Question:
Answer 1:
The ffmpeg documentation has been updated with a lot more information about this, and the available options depend on the version of ffmpeg you use: http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs
Answer 2:
From the FFmpeg documentation: FFmpeg can write to an arbitrary number of output "files".
Just make sure each output file (or stream) is preceded by the proper output options.
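A minimal sketch of that syntax (the file names, codecs, and bitrates below are placeholders, not from this answer) with one input and two differently encoded outputs:
# options placed before each output apply only to that output
ffmpeg -i input.mkv \
    -c:v libx264 -b:v 1500k -c:a aac -b:a 128k out_1500k.mp4 \
    -c:v libvpx-vp9 -b:v 800k -c:a libopus -b:a 96k out_800k.webm
Each output gets its own codec and bitrate settings, and both are produced in a single pass over the input.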
Answer 3:
Is there any reason you can't just run more than one instance of ffmpeg? I've had great results with that ...
Generally what I've done is run ffmpeg once on the source file to get it to a sort of base standard (say, a higher-quality H.264 MP4 file). This makes sure your other jobs run more quickly, since any issues in your source file get cleaned up in this first pass.
Then use that new source/input file to run X number of ffmpeg jobs, for example in bash ...
Where you see "..." is where you'd put all of your encoding options.
# create 'base' file
ffmpeg -loglevel error -er 4 -i "$INPUT_FILE" ... INPUT.mp4 >> "$LOG_FILE" 2>&1
# the command above will run and then move to start 3 background jobs
# text output will be sent to a log file
echo "base file done!"
# note & at the end to send job to the background
ffmpeg ... -i INPUT.mp4 ... FILENAME1.mp4 ... >/dev/null 2>&1 &
ffmpeg ... -i INPUT.mp4 ... FILENAME2.mp4 ... >/dev/null 2>&1 &
ffmpeg ... -i INPUT.mp4 ... FILENAME3.mp4 ... >/dev/null 2>&1 &
# wait until you have no more background jobs running
wait
echo "done!"
Each of the background jobs will run in parallel and will be (essentially) balanced across your CPUs, so you can maximize each core.
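As an illustration only (the codecs, sizes, and bitrates here are assumptions, not part of the original answer), the elided "..." options in the background jobs above might be filled in like this:
# hypothetical H.264/AAC renditions at three sizes/bitrates
ffmpeg -i INPUT.mp4 -c:v libx264 -preset fast -b:v 2500k -s 1280x720 -c:a aac -b:a 128k FILENAME1.mp4 >/dev/null 2>&1 &
ffmpeg -i INPUT.mp4 -c:v libx264 -preset fast -b:v 1200k -s 854x480 -c:a aac -b:a 128k FILENAME2.mp4 >/dev/null 2>&1 &
ffmpeg -i INPUT.mp4 -c:v libx264 -preset fast -b:v 600k -s 640x360 -c:a aac -b:a 96k FILENAME3.mp4 >/dev/null 2>&1 &
wait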
Answer 4:
Based on http://sonnati.wordpress.com/2011/08/30/ffmpeg-–-the-swiss-army-knife-of-internet-streaming-–-part-iv/ and http://ffmpeg-users.933282.n4.nabble.com/Multiple-output-files-td2076623.html:
ffmpeg -re -i rtmp://server/live/high_FMLE_stream \
    -acodec copy -vcodec libx264 -s 640x360 -b 500k -vpre medium -vpre baseline rtmp://server/live/baseline_500k \
    -acodec copy -vcodec libx264 -s 480x272 -b 300k -vpre medium -vpre baseline rtmp://server/live/baseline_300k \
    -acodec copy -vcodec libx264 -s 320x200 -b 150k -vpre medium -vpre baseline rtmp://server/live/baseline_150k \
    -acodec libfaac -vn -ab 48k rtmp://server/live/audio_only_AAC_48k
Or you could pipe the output to tee and send it to "X" other processes to actually do the encoding, like:
ffmpeg -i input - | tee ...
which might save CPU, since it can enable more output parallelism that is apparently otherwise unavailable.
See http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs for more details.
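A rough sketch of that tee approach, assuming bash process substitution and an MPEG-TS intermediate stream (the file names and bitrates are placeholders, not from the answer):
# read/copy the input once, fan the stream out to two independent encoders
ffmpeg -i input.mp4 -c copy -f mpegts - | tee \
    >(ffmpeg -f mpegts -i - -c:v libx264 -b:v 1500k -c:a aac out_high.mp4) \
    >(ffmpeg -f mpegts -i - -c:v libx264 -b:v 500k -c:a aac out_low.mp4) \
    > /dev/null
Each encoder in the process substitutions runs as its own process, so they can land on separate cores, which is the extra output parallelism the answer mentions.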