Feed raw YUV frames to FFmpeg with timestamps

Posted 2019-09-04 05:53

I'm trying to pipe raw audio and video data to FFmpeg and push a real-time stream over RTSP on Android. The command line looks like this:

"ffmpeg -re -f image2pipe -vcodec mjpeg -i "+vpipepath
+ " -f s16le -acodec pcm_s16le -ar 8000 -ac 1 -i - "
+ " -vcodec libx264 "
+ " -preset slow -pix_fmt yuv420p -crf 30 -s 160x120 -r 6 -tune film "
+ " -g 6 -keyint_min 6 -bf 16 -b_strategy 1 "
+ " -acodec libopus -ac 1 -ar 48000 -b:a 80k -vbr on -frame_duration 20 "
+ " -compression_level 10 -application voip -packet_loss 20 "
+ " -f rtsp rtsp://remote-rtsp-server/live.sdp";

I'm using libx264 as the video codec and libopus as the audio codec. The YUV frames are fed through a named pipe created with mkfifo, and the PCM frames are fed through stdin.
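For reference, a minimal sketch of that piping setup is below. It assumes the FIFO path (vpipepath), an mkfifo binary available on the device, and an ffmpeg executable on the PATH; the capture callbacks that actually produce the frames are not shown, and the command is abridged from the one above.

import java.io.FileOutputStream;
import java.io.OutputStream;

public class StreamFeeder {
    public static void main(String[] args) throws Exception {
        String vpipepath = "/data/local/tmp/vpipe";   // hypothetical FIFO path

        // Create the named pipe that ffmpeg reads MJPEG frames from.
        Runtime.getRuntime().exec(new String[] { "mkfifo", vpipepath }).waitFor();

        // Command from the question (abridged; the full option set is shown above).
        String cmd = "ffmpeg -re -f image2pipe -vcodec mjpeg -i " + vpipepath
                + " -f s16le -acodec pcm_s16le -ar 8000 -ac 1 -i -"
                + " -vcodec libx264 -preset slow -pix_fmt yuv420p -crf 30 -s 160x120 -r 6"
                + " -acodec libopus -ac 1 -ar 48000 -b:a 80k"
                + " -f rtsp rtsp://remote-rtsp-server/live.sdp";
        Process ffmpeg = Runtime.getRuntime().exec(cmd.split("\\s+"));

        // Raw PCM goes to ffmpeg's stdin (the second "-i -" input) ...
        OutputStream pcmOut = ffmpeg.getOutputStream();
        // ... and video frames go to the FIFO. Opening the FIFO blocks until
        // ffmpeg opens its read end.
        OutputStream videoOut = new FileOutputStream(vpipepath);

        // In the real app the audio and camera callbacks call
        // pcmOut.write(pcmChunk) and videoOut.write(jpegFrame) respectively.
    }
}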

It works, and I can fetch and play the stream with ffplay. But there is a severe audio/video sync issue: audio lags video by 5~10 seconds. I guess the problem is that neither the YUV frames nor the PCM frames carry any timestamps; FFmpeg adds timestamps when it is fed the data, but the audio and video capture threads can't run at exactly the same rate. Is there a way to add a timestamp to each raw data frame (something like PTS/DTS)?

The approach I used is from this thread: Android Camera Capture using FFmpeg

1 Answer
贪生不怕死
#2 · 2019-09-04 06:13

FFmpeg adds timestamps the moment it retrieves the samples from the pipe, so all you need to do is feed them in sync. The likely problem in your case is that you already have an audio buffer, while you are offering video frames in real time; that makes the audio late. You must buffer video frames for the same amount of time as you are buffering audio. If you have no control over your audio buffer size, keep it as small as possible, monitor its size, and adjust your video buffering accordingly.
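A rough sketch of that idea: hold each video frame back for roughly the audio buffer duration before writing it to the pipe, so both streams reach FFmpeg in step. AUDIO_BUFFER_MS, the class names, and the queue wiring are assumptions for illustration, not part of the original code.

import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

class DelayedFrame implements Delayed {
    final byte[] data;
    final long releaseAtMs;

    DelayedFrame(byte[] data, long delayMs) {
        this.data = data;
        this.releaseAtMs = System.currentTimeMillis() + delayMs;
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(releaseAtMs - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
    }
}

class VideoDelayBuffer {
    private static final long AUDIO_BUFFER_MS = 500;   // assumed audio buffer latency
    private final DelayQueue<DelayedFrame> queue = new DelayQueue<>();

    // Called from the camera callback: enqueue the frame with a fixed delay.
    void onVideoFrame(byte[] jpegFrame) {
        queue.put(new DelayedFrame(jpegFrame, AUDIO_BUFFER_MS));
    }

    // Writer thread: frames become available only after AUDIO_BUFFER_MS has elapsed.
    void drainTo(java.io.OutputStream videoOut) throws Exception {
        while (!Thread.currentThread().isInterrupted()) {
            videoOut.write(queue.take().data);
        }
    }
}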
