What I want is straightforward: wrap an H.264 video stream into an FLV container. However, ffmpeg decodes the input stream and re-encodes the video when packing it into FLV. The details are described below:
The input stream is captured from a camera with a hardware H.264 encoder, and the FLV stream is to be sent to a video server. At first I used the following command:
$ ffmpeg -framerate 15 -s 320x240 -i /dev/video1 -f flv "rtmp://some.website.com/receive/path"
However, the resulting stream looked suspicious: the receiving side did not get any H.264 data at all. So I ran a test, writing the output to local files instead.
1: Read the raw stream, encode it with h264_omx, and write it to an FLV file:
$ ffmpeg -framerate 15 -s 320x240 -i /dev/video0 -codec h264_omx -f flv raw_input_h264_omx.flv
......
Input #0, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 194017.870905, bitrate: 18432 kb/s
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 320x240, 18432 kb/s, 15 fps, 15 tbr, 1000k tbn, 1000k tbc
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (h264_omx))
......
2: Read the H.264 stream and write it to an FLV file:
$ ffmpeg -framerate 15 -s 320x240 -i /dev/video1 -f flv h264_input.flv
......
Input #0, video4linux2,v4l2, from '/dev/video1':
Duration: N/A, start: 194610.307096, bitrate: N/A
Stream #0:0: Video: h264 (Main), yuv420p(progressive), 320x240, 15 fps, 15 tbr, 1000k tbn, 2000k tbc
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> flv1 (flv))
......
Then I probed the two resulting files:
$ ffmpeg -i raw_input_h264_omx.flv
......
Stream #0:0: Video: h264 (High), yuv420p(progressive), 320x240, 200 kb/s, 15 fps, 15 tbr, 1k tbn
$ ffmpeg -i h264_input.flv
......
Stream #0:0: Video: flv1, yuv420p, 320x240, 200 kb/s, 15 fps, 15 tbr, 1k tbn
It is clear that when I feed ffmpeg an H.264 stream, it first decodes it and then re-encodes the video (to flv1) before muxing it into FLV. How can I avoid this, and have the H.264 stream packed into FLV directly?
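My guess is that some form of stream copy is what I need, roughly like the command below, but I have not verified that this is correct for a v4l2 H.264 source (the -codec copy option and the output filename here are my guesses, not something I have confirmed to work):
$ ffmpeg -framerate 15 -s 320x240 -i /dev/video1 -codec copy -f flv h264_input_copy.flv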
Supplement: I will eventually be pushing multiple video streams at once, so please don't suggest simply letting ffmpeg silently decode and then re-encode the stream.