I've got a grayscale video stream coming off a Firewire astronomy camera, I'd like to use FFmpeg to compress the video stream but it will not accept single byte pixel formats for the MPEG1VIDEO codecs. How can I use the FFmpeg API to convert grayscale video frames into a frame format accepted by FFmpeg?
The relationship between grayscale and YUV is very simple: the Y component of YUV is exactly the grayscale value.

So the simplest conversion is: copy each grayscale sample W[i] straight into Y[i], and fill the U and V planes with the constant value 128. The value 128 represents '0' chroma in the unsigned 0-255 representation (a signed, zero-centered value shifted up by 128).

All you need to do, then, is allocate the extra U and V planes, fill them with that fixed value, and hand the resulting frames to FFmpeg.
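A minimal sketch of that idea in C (the function name is mine, and it assumes even, tightly packed dimensions as is usual for YUV 4:2:0):

```c
#include <stdint.h>
#include <string.h>

/* Build YUV420P planes from an 8-bit grayscale buffer by hand.
 * Y is the grayscale data itself; U and V are constant 128 (zero chroma).
 * Assumes width and height are even and the input is tightly packed. */
void gray_to_yuv420(const uint8_t *gray, int width, int height,
                    uint8_t *y, uint8_t *u, uint8_t *v)
{
    size_t chroma = (size_t)(width / 2) * (height / 2);
    memcpy(y, gray, (size_t)width * height); /* Y plane: copy as-is */
    memset(u, 128, chroma);                  /* U plane: neutral chroma */
    memset(v, 128, chroma);                  /* V plane: neutral chroma */
}
```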
I am not sure, but FFmpeg may do this conversion internally if you declare the input pixel format appropriately; RGB input, which is equally non-native to MPEG, is handled the same way. So look for the part of FFmpeg's API that lets you do this.
BONUS
Remember that in the good old days there were black-and-white (grayscale) TV sets. New color TV sets had to stay compatible with the old ones, so color information was added on top in the form of U and V (the scheme is sometimes also called YCbCr, where the chroma components Cb and Cr are linear variations of U and V in this context).
Edit
MPEG-1 only accepts YUV, so convert your frames to YUV. Use libswscale: create an SwsContext by calling sws_getContext(), then call sws_scale() on each frame.
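A sketch of that conversion, assuming an FFmpeg build that uses the AV_PIX_FMT_* names and av_frame_get_buffer() for allocation (the helper name is mine; link against libswscale and libavutil):

```c
#include <libswscale/swscale.h>
#include <libavutil/frame.h>

/* Convert a GRAY8 source frame to a newly allocated YUV420P frame.
 * Returns NULL on allocation or conversion-setup failure. */
AVFrame *gray_frame_to_yuv420p(const AVFrame *src) /* src->format == AV_PIX_FMT_GRAY8 */
{
    AVFrame *dst = av_frame_alloc();
    if (!dst)
        return NULL;
    dst->format = AV_PIX_FMT_YUV420P;
    dst->width  = src->width;
    dst->height = src->height;
    if (av_frame_get_buffer(dst, 0) < 0) {
        av_frame_free(&dst);
        return NULL;
    }

    struct SwsContext *sws = sws_getContext(src->width, src->height, AV_PIX_FMT_GRAY8,
                                            dst->width, dst->height, AV_PIX_FMT_YUV420P,
                                            SWS_POINT, NULL, NULL, NULL);
    if (!sws) {
        av_frame_free(&dst);
        return NULL;
    }

    /* Same dimensions in and out, so this is a pure pixel-format conversion. */
    sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
              0, src->height, dst->data, dst->linesize);
    sws_freeContext(sws);
    return dst;
}
```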
Try the rawvideo input format. You will need to specify the pix_fmt parameter, which describes the format of your frames - yours are 1-byte-per-pixel frames, but are they grayscale (you didn't mention)? A pix_fmt of yuv420p, for example, would not be what you need; pick the one that matches your frames (for 8-bit grayscale that is gray, i.e. PIX_FMT_GRAY8). The full list of pix_fmt values is in FFmpeg's pixfmt.h header - check whether your frame type is defined there; PIX_FMT_RGB8 (which is also 8 bits per pixel) is another example.
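For example, assuming 640x480 8-bit grayscale frames stored back to back in a raw file (the file names, frame size, and frame rate are placeholders):

```shell
# Read raw 8-bit grayscale frames and encode them as MPEG-1 video.
ffmpeg -f rawvideo -pix_fmt gray -s 640x480 -r 25 -i frames.raw \
       -c:v mpeg1video output.mpg
```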
It works if you simply use the "hue" filter to desaturate the video, like this:
ffmpeg -i inputfile.ogv -vf hue=s=0 outputfile-unsat.ogv