I need to decode a video into a sequence of bitmaps so that I can modify them, and then compress them back into a video file on Android.
I plan to manage this by using `getFrameAtTime` and saving each frame to an image sequence. Then I can modify the images in the sequence and encode them back into a movie. But I have two problems with this:
- First, as I read it, `getFrameAtTime` is intended for creating thumbnails and is not guaranteed to return the correct frame, which makes the resulting video laggy.
- Secondly, saving the images and reading them back takes a long time.
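For reference, the approach I'm describing is roughly this (a sketch; `videoPath` and `timeUs` are placeholders):

```java
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(videoPath);
// As noted above, this is really a thumbnail API: even with OPTION_CLOSEST
// there is no guarantee of getting the exact frame at timeUs on every device.
Bitmap frame = retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST);
retriever.release();
```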
I have read that the proper way to decode is with `MediaExtractor`, which is fine, but the only examples I have render directly to a `SurfaceView`. Is there any way for me to convert the `outputBuffer` to a `Bitmap`?
I need this to work on API level 16 and above.
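For context, the examples I've found configure the decoder roughly like this (a sketch, assuming the video track is track 0 and `surface` comes from the `SurfaceView`):

```java
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(videoPath);
MediaFormat format = extractor.getTrackFormat(0); // assuming track 0 is video
extractor.selectTrack(0);

MediaCodec decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
// With a Surface attached, decoded frames go straight to the display and
// never show up as readable pixel data in the output ByteBuffers.
decoder.configure(format, surface, null, 0);
decoder.start();
```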
There are many people saying that the method `onFrameAvailable()` is never called. Well, the listener should run on a different thread than the main thread. To set the listener, do something like the sketch below (where `this` is the class that implements `SurfaceTexture.OnFrameAvailableListener`):
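A minimal sketch, assuming an enclosing class (here called `FrameGrabber`, an illustrative name) that implements the listener and already owns a GLES texture id:

```java
private SurfaceTexture mSurfaceTexture;

private void attachFrameListener(final int textureId) {
    // A SurfaceTexture dispatches onFrameAvailable() through the Looper of
    // the thread it was created on, so create it on a dedicated thread
    // instead of the main thread (which is usually blocked waiting on the frame).
    HandlerThread listenerThread = new HandlerThread("FrameListener");
    listenerThread.start();
    new Handler(listenerThread.getLooper()).post(new Runnable() {
        @Override
        public void run() {
            mSurfaceTexture = new SurfaceTexture(textureId);
            mSurfaceTexture.setOnFrameAvailableListener(FrameGrabber.this);
        }
    });
}
```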
You can find a collection of useful examples on the bigflake site (bigflake.com/mediacodec).
In particular, `ExtractMpegFramesTest` demonstrates how to decode a .mp4 file to `Bitmap`, and `DecodeEditEncodeTest` decodes and re-encodes an H.264 stream, modifying the frames with a GLES shader.
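The core of the `Bitmap` conversion in `ExtractMpegFramesTest` is rendering the decoded frame to an off-screen GLES surface and reading the pixels back; a simplified sketch (`width` and `height` come from the video format):

```java
// After onFrameAvailable() fires, updateTexImage() is called and the frame
// is drawn into an off-screen EGL pbuffer surface; then read the pixels back.
ByteBuffer pixelBuf = ByteBuffer.allocateDirect(width * height * 4);
pixelBuf.order(ByteOrder.LITTLE_ENDIAN);
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuf);
pixelBuf.rewind();

Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bmp.copyPixelsFromBuffer(pixelBuf);
// GL's origin is bottom-left, so the image may need a vertical flip.
```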
Many of the examples use features introduced in API 18, such as `Surface` input to `MediaCodec` (which avoids a number of color-format issues) and `MediaMuxer` (which allows you to convert the raw H.264 elementary stream coming out of `MediaCodec` into a .mp4 file).
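For example, the `MediaMuxer` path looks roughly like this (a sketch; `outputPath`, `encodedBuf`, and `bufferInfo` are placeholders, and `encoderOutputFormat` is the format the encoder reports via `INFO_OUTPUT_FORMAT_CHANGED`):

```java
MediaMuxer muxer = new MediaMuxer(outputPath,
        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
// Add the track using the encoder's reported output format, not the
// format you originally configured the encoder with.
int trackIndex = muxer.addTrack(encoderOutputFormat);
muxer.start();

// For each encoded buffer drained from the encoder:
muxer.writeSampleData(trackIndex, encodedBuf, bufferInfo);

muxer.stop();
muxer.release();
```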
Some devices will allow you to extract video to YUV data in a `ByteBuffer`, modify it, and re-encode it, but other devices extract to proprietary YUV color formats that may be rejected by the API 16 version of `MediaCodec`.
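If you do try the `ByteBuffer` route, you can at least see which color formats a device's decoder claims to support before committing to it (a sketch using the API 16 `MediaCodecList`; `TAG` is a placeholder):

```java
for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
    MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
    if (info.isEncoder()) continue;
    for (String type : info.getSupportedTypes()) {
        if (type.equals("video/avc")) {
            MediaCodecInfo.CodecCapabilities caps = info.getCapabilitiesForType(type);
            for (int colorFormat : caps.colorFormats) {
                // Standard values match MediaCodecInfo.CodecCapabilities.COLOR_*;
                // proprietary formats show up as vendor-specific integers.
                Log.d(TAG, info.getName() + " color format: " + colorFormat);
            }
        }
    }
}
```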
I'd recommend coding for API 18 (Android 4.3, "Jelly Bean" MR2) or later.