In simple words, all I need to do is display a live stream of video frames in Android (each frame is in YUV420 format). I have a callback function where I receive individual frames as a byte array. Something that looks like this:
public void onFrameReceived(byte[] frame, int height, int width, int format) {
// display this frame to surfaceview/textureview.
}
A feasible but slow option is to convert the byte array to a Bitmap and draw it to the canvas on a SurfaceView. In the future, I would ideally like to be able to alter the brightness, contrast, etc. of this frame, and hence am hoping I can use OpenGL ES for that. What are my other options to do this efficiently?
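For reference, this is roughly what that slow path looks like (a minimal sketch; note that YuvImage only accepts NV21 or YUY2, so a planar YUV420 frame would first need repacking into NV21):

// Slow path: YUV bytes -> JPEG -> Bitmap -> Canvas
// Uses android.graphics.YuvImage/BitmapFactory/Rect/Canvas and java.io.ByteArrayOutputStream
private void drawFrameSlow(byte[] nv21Frame, int width, int height, SurfaceHolder holder) {
    YuvImage yuv = new YuvImage(nv21Frame, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);
    byte[] jpeg = out.toByteArray();
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);

    Canvas canvas = holder.lockCanvas();
    if (canvas != null) {
        canvas.drawBitmap(bitmap, 0, 0, null);
        holder.unlockCanvasAndPost(canvas);
    }
}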
Remember, unlike in implementations of the Camera or MediaPlayer classes, I can't direct my output to a surfaceview/textureview using camera.setPreviewTexture(surfaceTexture); as I am receiving individual frames using GStreamer in C.
I'm using ffmpeg for my project, but the principle for rendering the YUV frame should be the same for you.
If a frame, for example, is 756 x 576, then the Y frame will be that size. The U and V frames are half the width and height of the Y frame, so you will have to make sure you account for the size differences.
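For that 756 x 576 example, the plane sizes work out like this (a quick sketch, assuming tightly packed planar YUV420 with no line padding):

int width = 756, height = 576;
int ySize = width * height;              // 435456 bytes, full-resolution luma plane
int uSize = (width / 2) * (height / 2);  // 108864 bytes, chroma at half width and height
int vSize = uSize;

// Offsets into the frame byte array for planar YUV420 (I420): Y first, then U, then V
int yOffset = 0;
int uOffset = ySize;
int vOffset = ySize + uSize;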
I don't know about the camera API, but the frames I get from a DVB source have a width and each line also has a stride: extra pixels at the end of each line in the frame. Just in case yours is the same, account for this when calculating your texture coordinates.
Adjusting the texture coordinates to account for the width and stride (linesize):
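Something along these lines (a sketch rather than my exact code; videoWidth and lineStride are placeholder names for the visible width and the allocated line size):

// Only sample up to videoWidth / lineStride horizontally so the padding
// pixels at the end of each line never make it onto the screen.
float xMax = (float) videoWidth / (float) lineStride;   // e.g. 756 visible pixels in a 768-wide buffer

float[] textureCoords = {
        0.0f, 0.0f,   // top left
        xMax, 0.0f,   // top right
        0.0f, 1.0f,   // bottom left
        xMax, 1.0f    // bottom right
};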
The vertex shader I've used takes screen coordinates from 0.0 to 1.0, but you can change these to suit. It also takes in the texture coords and a colour input. I've used the colour input so that I can add fading, etc.
Vertex shader:
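Along these lines (a reconstruction rather than the exact source; the attribute and varying names are my own):

private static final String VERTEX_SHADER =
        "attribute vec4 a_position;" +   // screen coordinates, 0.0 to 1.0
        "attribute vec2 a_texCoord;" +
        "attribute vec4 a_color;" +      // colour input, used for fading etc.
        "varying vec2 v_texCoord;" +
        "varying vec4 v_color;" +
        "void main() {" +
        // map 0..1 screen coordinates into the -1..1 clip space GL expects
        "  gl_Position = vec4(a_position.xy * 2.0 - 1.0, 0.0, 1.0);" +
        "  v_texCoord = a_texCoord;" +
        "  v_color = a_color;" +
        "}";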
The fragment shader takes three uniform textures, one each for the Y, U and V frames, and converts them to RGB. It also multiplies by the colour passed in from the vertex shader:
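Roughly like this (again a sketch; the BT.601 conversion constants are the usual ones but the uniform names are placeholders):

private static final String FRAGMENT_SHADER =
        "precision mediump float;" +
        "uniform sampler2D u_texY;" +
        "uniform sampler2D u_texU;" +
        "uniform sampler2D u_texV;" +
        "varying vec2 v_texCoord;" +
        "varying vec4 v_color;" +
        "void main() {" +
        "  float y = texture2D(u_texY, v_texCoord).r;" +
        "  float u = texture2D(u_texU, v_texCoord).r - 0.5;" +
        "  float v = texture2D(u_texV, v_texCoord).r - 0.5;" +
        // BT.601 YUV -> RGB
        "  float r = y + 1.402 * v;" +
        "  float g = y - 0.344 * u - 0.714 * v;" +
        "  float b = y + 1.772 * u;" +
        "  gl_FragColor = vec4(r, g, b, 1.0) * v_color;" +
        "}";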
The vertices used are:
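Something like this (a sketch, ordered for a GL_TRIANGLE_STRIP quad in the 0.0 to 1.0 screen space the vertex shader expects):

private static final float[] VERTICES = {
        0.0f, 0.0f,   // top left
        1.0f, 0.0f,   // top right
        0.0f, 1.0f,   // bottom left
        1.0f, 1.0f    // bottom right
};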
Hope this helps!
EDIT:
For NV12 format you can still use a fragment shader, although I've not tried it myself. It takes in the interleaved UV as a luminance-alpha channel or similar.
See here for how one person has answered this: https://stackoverflow.com/a/22456885/2979092
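The idea, sketched below and untested as I said: the Y plane goes into a full-size single-channel texture and the interleaved UV plane into a half-size GL_LUMINANCE_ALPHA texture, so U comes out of the .r channel and V out of the .a channel:

private static final String NV12_FRAGMENT_SHADER =
        "precision mediump float;" +
        "uniform sampler2D u_texY;" +    // full-size GL_LUMINANCE texture
        "uniform sampler2D u_texUV;" +   // half-size GL_LUMINANCE_ALPHA texture
        "varying vec2 v_texCoord;" +
        "void main() {" +
        "  float y = texture2D(u_texY, v_texCoord).r;" +
        "  vec2 uv = texture2D(u_texUV, v_texCoord).ra - vec2(0.5, 0.5);" +
        "  float r = y + 1.402 * uv.y;" +
        "  float g = y - 0.344 * uv.x - 0.714 * uv.y;" +
        "  float b = y + 1.772 * uv.x;" +
        "  gl_FragColor = vec4(r, g, b, 1.0);" +
        "}";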
I took several answers from SO and various articles plus @WLGfx's answer above to come up with this:
I created two byte buffers, one for Y and one for the UV part of the texture, then converted the byte buffers to textures as sketched below. Both these textures are then sent as normal 2D textures to the GLSL shader.
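Roughly like this (a sketch of the upload rather than my exact code; frame, width, height and the two texture ids are assumed to already exist, and the shader is essentially the NV12-style one referenced above):

// Split the incoming frame into the Y plane and the interleaved UV plane
ByteBuffer yBuffer = ByteBuffer.allocateDirect(width * height)
        .order(ByteOrder.nativeOrder());
ByteBuffer uvBuffer = ByteBuffer.allocateDirect(width * height / 2)
        .order(ByteOrder.nativeOrder());
yBuffer.put(frame, 0, width * height).position(0);
uvBuffer.put(frame, width * height, width * height / 2).position(0);

// Y plane: full-size single-channel texture
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTextureId);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
        width, height, 0,
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yBuffer);

// Interleaved UV plane: half-size two-channel (luminance + alpha) texture
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, uvTextureId);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE_ALPHA,
        width / 2, height / 2, 0,
        GLES20.GL_LUMINANCE_ALPHA, GLES20.GL_UNSIGNED_BYTE, uvBuffer);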