Since Android API level 1, we can attach a MediaPlayer or Camera to a Surface with setDisplay() or setPreviewDisplay(), and the image data is then transferred to the GPU and processed much faster.
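For reference, a minimal sketch of that classic SurfaceHolder path, assuming the old android.hardware.Camera API (deprecated since API 21) and a SurfaceView in the layout; the helper class name is made up for illustration:

```java
import android.hardware.Camera;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

import java.io.IOException;

// Illustrative sketch: attach the camera (producer) to a SurfaceView's Surface.
public class LegacyPreviewHelper implements SurfaceHolder.Callback {
    private Camera camera;

    public LegacyPreviewHelper(SurfaceView surfaceView) {
        // The SurfaceHolder wraps the Surface whose buffers SurfaceFlinger consumes;
        // the camera becomes the producer of those buffers.
        surfaceView.getHolder().addCallback(this);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        camera = Camera.open();                    // back-facing camera by default
        try {
            camera.setPreviewDisplay(holder);      // attach the producer to the Surface
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        camera.startPreview();                     // frames now flow without app-side copies
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        camera.stopPreview();
        camera.release();
        camera = null;
    }
}
```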
After SurfaceTexture was introduced, we can create our own texture with the GL_TEXTURE_EXTERNAL_OES target and attach the MediaPlayer or Camera output to OpenGL ES.
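A minimal sketch of that SurfaceTexture path, assuming an EGL context is current on the calling thread (the class name is illustrative):

```java
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

import java.io.IOException;

// Illustrative sketch: create a GL_TEXTURE_EXTERNAL_OES texture, wrap it in a
// SurfaceTexture, and hand that SurfaceTexture to the camera so preview frames
// arrive directly as an external texture.
public class OesCameraTexture {
    public static SurfaceTexture attachCamera(Camera camera) throws IOException {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        // The SurfaceTexture is the consumer end of a buffer queue; the camera
        // is the producer. No pixel data is copied by application code.
        SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
        camera.setPreviewTexture(surfaceTexture);
        camera.startPreview();
        return surfaceTexture;
    }
}
```

The MediaPlayer case follows the same pattern: wrap the SurfaceTexture in a Surface and pass it to MediaPlayer.setSurface().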
These APIs are well known, but what I want to ask about is what happens underneath, in the Android graphics architecture.
The data is produced on the CPU side, so it has to be transferred to the GPU in a very fast way.
How does every Android device transfer the data so fast, and how is this implemented underneath?
Or is this a hardware concern that has nothing to do with Android?
The data is not produced on the CPU side. The camera and hardware video codecs store their data in buffers allocated by the kernel gralloc mechanism (referenced from native code through the non-public GraphicBuffer). Surfaces communicate through BufferQueue objects, which pass the frames around by handle, without copying the data itself.
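From the application side, this pass-by-handle flow is visible in the SurfaceTexture consumer pattern: updateTexImage() only latches the most recently queued buffer as the backing store of the external texture, and no pixel data is copied in app code. A rough sketch (the class name is illustrative):

```java
import android.graphics.SurfaceTexture;

// Sketch of the consumer side of the BufferQueue described above. The listener
// fires when the producer (camera or decoder) queues a buffer; updateTexImage()
// merely latches that buffer for GLES -- the application never copies pixels.
public class FrameLatcher implements SurfaceTexture.OnFrameAvailableListener {
    private final Object lock = new Object();
    private boolean frameAvailable;

    public void listenTo(SurfaceTexture surfaceTexture) {
        surfaceTexture.setOnFrameAvailableListener(this);
    }

    @Override
    public void onFrameAvailable(SurfaceTexture surfaceTexture) {
        // Called on an arbitrary thread; just record that a buffer was queued.
        synchronized (lock) {
            frameAvailable = true;
            lock.notifyAll();
        }
    }

    // Call on the thread that owns the EGL context backing the OES texture.
    public void awaitAndLatch(SurfaceTexture surfaceTexture) throws InterruptedException {
        synchronized (lock) {
            while (!frameAvailable) {
                lock.wait();
            }
            frameAvailable = false;
        }
        surfaceTexture.updateTexImage();  // acquire the newest buffer by handle
    }
}
```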
It's up to the OEM to ensure that the camera, video codecs, and GPU can use common formats. The YUV output by the video codec must be something that the GLES implementation can handle as an external texture. This is also why EGL_RECORDABLE_ANDROID is needed when sending GLES rendering to a MediaCodec: the EGL implementation needs to know that the frames it renders must be recognizable by the video codec.
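For illustration, a rough sketch of choosing such an EGL config and wrapping a MediaCodec input surface in an EGL window surface, in the spirit of projects like grafika; it assumes the encoder has already been configured with CONFIGURE_FLAG_ENCODE, and error checking is omitted:

```java
import android.media.MediaCodec;
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.view.Surface;

// Sketch: pick an EGL config with EGL_RECORDABLE_ANDROID and render into the
// encoder's input Surface, so frames drawn with GLES can be consumed by the codec.
public class RecordableEglConfig {
    // EGL_RECORDABLE_ANDROID from the EGL_ANDROID_recordable extension
    // (also exposed as EGLExt.EGL_RECORDABLE_ANDROID on newer API levels).
    private static final int EGL_RECORDABLE_ANDROID = 0x3142;

    public static EGLSurface createEncoderSurface(EGLDisplay display, MediaCodec encoder) {
        int[] attribList = {
                EGL14.EGL_RED_SIZE, 8,
                EGL14.EGL_GREEN_SIZE, 8,
                EGL14.EGL_BLUE_SIZE, 8,
                EGL14.EGL_ALPHA_SIZE, 8,
                EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                EGL_RECORDABLE_ANDROID, 1,   // frames must be consumable by the video encoder
                EGL14.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        EGL14.eglChooseConfig(display, attribList, 0, configs, 0, configs.length,
                numConfigs, 0);

        // The encoder's input Surface is the producer end of its BufferQueue;
        // rendering into this EGL window surface feeds frames to the codec.
        Surface inputSurface = encoder.createInputSurface();
        int[] surfaceAttribs = { EGL14.EGL_NONE };
        return EGL14.eglCreateWindowSurface(display, configs[0], inputSurface,
                surfaceAttribs, 0);
    }
}
```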