Media Codec and Rendering using GLSurfaceView, Optimization

Published 2019-06-13 17:36

Question:

I am using MediaCodec to encode frames coming from the camera, and I render them using a GLSurfaceView.

My onDrawFrame() looks like this:

public void onDrawFrame(GL10 unused) {
    float[] mtx = new float[16];
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

    // Latch the latest camera frame and get its texture transform.
    surface.updateTexImage();
    surface.getTransformMatrix(mtx);

    // First pass: render the frame to the GLSurfaceView (display).
    mDirectVideo.draw(surface);
    saveRenderState();

    // Second pass: render the same frame again to the encoder's input surface.
    delegate.mInputSurface.makeCurrent();
    mDirectVideo.draw(surface);
    delegate.swapBuffers();
    restoreRenderState();
}
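(The saveRenderState()/restoreRenderState() helpers are not shown in the post. For context, a minimal sketch of what such helpers typically do with EGL14; the field names here are placeholders of mine, not from the actual project:)

import android.opengl.EGL14;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;

private EGLDisplay mSavedDisplay;
private EGLContext mSavedContext;
private EGLSurface mSavedReadSurface;
private EGLSurface mSavedDrawSurface;

private void saveRenderState() {
    // Remember the GLSurfaceView's EGL state before switching surfaces.
    mSavedDisplay = EGL14.eglGetCurrentDisplay();
    mSavedContext = EGL14.eglGetCurrentContext();
    mSavedReadSurface = EGL14.eglGetCurrentSurface(EGL14.EGL_READ);
    mSavedDrawSurface = EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW);
}

private void restoreRenderState() {
    // Re-bind the previously current display, surfaces and context.
    EGL14.eglMakeCurrent(mSavedDisplay, mSavedDrawSurface,
            mSavedReadSurface, mSavedContext);
}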

So draw(surface) is called twice here, which renders the same frame to two surfaces and adds overhead to the system. Is there any way to do the draw only once? Running the shader twice is a costly operation. Is there any way to share the surface between the renderer and the encoder?

Answer 1:

If your draw() function is expensive -- you're rendering a complex scene in addition to blitting the video frame -- you can render to a texture using an FBO, and then just blit that texture twice. If your draw() is primarily just the video texture blit, then you can't make it any faster.
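For illustration, a rough sketch of that FBO approach in GLES20, reusing the names from the question (surface, mDirectVideo, delegate); prepareFbo() and blitTexture() are hypothetical helpers I've made up, with blitTexture() standing in for a simple textured-quad pass:

import android.opengl.GLES20;

private final int[] mFbo = new int[1];
private final int[] mFboTex = new int[1];

// One-time setup: an offscreen color texture attached to a framebuffer object.
private void prepareFbo(int width, int height) {
    GLES20.glGenTextures(1, mFboTex, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mFboTex[0]);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
            width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

    GLES20.glGenFramebuffers(1, mFbo, 0);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFbo[0]);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
            GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, mFboTex[0], 0);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
}

public void onDrawFrame(GL10 unused) {
    surface.updateTexImage();

    // Expensive pass happens exactly once, into the offscreen texture.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFbo[0]);
    mDirectVideo.draw(surface);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);

    // Cheap pass 1: blit the texture to the display surface.
    blitTexture(mFboTex[0]);
    saveRenderState();

    // Cheap pass 2: blit the same texture to the encoder's input surface.
    delegate.mInputSurface.makeCurrent();
    blitTexture(mFboTex[0]);
    delegate.swapBuffers();
    restoreRenderState();
}

This only pays off when draw() does real work beyond blitting the camera frame. Note also that the offscreen texture must belong to an EGL context that is current on both surfaces, which holds when makeCurrent() just switches surfaces on the same shared context.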

The bottom line is that you're rendering to two different surfaces, and there's currently (Android 4.4) no way to send the same buffer to two different consumers. The hardware on recent devices should have no trouble keeping up.

(Rendering the screen and the encoded video identically is somewhat limiting anyway, unless you're recording the screen and therefore want the display size and video size to be exactly the same. It's usually convenient to have the on-screen display fit into the UI, while the encoded video matches what's coming out of the camera.)

BTW, watch out for this issue.

Update: Grafika now includes an example of drawing + recording using both methods (draw twice, draw to FBO and blit). See RecordFBOActivity.