I am referring to this excellent example of how to encode the preview frames of the camera directly into an mp4 file: http://bigflake.com/mediacodec/CameraToMpegTest.java.txt
I have adapted the code so that I also render the preview image on the screen. For this I have a GLTextureView with its own EGLContext, which I then pass as the shared EGLContext when creating the EGLContext for the encoder rendering:
mEGLContext = EGL14.eglCreateContext(mEGLDisplay, configs[0],
        sharedContext == null ? EGL14.EGL_NO_CONTEXT : sharedContext,
        attrib_list, 0);
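For completeness, the surrounding setup looks roughly like this; the attrib_list contents and the accessor for the view's context are assumptions for illustration, not necessarily exactly what my code uses:

// Sketch of the shared-context creation (attrib_list and accessor names assumed).
int[] attrib_list = {
        EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,   // same GLES version as the view's context
        EGL14.EGL_NONE
};
// the on-screen view's context is passed as the share context for the encoder's context
EGLContext sharedContext = _textureView.getEGLManager().getContext();  // hypothetical accessor
mEGLContext = EGL14.eglCreateContext(mEGLDisplay, configs[0],
        sharedContext == null ? EGL14.EGL_NO_CONTEXT : sharedContext,
        attrib_list, 0);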
In my render loop I followed fadden's tip; for every frame I do the following:
- first I wait for a new image to arrive on the SurfaceTexture with awaitNewImage()
- then I make the GLTextureView's context current and render the frame into it
- after that I make the encoder's context current and render the frame into it
This looks something like this:
// wait until the producer has posted a new frame, then latch it into the texture
mFrameWatcher.awaitNewImage();
mSurfaceTexture.updateTexImage();
// render to the on-screen GLTextureView with its own context
_textureView.getEGLManager().makeCurrent();
_textureView.requestRender();
// then render the same texture into the encoder's input surface
mInputSurface.makeCurrent();
mInputSurface.requestRender();
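The awaitNewImage() call above follows the wait/notify pattern from CameraToMpegTest: the render thread blocks until onFrameAvailable() signals that a new camera frame is ready. A rough sketch of that pattern (based on the bigflake example; field names assumed):

private final Object mFrameSyncObject = new Object();  // guards mFrameAvailable
private boolean mFrameAvailable;

// called on the render thread; blocks until the camera has produced a new frame
public void awaitNewImage() {
    final int TIMEOUT_MS = 2500;
    synchronized (mFrameSyncObject) {
        while (!mFrameAvailable) {
            try {
                mFrameSyncObject.wait(TIMEOUT_MS);
                if (!mFrameAvailable) {
                    throw new RuntimeException("Camera frame wait timed out");
                }
            } catch (InterruptedException ie) {
                throw new RuntimeException(ie);
            }
        }
        mFrameAvailable = false;
    }
}

// SurfaceTexture.OnFrameAvailableListener callback, invoked when a frame arrives
@Override
public void onFrameAvailable(SurfaceTexture st) {
    synchronized (mFrameSyncObject) {
        mFrameAvailable = true;
        mFrameSyncObject.notifyAll();
    }
}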
This worked well as long as I was testing only on my Nexus 4 with Android 4.3.
However, since I got the new Nexus 5 with Android 4.4, the encoder only receives 2 distinct frames per second from the SurfaceTexture; each of those frames is drawn repeatedly, so it encodes the same frame 15 times in a row. Yet the frames are rendered correctly to my GLTextureView at 30 distinct frames per second. I first thought this might be a Nexus 5 problem, so I updated another Nexus 4 to Android 4.4, but now the Nexus 4 behaves the same way.
I played around a bit, and finally I was able to work around the problem by detaching and re-attaching the SurfaceTexture to the respective context whenever I switch contexts. That looks something like this:
mFrameWatcher.awaitNewImage();
mSurfaceTexture.updateTexImage();
// render on screen first, in the view's context
_textureView.getEGLManager().makeCurrent();
_textureView.requestRender();
// move the SurfaceTexture over to the encoder's context and render there
mSurfaceTexture.detachFromGLContext();
mInputSurface.makeCurrent();
mSurfaceTexture.attachToGLContext(_textureViewRenderer.getTextureId());
mInputSurface.requestRender();
// move it back to the view's context for the next frame
mSurfaceTexture.detachFromGLContext();
_textureView.getEGLManager().makeCurrent();
mSurfaceTexture.attachToGLContext(_textureViewRenderer.getTextureId());
My question now is: Is this the correct way to do this? Honestly, I thought re-attaching the SurfaceTexture should not be necessary when I use shared contexts. The re-attaching also takes quite a long time, 3-6 ms per frame with peaks of 12 ms, which could be better spent rendering. Am I doing or understanding something wrong here? Why did it work like a charm on the Nexus 4 with 4.3 without the need to re-attach the SurfaceTexture?
It appears this is in fact the same problem as this question. I put some details there; in short, you should be able to fix it by un-binding and re-binding the texture, which is essentially what you're doing with the awkward attach/detach sequence.
In my code, I was able to fix it by changing this:
to this:
in my texture renderer. I'll update the bigflake examples in a bit.
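In other words: in the texture renderer's drawFrame(), bind the external texture before every draw and un-bind it when done, instead of leaving it bound across frames. A sketch of what that looks like (method and field names assumed from the STextureRender class in CameraToMpegTest, not the exact diff):

public void drawFrame(SurfaceTexture st) {
    GLES20.glUseProgram(mProgram);
    // ... set uniforms and vertex attributes as before ...

    // bind the external texture on every draw instead of relying on a binding
    // left over from an earlier frame (which may belong to the other context)
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTextureID);

    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);

    // un-bind when done so the next context/draw starts from a clean state
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
    GLES20.glUseProgram(0);
}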