Simultaneous camera preview and processing

Published 2020-04-02 18:42

I'm designing an application that has an OpenGL processing pipeline (a collection of shaders) and simultaneously requires the end user to see the unprocessed camera preview.

For the sake of example, suppose you want to show the user the camera preview and at the same time count the number of red objects in the scenes you receive from the camera, but any shaders you use to count the objects, such as hue filtering, should not be seen by the user.

How would I go about setting this up properly?

I know I can set up a camera preview and then receive camera frame data in YUV format in the preview callback, dump that into an OpenGL texture, and process the frame that way. However, that approach has performance problems: I have to round-trip the data from the camera hardware to the VM, then pass it back to GPU memory. To avoid this, I'm using SurfaceTexture to get the data from the camera directly in an OpenGL-understandable format and passing that to my shaders.
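For reference, the direct path looks roughly like this (a sketch only, assuming the legacy `android.hardware.Camera` API as used on a Nexus 4, and assuming it runs on the GL thread with a current EGL context; `requestRender` is a hypothetical callback into your own render loop):

```java
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

// Create an external OES texture for the camera to write into.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// Wrap it in a SurfaceTexture and hand it to the camera as the preview target.
SurfaceTexture cameraSurface = new SurfaceTexture(tex[0]);
cameraSurface.setOnFrameAvailableListener(st -> requestRender());

Camera camera = Camera.open();
camera.setPreviewTexture(cameraSurface);
camera.startPreview();
// Later, on the GL thread, call cameraSurface.updateTexImage() to latch
// the newest frame into tex[0] before sampling it in a shader.
```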

I thought I'd be able to show that same unprocessed SurfaceTexture to the end user, but TextureView does not have a constructor or a setter where I can pass it the SurfaceTexture I want it to render. It always creates its own.

This is an overview of my current setup:

  • GLRenderThread: this class extends Thread, sets up the OpenGL context, display, etc., and uses a SurfaceTexture as the surface (3rd parameter of eglCreateWindowSurface).
  • GLFilterChain: A collection of shaders that perform detection on the input texture.
  • Camera: Uses a separate SurfaceTexture, which serves as the input of GLFilterChain and grabs the camera's preview.
  • Finally a TextureView that displays the GLRenderThread's SurfaceTexture

Obviously, with this setup, I'm showing the processed frames to the user, which is not what I want. Further, the processing of the frames is not real-time. Basically, I run the input from Camera through the chain once, and once all filters are done, I call updateTexImage to grab the next frame from the Camera. My processing runs at around 10 frames per second on a Nexus 4.

I feel that I probably need to use 2 GL contexts, one for real-time preview and one for processing, but I'm not certain. I'm hoping someone can push me in the right direction.

2 Answers
欢心
#2 · 2020-04-02 18:51

Unless your processing runs slower than real time, the answer is simple: just keep the original camera texture untouched, render the processed image into a different texture, and display both to the user, side by side in a single GLView. Keep a single thread, as all the processing happens on the GPU anyway. Multiple threads only complicate matters here.
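A sketch of the side-by-side idea: two draw calls into the same surface, with the viewport split in half. `drawFullScreenQuad()` is a hypothetical helper that draws a quad with the currently bound program and texture; `previewProgram` would be a plain pass-through shader, and `processedTex` the final output of the filter chain.

```java
// Left half: the untouched camera frame via a pass-through shader.
GLES20.glViewport(0, 0, width / 2, height);
GLES20.glUseProgram(previewProgram);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, cameraTex);
drawFullScreenQuad();

// Right half: the processed result (an ordinary 2D texture by now).
GLES20.glViewport(width / 2, 0, width / 2, height);
GLES20.glUseProgram(displayProgram);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, processedTex);
drawFullScreenQuad();
```

Because both passes run in the same context on the same thread, no texture sharing or synchronization is needed.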

The number of processing steps does not really matter, as there can be an arbitrary number of intermediate textures (see also ping-ponging) that are never displayed to the user — nothing forces you to display them.
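The ping-pong bookkeeping itself is trivial: each pass reads from one intermediate texture and writes to the other, then the roles swap. A minimal illustration (the indices would select between two texture/FBO pairs; the actual GL objects are omitted here):

```java
// Tracks which of two intermediate textures is the read source and
// which is the render target for the current pass.
public class PingPong {
    private int src = 0;
    private int dst = 1;

    public int readIndex()  { return src; }  // texture to sample from
    public int writeIndex() { return dst; }  // FBO/texture to render into

    // Call after each pass: the freshly written texture becomes the
    // next pass's input.
    public void swap() {
        int tmp = src;
        src = dst;
        dst = tmp;
    }
}
```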

The notion of real time is probably confusing here. Just think of a frame as an indivisible time snapshot. Doing so ignores the delay it takes for an image to go from the camera to the screen, but if you can keep an interactive frame rate (say, at least 20 frames per second), this can mostly be ignored.

On the other hand, if your processing is much slower, you need to choose between introducing a delay in the camera feed and processing only every Nth frame, or displaying every camera frame in real time and letting the next processed frame lag behind. For the latter, you would probably need two separate rendering contexts to enable asynchronous processing, which might be hard to do on Android (or maybe as simple as creating a second GLView, since you can live without data sharing between the contexts).
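The "process only every Nth frame" option needs nothing more than a counter in the render loop: display every frame, but run the expensive filter chain only when the throttle says so. A minimal sketch:

```java
// Decides, once per rendered frame, whether the filter chain should run.
public class FrameThrottle {
    private final int n;
    private long frameCount = 0;

    public FrameThrottle(int n) { this.n = n; }

    // Returns true on frames 0, n, 2n, ... — i.e. every Nth frame.
    public boolean shouldProcess() {
        return (frameCount++ % n) == 0;
    }
}
```

In the render loop you would always draw the camera texture, and wrap the filter-chain passes in `if (throttle.shouldProcess()) { ... }`.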

戒情不戒烟
#3 · 2020-04-02 18:59

Can you please upload some of the code you are using?

You might be able to call glDrawArrays with a texture created for and bound to the surface view you are using to display the preview, and then bind a separate texture to do the analysis with. Something like:

// Pass-through shader: draws the raw camera frame for the preview.
GLES20.glUseProgram(simpleProgram);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[0]);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);

// Analysis shader: runs your detection on the second texture.
GLES20.glUseProgram(crazyProgram);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[1]);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);

where your camera's preview SurfaceTexture is bound to textures[0], and a separate SurfaceTexture is created for textures[1].

maybe?
