Android Camera preview on two views (multi-lens)

Published 2019-02-02 21:09

Question:

I need two camera previews in my app, but the Android camera can deliver only one preview at a time. Is there any way to pipe/copy that preview to another view? I came across this question, How to create multi lenses or preview using one camera in Android, where the answerer says that

On Android 3.0 or later, you can use the setPreviewTexture method to pipe the preview data into an OpenGL texture, which you can then render to multiple quads in a GLSurfaceView or equivalent.

But I have no idea how to render that to multiple quads in a GLSurfaceView. I need to support Android 4.0+, and I don't want to use the preview frames from the preview callback, because that causes significant delay. Any help would be appreciated. Thanks!

Here is my code for a single preview:

activity_main.xml

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@android:color/black"
    tools:context=".MainActivity" >

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="horizontal"
        android:weightSum="10" >

        <TextureView
            android:id="@+id/textureView1"
            android:layout_width="0dp"
            android:layout_height="match_parent"
            android:layout_weight="5" />

        <TextureView
            android:id="@+id/textureView2"
            android:layout_width="0dp"
            android:layout_height="match_parent"
            android:layout_weight="5" />
    </LinearLayout>

</RelativeLayout>

MainActivity.java

public class MainActivity extends Activity implements SurfaceTextureListener{

    private Camera mCamera;
    private TextureView mTextureView1;
    private TextureView mTextureView2;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        mTextureView1 = (TextureView) findViewById(R.id.textureView1);
        mTextureView2 = (TextureView) findViewById(R.id.textureView2);

        // Only textureView1 gets a listener; the camera preview can target only one surface.
        mTextureView1.setSurfaceTextureListener(this);
    }



    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width,
            int height) {


        try {
            // getCameraId() and setCameraDisplayOrientation() are helper methods
            // (sketched after the class).
            mCamera = Camera.open(getCameraId());
            mCamera.setPreviewTexture(surface);
            CameraInfo cameraInfo = new CameraInfo();
            Camera.getCameraInfo(getCameraId(), cameraInfo);
            setCameraDisplayOrientation(this, getCameraId(), mCamera);
            mCamera.startPreview();

        } catch (Exception e) {
            e.printStackTrace();
        }

    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        try {
            mCamera.stopPreview();
            mCamera.release();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return true;
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width,
            int height) {


    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {

    }


}
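The helpers getCameraId() and setCameraDisplayOrientation() are omitted above; roughly, they follow the standard pattern (assuming the back-facing camera is wanted, and the usual orientation-compensation logic from the Camera.setDisplayOrientation documentation):

private int getCameraId() {
    // Assumes the back-facing camera; fall back to camera 0 if none matches.
    Camera.CameraInfo info = new Camera.CameraInfo();
    for (int i = 0; i < Camera.getNumberOfCameras(); i++) {
        Camera.getCameraInfo(i, info);
        if (info.facing == Camera.CameraInfo.CAMERA_FACING_BACK) {
            return i;
        }
    }
    return 0;
}

private static void setCameraDisplayOrientation(Activity activity, int cameraId, Camera camera) {
    Camera.CameraInfo info = new Camera.CameraInfo();
    Camera.getCameraInfo(cameraId, info);
    int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int degrees = 0;
    switch (rotation) {
        case Surface.ROTATION_0:   degrees = 0;   break;
        case Surface.ROTATION_90:  degrees = 90;  break;
        case Surface.ROTATION_180: degrees = 180; break;
        case Surface.ROTATION_270: degrees = 270; break;
    }
    int result;
    if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        result = (info.orientation + degrees) % 360;
        result = (360 - result) % 360;  // compensate for the front-camera mirror
    } else {
        result = (info.orientation - degrees + 360) % 360;
    }
    camera.setDisplayOrientation(result);
}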

Output: (screenshot omitted)

Answer 1:

Start by sending the camera preview to a SurfaceTexture instead of a Surface associated with a View. That takes the output of the camera and makes it available for use as a GLES texture.
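A minimal sketch of that first step, assuming it runs on the thread that owns the GL context; the method name and the listener parameter are placeholders of my own (uses GLES20 and GLES11Ext from android.opengl):

private SurfaceTexture attachCameraToGles(Camera camera,
        SurfaceTexture.OnFrameAvailableListener listener) throws IOException {
    // Create an "external" OES texture; the camera fills it with each preview frame.
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
    GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

    // Wrap the texture in a SurfaceTexture and point the camera at it
    // instead of at a view's surface.
    SurfaceTexture cameraTexture = new SurfaceTexture(tex[0]);
    cameraTexture.setOnFrameAvailableListener(listener);
    camera.setPreviewTexture(cameraTexture);
    camera.startPreview();
    return cameraTexture;
}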

Then, instead of having two TextureViews, just render two textured quads with GLES, each occupying half of a single View. This is easier than rendering to two different surfaces (there's only one EGL context to worry about). If you haven't worked with OpenGL ES before, there can be a bit of a learning curve.
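To illustrate the two-quads idea: in a GLSurfaceView.Renderer, onDrawFrame() can latch the newest camera frame and draw it once per half of the view. drawTexturedQuad() is a placeholder for whatever quad-drawing code you end up with (Grafika's FullFrameRect does this job), and mCameraTexture, mCameraTextureId, mTexMatrix (a float[16]) and the surface dimensions are assumed fields:

@Override
public void onDrawFrame(GL10 unused) {
    // Latch the newest camera frame into the external texture and grab its transform.
    mCameraTexture.updateTexImage();
    mCameraTexture.getTransformMatrix(mTexMatrix);

    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    // Left half of the view.
    GLES20.glViewport(0, 0, mSurfaceWidth / 2, mSurfaceHeight);
    drawTexturedQuad(mCameraTextureId, mTexMatrix);

    // Right half of the view, drawing the same source texture again.
    GLES20.glViewport(mSurfaceWidth / 2, 0, mSurfaceWidth / 2, mSurfaceHeight);
    drawTexturedQuad(mCameraTextureId, mTexMatrix);
}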

The pieces you need can be found in Grafika. Consider, for example, the "Continuous capture" and "Show + capture camera" activities. Both direct the camera output to a SurfaceTexture and render it twice; in their case it's once to the screen and once to a video encoder's input surface, but it's the same idea. If you look at "Hardware scaler exerciser" you can see it blitting a textured quad that bounces around the screen; you can use that as an example of how to set the size and position of the quad.

There's also "Double decode", which uses a pair of TextureViews to show two decoded movies side by side. You don't want to copy what it does, though: it's receiving content from two different sources, not showing a single source twice.

The various activities use GLES with TextureView, SurfaceView, and GLSurfaceView. Each view type comes with unique benefits and limitations.

Update: The "Texture from Camera" activity, added after this answer was originally written, is probably the closest to what you want. It sends the camera preview to a SurfaceTexture and demonstrates how to move, resize, rotate, and zoom the image by rendering it with GLES.
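If you do go the GLSurfaceView route, a common way to drive redraws (a sketch assuming a GLSurfaceView field named mGLView, not something specific to any Grafika activity) is RENDERMODE_WHEN_DIRTY plus requestRender() from the frame-available callback, so you render exactly once per camera frame:

mGLView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
cameraTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        // Called on an arbitrary thread; requestRender() is safe to call from here.
        mGLView.requestRender();  // onDrawFrame() will then call updateTexImage()
    }
});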