I have an application which takes the camera preview, performs a basic image processing function on every frame (e.g. edge detection, colour change, image warp etc.) and displays the modified frame to the screen in "real time". Similar to the "Paper Camera" app in Android Market.
A summary of my approach:
1: Create two overlapping Views in a FrameLayout:
A SurfaceView to pass to Camera.setPreviewDisplay(). (Passing null would prevent the camera preview from starting on some devices - OpenCV used to do this before Android 4.0, I believe.)
A class called "LiveView" which extends View and implements Camera.PreviewCallback. This view receives frames from the camera and displays each frame after modification (e.g. edge detection). This View sits on top of the SurfaceView.
2: I call Camera.setPreviewCallbackWithBuffer(), so that preview frames are delivered to my LiveView (see the first sketch after this list).
3: In the onPreviewFrame() of the LiveView, I take the captured frame (byte[]), convert it from YUV to RGB and perform the image processing, then call postInvalidate(). (The YUV-to-RGB conversion and the image processing are done in native code.)
4: In the onDraw() method of LiveView, I create a bitmap from the modified RGB frame (byte[]) and draw the bitmap to the canvas (see the second sketch after this list).
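For reference, here is a stripped-down sketch of the setup from steps 1 and 2. The layout is built in code for brevity (mine is actually inflated from XML), and names like CameraActivity and setUp() are just placeholders:

```java
import java.io.IOException;

import android.app.Activity;
import android.graphics.ImageFormat;
import android.hardware.Camera;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.widget.FrameLayout;

public class CameraActivity extends Activity implements SurfaceHolder.Callback {

    private Camera mCamera;
    private LiveView mLiveView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Two overlapping views in a FrameLayout: the SurfaceView only exists
        // so setPreviewDisplay() has a real surface; LiveView sits on top of it.
        FrameLayout root = new FrameLayout(this);
        SurfaceView dummy = new SurfaceView(this);
        mLiveView = new LiveView(this);
        root.addView(dummy);
        root.addView(mLiveView);
        setContentView(root);

        dummy.getHolder().addCallback(this);
        dummy.getHolder().setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); // pre-3.0 devices only
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        mCamera = Camera.open();
        try {
            mCamera.setPreviewDisplay(holder); // passing null breaks the preview on some devices
        } catch (IOException e) {
            throw new RuntimeException(e);
        }

        Camera.Parameters params = mCamera.getParameters();
        Camera.Size size = params.getPreviewSize();
        int bufferSize = size.width * size.height
                * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;

        mLiveView.setUp(size.width, size.height);
        mCamera.setPreviewCallbackWithBuffer(mLiveView);
        mCamera.addCallbackBuffer(new byte[bufferSize]); // recycled in onPreviewFrame()
        mCamera.startPreview();
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        if (mCamera != null) {
            mCamera.stopPreview();
            mCamera.release();
            mCamera = null;
        }
    }
}
```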
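And a sketch of the LiveView itself (steps 3 and 4). processFrame() stands in for my native YUV-to-RGBA conversion plus image processing; the real JNI signature differs:

```java
import java.nio.ByteBuffer;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.hardware.Camera;
import android.view.View;

public class LiveView extends View implements Camera.PreviewCallback {

    private Bitmap mBitmap;       // reused every frame to avoid allocating in onDraw()
    private byte[] mRgbaBuffer;   // output of the native processing (RGBA, 4 bytes/pixel)
    private int mWidth, mHeight;

    public LiveView(Context context) {
        super(context);
    }

    public void setUp(int width, int height) {
        mWidth = width;
        mHeight = height;
        mBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        mRgbaBuffer = new byte[width * height * 4];
    }

    @Override
    public void onPreviewFrame(byte[] yuvData, Camera camera) {
        // Native YUV -> RGBA conversion plus edge detection / colour change / warp.
        processFrame(yuvData, mRgbaBuffer, mWidth, mHeight);

        // Hand the buffer back so the camera can fill it with the next frame.
        camera.addCallbackBuffer(yuvData);
        postInvalidate();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (mBitmap != null) {
            mBitmap.copyPixelsFromBuffer(ByteBuffer.wrap(mRgbaBuffer));
            canvas.drawBitmap(mBitmap, 0, 0, null);
        }
    }

    // Placeholder for my JNI function; the real one differs.
    private native void processFrame(byte[] yuvIn, byte[] rgbaOut, int width, int height);
}
```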
This works (at between 5 and 10 fps, depending on the device), but I would like to hear how others have approached this problem and how it could be improved. In particular:
- Would I gain any performance by extending GLSurfaceView rather than View to create the LiveView class?
- It sounds very inefficient to have two surfaces being updated for every frame. Is there an alternative?
- To do this more efficiently, should I be accessing the camera at the native level? (I believe OpenCV takes this approach.)
Many Thanks