I need help with an application I am working on. The application needs a custom camera interface that records video with audio and adds some objects in real time on the TextureView canvas. The old Camera API is deprecated, so I have to use the Camera2 API to render the live preview on a TextureView. My goal is to draw objects on top of the TextureView canvas (some text, a JPG, or a GIF) while the camera stream renders in the background, and to record a video that contains both the camera feed and my overlay content.
The problem is that I can draw custom content in a transparent overlay view, but that is only for the user's viewing purposes and does not end up in the recording. I have researched this for a few days but have not found the right approach.
I tried the following code after calling the openCamera() method, but then I only see the drawn rectangle and not the camera preview:
Canvas canvas = mTextureView.lockCanvas();
Paint myPaint = new Paint();
myPaint.setColor(Color.WHITE);
myPaint.setStrokeWidth(10);
canvas.drawRect(100, 100, 300, 300, myPaint);
mTextureView.unlockCanvasAndPost(canvas);
I also tried a custom TextureView subclass and overrode the onDrawForeground(Canvas canvas) method, but that doesn't work either.
The onDraw() method in the TextureView class is final, so I cannot do anything at that point except stream the camera feed.
/**
* Subclasses of TextureView cannot do their own rendering
* with the {@link Canvas} object.
*
* @param canvas The Canvas to which the View is rendered.
*/
@Override
protected final void onDraw(Canvas canvas) {
}
In short, I want the user to be able to record video through my camera app with some props added here and there.
Modifying a video in real time is a processor-intensive and hence high battery overhead operation. I am sure you know this, but it's worth saying that if you can add your modifications on the server side, for example by sending the stream along with a timestamped set of text overlays to the server, you will have more horsepower available server side.
The code I used adds text and an image to a still picture or frame captured by Camera2 on Android. I have not used it with video, so I can't comment on speed or whether it is practical to do this with a real-time video stream; it wasn't optimised for that, but it should be a starting point for you.
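A minimal sketch of that idea, assuming a JPEG Image delivered by an ImageReader and a hypothetical overlayBitmap plus placeholder coordinates for the props:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.media.Image;
import java.nio.ByteBuffer;

public class StillOverlay {

    // Decode a JPEG Image from an ImageReader into a mutable Bitmap,
    // draw text and a second bitmap on top, and return the result.
    // overlayBitmap and the drawing coordinates are placeholders.
    public static Bitmap addOverlay(Image jpegImage, Bitmap overlayBitmap) {
        ByteBuffer buffer = jpegImage.getPlanes()[0].getBuffer();
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);

        // Decode, then copy into a mutable bitmap so we can draw on it.
        Bitmap frame = BitmapFactory.decodeByteArray(bytes, 0, bytes.length)
                .copy(Bitmap.Config.ARGB_8888, true);

        Canvas canvas = new Canvas(frame);
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setColor(Color.WHITE);
        paint.setTextSize(64);

        canvas.drawText("Hello from the overlay", 50, 100, paint);   // text prop
        canvas.drawBitmap(overlayBitmap, 50, 150, null);             // image prop

        return frame;
    }
}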
Likely the most performant option is to pipe the camera feed straight into the GPU, draw on top of it there, and from there render to the display and a video encoder directly.
This is what many video chat apps do, for example, for any effects.
You can use a SurfaceTexture to connect camera2 to EGL; render the preview onto a quad, and then draw your additions on top.
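A hedged sketch of that wiring, assuming an already-opened CameraDevice, a chosen preview Size, and a GL thread whose EGL context is current (the class and method names here are just placeholders):

import android.graphics.SurfaceTexture;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.util.Size;
import android.view.Surface;

public class CameraGlInput {

    // Create an external OES texture and a SurfaceTexture bound to it.
    // Call this on the GL thread, after the EGL context is current,
    // so the texture name is valid in that context.
    public static SurfaceTexture createCameraTexture(Size previewSize) {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        SurfaceTexture cameraTexture = new SurfaceTexture(tex[0]);
        cameraTexture.setDefaultBufferSize(previewSize.getWidth(),
                previewSize.getHeight());
        return cameraTexture;
    }

    // Point the camera2 repeating request at the SurfaceTexture.
    // cameraDevice is assumed to be an already-opened CameraDevice.
    public static CaptureRequest.Builder buildPreviewRequest(
            CameraDevice cameraDevice, SurfaceTexture cameraTexture)
            throws CameraAccessException {
        Surface cameraSurface = new Surface(cameraTexture);
        CaptureRequest.Builder builder =
                cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
        builder.addTarget(cameraSurface);
        return builder;
    }
}

Each frame, call updateTexImage() on the returned SurfaceTexture from the GL thread and sample it with a samplerExternalOES in your fragment shader to draw the camera quad before drawing your overlays.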
Then you can render to a screen buffer (a GLSurfaceView, for example), and to a separate EGLSurface created from a MediaRecorder/MediaCodec input Surface.
There's a lot of code involved there, and a lot of scaffolding for EGL setup, so it's hard to point to any simple examples.
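As a rough sketch of just the dual-surface rendering step, assuming an existing EGLDisplay/EGLContext/EGLConfig, the Surface backing your on-screen view, and the input Surface obtained from MediaCodec.createInputSurface() or MediaRecorder.getSurface(); DualSurfaceRenderer and drawScene are placeholder names:

import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLExt;
import android.opengl.EGLSurface;
import android.view.Surface;

public class DualSurfaceRenderer {

    private final EGLDisplay display;
    private final EGLContext context;
    private final EGLSurface displaySurface;   // backed by your on-screen view
    private final EGLSurface encoderSurface;   // backed by the encoder's input Surface

    public DualSurfaceRenderer(EGLDisplay display, EGLContext context,
                               EGLConfig config, Surface viewSurface,
                               Surface codecInputSurface) {
        this.display = display;
        this.context = context;
        int[] attribs = { EGL14.EGL_NONE };
        this.displaySurface = EGL14.eglCreateWindowSurface(
                display, config, viewSurface, attribs, 0);
        this.encoderSurface = EGL14.eglCreateWindowSurface(
                display, config, codecInputSurface, attribs, 0);
    }

    // drawScene is assumed to draw the camera quad plus your overlays.
    public void renderFrame(Runnable drawScene, long presentationTimeNs) {
        // Draw once for the screen...
        EGL14.eglMakeCurrent(display, displaySurface, displaySurface, context);
        drawScene.run();
        EGL14.eglSwapBuffers(display, displaySurface);

        // ...and once more for the encoder, tagging the frame's timestamp.
        EGL14.eglMakeCurrent(display, encoderSurface, encoderSurface, context);
        drawScene.run();
        EGLExt.eglPresentationTimeANDROID(display, encoderSurface, presentationTimeNs);
        EGL14.eglSwapBuffers(display, encoderSurface);
    }
}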