How can I draw on a video while recording it in Android?

Posted 2019-04-14 05:40

I am trying to develop an app that lets me draw on a video while recording it, and then save both the drawing and the video in one mp4 file for later use. I also want to use the camera2 API, since my app needs to run on devices at API level 21 and higher, and I always avoid deprecated libraries.

I tried many approaches, including FFmpeg, where I overlaid the bitmap from TextureView.getBitmap() (the camera preview) with a bitmap taken from the drawing canvas. It worked, but because getBitmap() is a slow call, the video couldn't capture enough frames (not even 25 fps), so playback ran too fast. I also want audio to be included. A rough sketch of that per-frame compositing is shown below.
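For reference, the per-frame compositing in that FFmpeg attempt looked roughly like this (a minimal sketch, not my exact code; FrameComposer, composeFrame() and drawingBitmap are placeholder names):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.TextureView;

// Sketch of the per-frame compositing described above (placeholder names).
// The TextureView.getBitmap() readback is the slow part that kept capture below 25 fps.
class FrameComposer {
    static Bitmap composeFrame(TextureView cameraPreview, Bitmap drawingBitmap) {
        Bitmap frame = cameraPreview.getBitmap();                    // grab the current preview frame (slow GPU -> CPU readback)
        Bitmap mutable = frame.copy(Bitmap.Config.ARGB_8888, true);  // ensure a mutable copy we can draw into
        Canvas canvas = new Canvas(mutable);
        canvas.drawBitmap(drawingBitmap, 0, 0, null);                // overlay the user's drawing on top
        return mutable;                                              // this composited bitmap was then fed to FFmpeg
    }
}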

I thought about the MediaProjection API, but I am not sure it can restrict its VirtualDisplay to just the layout containing the camera and the drawing, because the user may also add text to the video, and I don't want the keyboard to appear in the recording.

Please help; it's been a week of research and I have found nothing that works for me.

P.S.: I don't mind a little processing time after the user presses the "Stop Recording" button.

EDITED:

Now, after Eddy's answer, I am using the shadercam library to draw on the camera surface, since it handles the video rendering. The workaround is to render my canvas into a bitmap and then into a GL texture, but I have not been able to do it successfully. I need your help, I need to finish this app :S A sketch of the bitmap-to-texture step I am attempting is shown below.
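The bitmap-to-GL-texture upload itself can be done with GLUtils, independently of shadercam's addTexture()/updateTexture() helpers. A minimal sketch, assuming an ARGB_8888 bitmap and that the calls happen on the GL thread that owns the EGL context (BitmapTextureUploader is a placeholder name):

import android.graphics.Bitmap;
import android.opengl.GLES20;
import android.opengl.GLUtils;

// Minimal sketch of the canvas -> bitmap -> GL texture step.
// Must be called on the GL thread that owns the EGL context.
class BitmapTextureUploader {
    // Creates a new GL texture from the bitmap and returns its id.
    static int createTexture(Bitmap bitmap) {
        int[] ids = new int[1];
        GLES20.glGenTextures(1, ids, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);  // upload the ARGB_8888 bitmap
        return ids[0];
    }

    // Re-uploads an updated bitmap into an existing texture (e.g. after drawing more text).
    static void updateTexture(int textureId, Bitmap bitmap) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, bitmap);
    }
}

Note that for the drawing to actually be visible, the fragment shader also has to sample this texture and blend it over the camera texture.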

I am using the shadercam library (https://github.com/googlecreativelab/shadercam), and I replaced the "ExampleRenderer" file with the following code:

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.SurfaceTexture;
import android.opengl.GLES20;

import com.androidexperiments.shadercam.gl.CameraRenderer; // shadercam's renderer base class (package may differ by version)

public class WriteDrawRenderer extends CameraRenderer
{
    private float offsetR = 1f;
    private float offsetG = 1f;
    private float offsetB = 1f;

    // Sentinel value meaning "no touch pending"
    private float touchX = 1000000000;
    private float touchY = 1000000000;

    // Bitmap that the user's drawing (text) is rendered into
    private Bitmap textBitmap;

    // GL texture id returned by shadercam's addTexture()
    private int textureId;

    private boolean isFirstTime = true;

    // Canvas that draws into textBitmap instead of rendering to the screen
    private Canvas bitmapCanvas;

    /**
     * If we don't modify anything here, the default shaders from shadercam's assets folder are used.
     *
     * Base all shaders off those, since some default uniforms/textures are passed on every
     * frame for the camera coordinates and texture coordinates.
     */
    public WriteDrawRenderer(Context context, SurfaceTexture previewSurface, int width, int height)
    {
        super(context, previewSurface, width, height, "touchcolor.frag.glsl", "touchcolor.vert.glsl");
        //other setup, if needed, goes here
    }

    /**
     * we override {@link #setUniformsAndAttribs()} and make sure to call the super so we can add
     * our own uniforms to our shaders here. CameraRenderer handles the rest for us automatically
     */
    @Override
    protected void setUniformsAndAttribs()
    {
        super.setUniformsAndAttribs();

        int offsetRLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetR");
        int offsetGLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetG");
        int offsetBLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetB");

        GLES20.glUniform1f(offsetRLoc, offsetR);
        GLES20.glUniform1f(offsetGLoc, offsetG);
        GLES20.glUniform1f(offsetBLoc, offsetB);

        if (touchX < 1000000000 && touchY < 1000000000)
        {
            //creates a Paint object
            Paint yellowPaint = new Paint();
            //makes it yellow
            yellowPaint.setColor(Color.YELLOW);
            //sets the anti-aliasing for texts
            yellowPaint.setAntiAlias(true);
            yellowPaint.setTextSize(70);

            if (isFirstTime)
            {
                // lazily create the overlay bitmap and the canvas that draws into it
                textBitmap = Bitmap.createBitmap(mSurfaceWidth, mSurfaceHeight, Bitmap.Config.ARGB_8888);
                bitmapCanvas = new Canvas(textBitmap);
            }

            // draw the text into the overlay bitmap at the touched position
            bitmapCanvas.drawText("Test Text", touchX, touchY, yellowPaint);

            if (isFirstTime)
            {
                // register the bitmap with shadercam as a new texture uniform
                textureId = addTexture(textBitmap, "textBitmap");
                isFirstTime = false;
            }
            else
            {
                // re-upload the updated bitmap into the existing texture
                updateTexture(textureId, textBitmap);
            }

            // reset the sentinel so the text is only drawn once per touch
            touchX = 1000000000;
            touchY = 1000000000;
        }
    }

    /**
     * Stores the last touch point on the TextureView; on the next frame it is used as the
     * position at which the text is drawn into the overlay bitmap.
     * @param rawX raw x on screen
     * @param rawY raw y on screen
     */
    public void setTouchPoint(float rawX, float rawY)
    {
        this.touchX = rawX;
        this.touchY = rawY;
    }
}
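For context, the renderer's setTouchPoint() is called from a touch listener on the preview, along these lines (a sketch; previewTextureView and mRenderer are placeholder names for my actual fields):

// Inside the activity/fragment that owns the preview TextureView:
previewTextureView.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            // forward the touch to the renderer, which draws text there on the next frame
            mRenderer.setTouchPoint(event.getRawX(), event.getRawY());
        }
        return true;
    }
});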

Please help, it's been a month and I am still stuck on the same app :( and I have no idea about OpenGL. I've spent two weeks trying to use this project in my app, and nothing is being rendered onto the video.

Thanks in advance!

1 Answer
家丑人穷心不美
answered 2019-04-14 06:09

Here's a rough outline that should work, but it's quite a bit of work:

  1. Set up an android.media.MediaRecorder for recording the video and audio (a rough code sketch of these steps follows the list).
  2. Get a Surface from the MediaRecorder and create an EGL window surface (EGLSurface) from it via eglCreateWindowSurface (https://developer.android.com/reference/android/opengl/EGL14.html#eglCreateWindowSurface(android.opengl.EGLDisplay, android.opengl.EGLConfig, java.lang.Object, int[], int)); you'll need a whole OpenGL context and setup for this. Then you'll need to set that EGLSurface as your render target.
  3. Create a SurfaceTexture within that GL context.
  4. Configure the camera to send data to that SurfaceTexture.
  5. Start the MediaRecorder.
  6. On each frame received from the camera, convert the drawing done by the user to a GL texture, and composite the camera texture and the user drawing.
  7. Finally, call eglSwapBuffers to send the composited frame to the video recorder.
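To make the outline more concrete, here is a rough sketch of steps 1-3 and 5-7. This is a sketch only: the EGL display/context/config are assumed to have been created elsewhere, step 4's camera2 session setup is omitted, and RecordingPipeline and drawComposite() are placeholders for your own code, not library APIs:

import android.graphics.SurfaceTexture;
import android.media.MediaRecorder;
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.view.Surface;

import java.io.IOException;

class RecordingPipeline {
    private MediaRecorder recorder;
    private EGLDisplay eglDisplay;       // assumed initialized elsewhere (eglGetDisplay/eglInitialize)
    private EGLContext eglContext;       // assumed created elsewhere (eglCreateContext)
    private EGLConfig eglConfig;         // a config with EGL_RECORDABLE_ANDROID is typically required
    private EGLSurface recorderSurface;  // EGL window surface wrapping MediaRecorder's input Surface
    private SurfaceTexture cameraTexture;
    private int cameraTexId;

    // Step 1: configure MediaRecorder so it records video+audio and exposes an input Surface.
    void setUpRecorder(String outputPath, int width, int height) throws IOException {
        recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);   // we render frames into its Surface
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setOutputFile(outputPath);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        recorder.setVideoSize(width, height);
        recorder.setVideoFrameRate(30);
        recorder.setVideoEncodingBitRate(8_000_000);
        recorder.prepare();
    }

    // Step 2: wrap the recorder's Surface in an EGL window surface and make it the render target.
    void setUpRecorderSurface() {
        Surface surface = recorder.getSurface();                        // only valid after prepare()
        int[] attribs = { EGL14.EGL_NONE };
        recorderSurface = EGL14.eglCreateWindowSurface(eglDisplay, eglConfig, surface, attribs, 0);
        EGL14.eglMakeCurrent(eglDisplay, recorderSurface, recorderSurface, eglContext);
    }

    // Step 3: create the SurfaceTexture the camera will feed; new Surface(cameraTexture)
    // is what goes into the camera2 capture session (step 4, not shown here).
    void setUpCameraTexture() {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        cameraTexId = tex[0];
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, cameraTexId);
        cameraTexture = new SurfaceTexture(cameraTexId);
    }

    // Step 5: start recording once everything is wired up.
    void startRecording() {
        recorder.start();
    }

    // Steps 6 and 7: run for every camera frame (e.g. from SurfaceTexture.OnFrameAvailableListener).
    void renderFrame(int drawingTexId) {
        cameraTexture.updateTexImage();                                 // latch the latest camera frame
        drawComposite(cameraTexId, drawingTexId);                       // placeholder for your compositing draw calls
        EGL14.eglSwapBuffers(eglDisplay, recorderSurface);              // push the composited frame to MediaRecorder
    }

    // Placeholder: draw the external (OES) camera texture full-screen, then blend the
    // user-drawing texture over it with GL_BLEND enabled.
    private void drawComposite(int cameraTexId, int drawingTexId) {
    }
}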