Render Bitmap frames to Surface for encoding

Posted 2019-01-20 03:29

Question:

My goal is to take in a M4V video file, decode a segment of the video as PNG frames, modify these frames, and re-encode the trimmed video (also to M4V).

The workflow is like so: [Input Video] -> Export Frames -> Modify Frames -> Encode Frames -> [Output Video].

For the decode process, I have been referencing the bigflake examples. Using the ExtractMpegFramesTest example code, I was able to generate Bitmap frames from an .m4v file and export them as PNG files.

Now I am attempting the re-encoding process, using the EncodeAndMuxTest example in an attempt to create another set of classes for encoding.

The issue I am running into is that the example code generates its raw frames directly in OpenGL, whereas I have a series of Bitmaps that I want to render to the CodecInputSurface object for encoding. It is pretty much the reverse of what the decoding process does.

The majority of the example code is fine; it seems I just need to modify generateSurfaceFrame() to render the Bitmap to the Surface with OpenGL.

Here is the code that I have thus far:

// Member variables (see EncodeAndMuxTest example)
private MediaCodec encoder;
private CodecInputSurface inputSurface;
private MediaMuxer muxer;
private int trackIndex;
private boolean hasMuxerStarted;
private MediaCodec.BufferInfo bufferInfo;

// This is called for each frame to be rendered into the video file
private void encodeFrame(Bitmap bitmap)
{
    int textureId = 0;

    try
    {
        textureId = loadTexture(bitmap);

        // render the texture here?
    }
    finally
    {
        unloadTexture(textureId);
    }
}

// Loads a texture into OpenGL
private int loadTexture(Bitmap bitmap)
{
    final int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);

    int textureWidth = bitmap.getWidth();
    int textureHeight = bitmap.getHeight();

    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S,
            GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T,
            GLES20.GL_CLAMP_TO_EDGE);

    return textures[0];
}

// Unloads a texture from OpenGL
private void unloadTexture(int textureId)
{
    final int[] textures = new int[1];
    textures[0] = textureId;

    GLES20.glDeleteTextures(1, textures, 0);
}
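
For reference, the per-frame driving loop I am adapting from EncodeAndMuxTest looks roughly like this (simplified sketch; drainEncoder() and CodecInputSurface come from the bigflake example, and the fixed 30 fps timestamps are my own assumption):

// Sketch of the driving loop, following the EncodeAndMuxTest pattern
private void encodeFrames(List<Bitmap> frames)
{
    final long frameDurationNs = 1000000000L / 30; // 30 fps

    for (int i = 0; i < frames.size(); i++)
    {
        // Pull any pending output from the encoder into the muxer first
        drainEncoder(false);

        // Render this bitmap onto the encoder's input surface (the missing piece)
        encodeFrame(frames.get(i));

        // Stamp the frame and submit it to the encoder
        inputSurface.setPresentationTime(i * frameDurationNs);
        inputSurface.swapBuffers();
    }

    // Signal end-of-stream and drain the remaining output
    drainEncoder(true);
}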

I feel like I should be able to use the STextureRender class from the ExtractMpegFramesTest example to achieve something similar, but it's just not clicking for me.

Another concern is performance; I really want the encoding to be efficient. I will be encoding 90-450 frames of video (3-15 seconds at 30 fps), so hopefully this should only take a few seconds.

Answer 1:

You can try the Intel INDE Media Pack; it allows you to modify frames, cut segments, join files, and much more. There are several sample effects for frame modification (color adjustments, text overlays, and so on), and you can easily modify or add new effects. It has a nice set of samples and tutorials on how to build and run the app: https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials-running-samples

Frame modifications are based on GL shaders; here is the fragment shader for the Sepia effect, for example:

@Override
protected String getFragmentShader() {
    return "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "varying vec2 vTextureCoord;\n" +
            "uniform mat3 uWeightsMatrix;\n" +
            "uniform samplerExternalOES sTexture;\n" +
            "void main() {\n" +
            "  vec4 color = texture2D(sTexture, vTextureCoord);\n" +
            "  vec3 color_new = min(uWeightsMatrix * color.rgb, 1.0);\n" +
            "  gl_FragColor = vec4(color_new.rgb, color.a);\n" +
            "}\n";

}

where uWeightsMatrix is passed to the shader via glGetUniformLocation and glUniformMatrix3fv, with the weights defined as:

protected float[] getWeights() {
    return new float[]{
            805.0f / 2048.0f, 715.0f / 2048.0f, 557.0f / 2048.0f,
            1575.0f / 2048.0f, 1405.0f / 2048.0f, 1097.0f / 2048.0f,
            387.0f / 2048.0f, 344.0f / 2048.0f, 268.0f / 2048.0f
    };
}
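
For illustration, uploading that matrix to the uWeightsMatrix uniform would look roughly like this (a minimal sketch; programHandle stands for the already-linked shader program and is not part of the snippets above):

// Sketch: pass the 3x3 weights matrix to the fragment shader
int weightsLoc = GLES20.glGetUniformLocation(programHandle, "uWeightsMatrix");
GLES20.glUseProgram(programHandle);
GLES20.glUniformMatrix3fv(weightsLoc, 1, false, getWeights(), 0);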


Answer 2:

I was able to render bitmap frames to a surface for encoding. I used MediaCodec + MediaMuxer to encode the bitmap frames via the codec's input Surface, rendering the bitmaps with OpenGL.

It looks like all you are missing are the OpenGL commands that actually render the texture.

To fix this in my case, I added the additional OpenGL calls to draw the texture using a simple vertex and fragment shader.

See this project and its util class for more details on rendering a texture with OpenGL: https://github.com/rsri/Pic2Fro/blob/b4fe69b44343dab2515c3fd6e769f3370bf31312/app/src/main/java/com/pic2fro/pic2fro/util/Util.java

Calling renderTexture(..) with the texture handle and the appropriate width and height after the GLUtils.texImage2D(..) call in your snippet will fix the bitmap rendering issues.
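
For illustration, the kind of draw such a renderTexture(..) helper performs looks roughly like this (a minimal sketch, not the actual Util code; program, the attribute/uniform locations, and the vertex/texcoord buffers are assumed to be created during GL setup):

// Sketch: draw a full-screen textured quad into the encoder's input surface
GLES20.glViewport(0, 0, width, height);
GLES20.glUseProgram(program);

// Bind the bitmap texture to texture unit 0
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
GLES20.glUniform1i(uTextureLoc, 0);

// Point the shader at the quad's positions and texture coordinates
GLES20.glEnableVertexAttribArray(aPositionLoc);
GLES20.glVertexAttribPointer(aPositionLoc, 2, GLES20.GL_FLOAT, false, 0, vertexBuffer);
GLES20.glEnableVertexAttribArray(aTexCoordLoc);
GLES20.glVertexAttribPointer(aTexCoordLoc, 2, GLES20.GL_FLOAT, false, 0, texCoordBuffer);

// Draw the quad; swapping buffers on the input surface afterwards
// hands the rendered frame to the encoder
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);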

See also my related answer here https://stackoverflow.com/a/49331295/7602598



Answer 3:

Note that the original STextureRender renders an external texture coming from a SurfaceTexture. If you would like it to render your own texture (created from a bitmap), you have to make the following changes (a short sketch follows below):

  1. Change the texture target parameter from GLES11Ext.GL_TEXTURE_EXTERNAL_OES to GLES20.GL_TEXTURE_2D.

  2. Change this line in the shader definition, "uniform samplerExternalOES sTexture;\n", to "uniform sampler2D sTexture;\n".

  3. Remove st.getTransformMatrix(mSTMatrix); from drawFrame().

This solution worked for me.
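
For reference, a sketch of what the modified STextureRender pieces might look like after those three changes (field names follow the bigflake ExtractMpegFramesTest code; treat this as an outline rather than a drop-in replacement):

// The "#extension GL_OES_EGL_image_external : require" line is no longer needed
private static final String FRAGMENT_SHADER_2D =
        "precision mediump float;\n" +
        "varying vec2 vTextureCoord;\n" +
        "uniform sampler2D sTexture;\n" +   // was samplerExternalOES
        "void main() {\n" +
        "    gl_FragColor = texture2D(sTexture, vTextureCoord);\n" +
        "}\n";

public void drawFrame(int textureId) {      // no SurfaceTexture parameter needed
    GLES20.glUseProgram(mProgram);
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    // Bind as GL_TEXTURE_2D instead of GL_TEXTURE_EXTERNAL_OES
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    // ... set up vertex attributes exactly as in the original drawFrame ...
    // Use an identity matrix where the original called st.getTransformMatrix(mSTMatrix)
    Matrix.setIdentityM(mSTMatrix, 0);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}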