Android: Dynamically Blur Surface with Video

Published 2019-03-30 04:37

Question:

I am building an Android application where an ExoPlayer plays a video onto the surface of a SurfaceView, and I am investigating whether it is possible to dynamically blur the playing video.

Blurring techniques that first render the view into a bitmap will not work, because the surface part of a SurfaceView does not appear when the view hierarchy is drawn to a bitmap.

Older versions of Android had built-in blurring effects for surfaces and views (e.g. Surface.FX_SURFACE_BLUR), but these appear to have been removed from newer APIs.

Can anyone share some insight on how a surface can be dynamically blurred? Thank you.

Answer 1:

There are lots of questions on StackOverflow covering bits and pieces of what needs to be done. I'll go over the method I used; hopefully it will be useful to somebody.

If this were a static blur of a single video frame, it would be enough to play the video in a TextureView, grab a frame with .getBitmap(), and blur the resulting Bitmap with a tool such as RenderScript. However, .getBitmap() runs on the main UI thread, so it cannot keep up with (and visibly lags behind) the video whose frames it is copying.
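For reference, a minimal sketch of that static approach might look like the following. It assumes a TextureView named textureView that is already playing the video and a Context named context; the radius of 20f is an arbitrary illustrative choice.

// One-off blur of the current frame; too slow to repeat for every frame on the UI thread.
Bitmap frame = textureView.getBitmap();              // copy the current video frame

RenderScript rs = RenderScript.create(context);
Allocation input = Allocation.createFromBitmap(rs, frame);
Allocation output = Allocation.createTyped(rs, input.getType());

ScriptIntrinsicBlur blur = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
blur.setRadius(20f);                                 // radius must be in (0, 25]
blur.setInput(input);
blur.forEach(output);

output.copyTo(frame);                                // 'frame' now holds the blurred image
rs.destroy();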

To apply the blur to every frame, the best approach seems to be a GLSurfaceView with a custom renderer; a rough outline of that wiring is sketched below. I used the code available in VidEffects, pointed to from this answer, as a starting point.
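As a rough sketch of how such a renderer receives video frames (this is not the VidEffects code; the class VideoRenderer and its field names are made up for illustration, and the actual drawing of the textured quad with the blur shader is elided):

import android.graphics.SurfaceTexture;
import android.media.MediaPlayer;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.view.Surface;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class VideoRenderer implements GLSurfaceView.Renderer,
        SurfaceTexture.OnFrameAvailableListener {

    private final GLSurfaceView mGlSurfaceView;
    private final MediaPlayer mMediaPlayer;
    private SurfaceTexture mSurfaceTexture;
    private int mTextureId;

    public VideoRenderer(GLSurfaceView glSurfaceView, MediaPlayer mediaPlayer) {
        mGlSurfaceView = glSurfaceView;
        mMediaPlayer = mediaPlayer;
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // Create the external OES texture that the video decoder will render into
        int[] textures = new int[1];
        GLES20.glGenTextures(1, textures, 0);
        mTextureId = textures[0];
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTextureId);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        // Route the video frames into that texture via a SurfaceTexture
        mSurfaceTexture = new SurfaceTexture(mTextureId);
        mSurfaceTexture.setOnFrameAvailableListener(this);
        mMediaPlayer.setSurface(new Surface(mSurfaceTexture));
        mMediaPlayer.start();                    // assumes the MediaPlayer has already been prepared
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        mSurfaceTexture.updateTexImage();        // pull the newest video frame into the OES texture
        // ... draw a full-screen quad, sampling sTexture with the blur fragment shader ...
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }

    @Override
    public void onFrameAvailable(SurfaceTexture surfaceTexture) {
        // Only re-render when a new frame arrives (use RENDERMODE_WHEN_DIRTY on the GLSurfaceView)
        mGlSurfaceView.requestRender();
    }
}

VidEffects packages this plumbing (plus the quad drawing) in its VideoSurfaceView, which is what the code in the second answer uses.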

Blurs with large radii can be very computationally intensive. That is why I first tried performing the blur in two passes with separate fragment shaders (one blurring horizontally, and one blurring the result vertically). I actually ended up using only one fragment shader that applies a 7x7 Gaussian kernel. A very important thing to keep in mind, if your GLSurfaceView is large, is to call setFixedSize() on the GLSurfaceView's SurfaceHolder to make its resolution lower than that of the screen. The result does not look very pixelated, since it is blurred anyway, but the performance gain is very significant.

The blur I made managed to play at 24 fps on most devices, with setFixedSize() setting the resolution to 100x70.
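As a concrete illustration (the variable name mGlSurfaceView and the exact numbers are just placeholders), lowering the render resolution is a one-line call:

// Render the blurred video at a much lower resolution than the screen;
// the blur hides the pixelation, and the per-frame fragment-shader work drops dramatically.
mGlSurfaceView.getHolder().setFixedSize(100, 70);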



Answer 2:

In case anyone wants a single-pass fragment shader to complete the circle: the following code implements the ShaderInterface defined in the code available from VidEffects. I adapted it from this example on ShaderToy.com.

public class BlurEffect2 implements ShaderInterface {

    private final int mMaskSize;
    private final int mWidth;
    private final int mHeight;

    public BlurEffect2(int maskSize, int width, int height) {
        mMaskSize = maskSize;
        mWidth = width;
        mHeight = height;
    }

    @Override
    public String getShader(GLSurfaceView mGlSurfaceView) {

        float hStep = 1.0f / mWidth;
        float vStep = 1.0f / mHeight;

        return  "#extension GL_OES_EGL_image_external : require\n" +          
                "precision mediump float;\n" +
                //"in" attributes from our vertex shader
                "varying vec2 vTextureCoord;\n" +

                //declare uniforms
                "uniform samplerExternalOES sTexture;\n" +

                "float normpdf(in float x, in float sigma) {\n" +
                "    return 0.39894 * exp(-0.5 * x * x / (sigma * sigma)) / sigma;\n" +
                "}\n" +


                "void main() {\n" +
                "    vec3 c = texture2D(sTexture, vTextureCoord).rgb;\n" +

                //declare stuff
                "    const int mSize = " + mMaskSize + ";\n" +
                "    const int kSize = (mSize - 1) / 2;\n" +
                "    float kernel[ mSize];\n" +
                "    vec3 final_colour = vec3(0.0);\n" +

                //create the 1-D kernel
                "    float sigma = 7.0;\n" +
                "    float Z = 0.0;\n" +
                "    for (int j = 0; j <= kSize; ++j) {\n" +
                "        kernel[kSize + j] = kernel[kSize - j] = normpdf(float(j), sigma);\n" +
                "    }\n" +

                //get the normalization factor (as the gaussian has been clamped)
                "    for (int j = 0; j < mSize; ++j) {\n" +
                "        Z += kernel[j];\n" +
                "    }\n" +

                //read out the texels
                "    for (int i = -kSize; i <= kSize; ++i) {\n" +
                "        for (int j = -kSize; j <= kSize; ++j) {\n" +
                "            final_colour += kernel[kSize + j] * kernel[kSize + i] * texture2D(sTexture, (vTextureCoord.xy + vec2(float(i)*" + hStep + ", float(j)*" + vStep + "))).rgb;\n" +
                "        }\n" +
                "    }\n" +

                "    gl_FragColor = vec4(final_colour / (Z * Z), 1.0);\n" +
                "}";
    }

}

Just as Michael pointed out above, you can improve performance by reducing the size of the surface with setFixedSize().

@BindView(R.id.video_snap)
VideoSurfaceView mVideoView;

@Override
public void showVideo(String cachedPath) {
    mImageView.setVisibility(View.GONE);
    mVideoView.setVisibility(View.VISIBLE);

    //Get width and height of the video
    final MediaMetadataRetriever mRetriever = new MediaMetadataRetriever();
    mRetriever.setDataSource(cachedPath);
    int width = Integer.parseInt(mRetriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH));
    int height = Integer.parseInt(mRetriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT));
    mRetriever.release(); //free the retriever once the metadata has been read

    //divide the width and height by 10
    width /= 10;
    height /= 10;

    //set the size of the surface to play on to 1/10 the width and height
    mVideoView.getHolder().setFixedSize(width, height);

    //Set up the media player
    mMediaPlayer = new MediaPlayer();
    mMediaPlayer.setLooping(true);

    try {
        mMediaPlayer.setDataSource(cachedPath);
    } catch (Exception e) {
        Timber.e(e, e.getMessage());
    }

    //init and start the video player with the mask size set to 17
    mVideoView.init(mMediaPlayer, new BlurEffect2(17, width, height));
    mVideoView.onResume();
}