Image data from Android camera2 API flipped & squished

Question:

I am implementing an app that does real-time image processing on live images from the camera. It was working, with limitations, using the now-deprecated android.hardware.Camera; for improved flexibility and performance I'd like to move to the new android.hardware.camera2 API. However, I'm having trouble getting the raw image data for processing. This is on a Samsung Galaxy S5. (Unfortunately, I don't have another Lollipop device handy to test on other hardware.)

I got the overall framework working (with inspiration from the 'HdrViewFinder' and 'Camera2Basic' samples), and the live image is drawn on the screen via a SurfaceTexture and a GLSurfaceView. However, I also need to access the image data (grayscale only is fine, at least for now) for custom image processing. According to the documentation for StreamConfigurationMap.isOutputSupportedFor(Class), the recommended surface for obtaining image data directly would be ImageReader (correct?).

So I've set up my capture requests as:

// Request 640x480 frames from the preview SurfaceTexture.
mSurfaceTexture.setDefaultBufferSize(640, 480);
mSurface = new Surface(mSurfaceTexture);
...
// A second output stream to get at the image data directly.
mImageReader = ImageReader.newInstance(640, 480, format, 2);
...
List<Surface> surfaces = new ArrayList<Surface>();
surfaces.add(mSurface);
surfaces.add(mImageReader.getSurface());
...
mCameraDevice.createCaptureSession(surfaces, mCameraSessionListener, mCameraHandler);

and in the onImageAvailable callback for the ImageReader, I'm accessing the data as follows:

Image img = reader.acquireLatestImage();
ByteBuffer grayscalePixelsDirectByteBuffer = img.getPlanes()[0].getBuffer();
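
For reference, here's a fuller sketch of such a callback (one way to consume the buffer, not necessarily the only one): plane 0 of a YUV_420_888 or YV12 Image is the Y plane -- one byte of luma per pixel -- so it can serve directly as grayscale; the row-by-row copy accounts for the row stride possibly exceeding the image width.

private final ImageReader.OnImageAvailableListener mOnImageAvailable =
        new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image img = reader.acquireLatestImage();
        if (img == null) return;
        try {
            // Plane 0 of a YUV_420_888 (or YV12) Image is the Y plane:
            // one byte of luma per pixel, i.e. exactly the grayscale data.
            Image.Plane yPlane = img.getPlanes()[0];
            ByteBuffer buf = yPlane.getBuffer();
            int rowStride = yPlane.getRowStride();
            int w = img.getWidth();
            int h = img.getHeight();
            byte[] gray = new byte[w * h];
            // The row stride may be larger than the width (padding),
            // so copy the luma bytes row by row.
            for (int row = 0; row < h; row++) {
                buf.position(row * rowStride);
                buf.get(gray, row * w, w);
            }
            // ... hand 'gray' off to the image-processing code here ...
        } finally {
            // Always close the Image, or the ImageReader (maxImages=2)
            // will stall after two frames.
            img.close();
        }
    }
};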

But while (as mentioned) the live image preview is working, there's something wrong with the data I get here (or with the way I get it). According to

mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputFormats();

...the following ImageFormats should be supported: NV21, JPEG, YV12, YUV_420_888. I've tried all of them (plugged in for 'format' above); each supports the chosen resolution according to getOutputSizes(format) (see also the sketch after this list), but none gives the desired result:

  • NV21: ImageReader.newInstance throws java.lang.IllegalArgumentException: NV21 format is not supported
  • JPEG: This does work, but it doesn't seem to make sense for a real-time application to go through JPEG encode and decode for each frame...
  • YV12 and YUV_420_888: this is the weirdest result -- I can get the grayscale image, but it is flipped vertically (yes, flipped, not rotated!) and significantly squished (scaled significantly along the horizontal axis, but not vertically).
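
For completeness, this is roughly how the advertised formats and their sizes can be dumped for inspection (a sketch; TAG is just an assumed log tag):

StreamConfigurationMap map =
        mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
// Log every advertised output format together with its supported sizes.
for (int format : map.getOutputFormats()) {
    Size[] sizes = map.getOutputSizes(format);
    Log.d(TAG, "format 0x" + Integer.toHexString(format)
            + ": " + Arrays.toString(sizes));
}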

What am I missing here? What causes the image to be flipped and squished? How can I get a geometrically correct grayscale buffer? Should I be using a different type of surface (instead of ImageReader)?

Any hints appreciated.

Answer 1:

I found an explanation (though not necessarily a satisfactory solution): it turns out that the sensor array's aspect ratio is 16:9 (found via mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE)).

At least when requesting YV12/YUV_420_888, the streamer appears not to crop the image in any way; instead it scales the image non-uniformly to reach the requested frame size. The images have the correct proportions when requesting a 16:9 size (of which, unfortunately, there are only two higher-resolution ones). This seems a bit odd to me -- it doesn't appear to happen when requesting JPEG, with the equivalent old camera API functions, or for stills; and I'm not sure what the non-uniformly scaled frames would be good for.

I feel that this isn't a really satisfactory solution, because it means that you can't simply rely on the list of output sizes; instead you have to find the sensor size first, pick output sizes with the same aspect ratio, and then downsample the image yourself (as needed)...
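
A sketch of that workaround (the structure and the 0.01 tolerance are my own choices, not anything mandated by the API):

Rect active = mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
float sensorRatio = (float) active.width() / active.height();

StreamConfigurationMap map =
        mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size chosen = null;
for (Size s : map.getOutputSizes(ImageFormat.YUV_420_888)) {
    float ratio = (float) s.getWidth() / s.getHeight();
    // Keep only sizes matching the sensor's aspect ratio, and among those
    // prefer the smallest to keep per-frame processing cheap.
    if (Math.abs(ratio - sensorRatio) < 0.01f
            && (chosen == null || s.getWidth() < chosen.getWidth())) {
        chosen = s;
    }
}
// 'chosen' then goes into ImageReader.newInstance(...); downsample the
// frames yourself afterwards if you need a smaller image.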

I don't know if this is the expected outcome here or a 'feature' of the S5. Comments or suggestions still welcome.

Answer 2:

I had the same problem and found a solution. The first part of the problem is setting the size of the surface buffer:

    // Configure the default buffer size to be the size of the camera
    // preview we want.
    //texture.setDefaultBufferSize(width, height);

This is where the image gets skewed, not in the camera. You should comment it out, and then scale the image up when displaying it.

    // Convert the camera frame (YUV byte array) into RGBA pixels;
    // here this is done by a native helper.
    int[] rgba = new int[width * height];
    nativeLoader.convertImage(width, height, data, rgba);

    Bitmap bmp = mBitmap;
    bmp.setPixels(rgba, 0, width, 0, 0, width, height);

    Canvas canvas = mTextureView.lockCanvas();
    if (canvas != null) {
        // Draw the small source frame into a larger destination rectangle
        // to scale the image up on screen.
        canvas.drawBitmap(bmp, new Rect(0, 0, 320, 240),
                new Rect(0, 0, 640 * 2, 480 * 2), null);
        mTextureView.unlockCanvasAndPost(canvas);
    }

    image.close();

You can play around with the values to fine-tune the solution for your problem.
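
For example, instead of the hard-coded rectangles, the destination rectangle can be derived from the actual image and canvas dimensions (a sketch, reusing the width, height, bmp, and canvas variables from above):

    // Scale the full source frame up to fit the canvas while preserving
    // the frame's own aspect ratio.
    Rect src = new Rect(0, 0, width, height);
    float scale = Math.min((float) canvas.getWidth() / width,
            (float) canvas.getHeight() / height);
    Rect dst = new Rect(0, 0, Math.round(width * scale), Math.round(height * scale));
    canvas.drawBitmap(bmp, src, dst, null);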