Using Camera2 API with ImageReader

Published 2019-01-19 19:56

Question:

I'm trying to capture image data using the Camera2 API on a Galaxy S4, with an ImageReader as the surface provider. I have tried both ImageFormat.YV12 and ImageFormat.YUV_420_888 as the image format, with the same results.

The setup seems fine, and I get an Image from the ImageReader via acquireNextImage(). The Image has 3 planes. The buffers are the expected sizes: Width*Height for the Y plane and (Width*Height)/4 for each of the other two planes.

The issue is that I'm not getting the data properly, in two ways. The first is that the Y plane data arrives mirror-imaged. This can be dealt with, though it is strange, so I am curious whether it is expected.

The worse issue is that the other two planes don't seem to be delivering data correctly at all. For instance, with an image size of 640x480, which results in U and V buffer sizes of 76800 bytes, only the first 320 bytes of the buffers are non-zero values. This number varies and does not seem to follow a set ratio between different image sizes, but does seem to be consistent between images for each size.

I wonder if there is something that I am missing in using this API. Code is below.

public class OnboardCamera {
  private final String TAG = "OnboardCamera";

  int mWidth = 1280;
  int mHeight = 720;
  int mYSize = mWidth*mHeight;
  int mUVSize = mYSize/4;
  int mFrameSize = mYSize+(mUVSize*2); 

  //handler for the camera
  private HandlerThread mCameraHandlerThread;
  private Handler mCameraHandler;

  //the size of the ImageReader determines the output from the camera.
  private ImageReader mImageReader = ImageReader.newInstance(mWidth, mHeight, ImageFormat.YV12, 30);

  private Surface mCameraRecieverSurface = mImageReader.getSurface();

  private byte[] tempYbuffer = new byte[mYSize];
  private byte[] tempUbuffer = new byte[mUVSize];
  private byte[] tempVbuffer = new byte[mUVSize];

  ImageReader.OnImageAvailableListener mImageAvailListener = new ImageReader.OnImageAvailableListener() {
      @Override
      public void onImageAvailable(ImageReader reader) {
          //when a buffer is available from the camera
          //get the image
          Image image = reader.acquireNextImage();
          Image.Plane[] planes = image.getPlanes();

          //copy it into a byte[]
          byte[] outFrame = new byte[mFrameSize];
          int outFrameNextIndex = 0;


          //Y plane
          ByteBuffer yByteBuf = planes[0].getBuffer();
          yByteBuf.get(tempYbuffer, 0, tempYbuffer.length);

          //chroma planes
          ByteBuffer vByteBuf = planes[1].getBuffer();
          vByteBuf.get(tempVbuffer);

          ByteBuffer uByteBuf = planes[2].getBuffer();
          uByteBuf.get(tempUbuffer);

          //free the Image
          image.close();
      }
  };


  OnboardCamera() {
      mCameraHandlerThread = new HandlerThread("mCameraHandlerThread");
      mCameraHandlerThread.start();
      mCameraHandler = new Handler(mCameraHandlerThread.getLooper());

      //register the listener once the camera handler exists
      mImageReader.setOnImageAvailableListener(mImageAvailListener, mCameraHandler);
  }




  @Override
  public boolean startProducing() {
      CameraManager cm = (CameraManager) Ten8Application.getAppContext().getSystemService(Context.CAMERA_SERVICE);
      try {
          String[] cameraList = cm.getCameraIdList();
          for (String cd: cameraList) {
              //get camera characteristics
              CameraCharacteristics mCameraCharacteristics = cm.getCameraCharacteristics(cd);

              //check if the camera is in the back - if not, continue to next
              if (mCameraCharacteristics.get(CameraCharacteristics.LENS_FACING) != CameraCharacteristics.LENS_FACING_BACK) {
                  continue;
              }

              //get StreamConfigurationMap - supported image formats
              StreamConfigurationMap scm = mCameraCharacteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);

              android.util.Size[] sizes =  scm.getOutputSizes(ImageFormat.YV12);
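              //'sizes' lists the YV12 output sizes this camera supports; the
              //ImageReader's dimensions should match one of these entries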

              cm.openCamera(cd, mDeviceStateCallback, mCameraHandler);
          }

      } catch (CameraAccessException e) {
          e.printStackTrace();
          Log.e(TAG, "CameraAccessException detected", e);
      }
      return false;
  }

  private final CameraDevice.StateCallback mDeviceStateCallback = new CameraDevice.StateCallback() {
      @Override
      public void onOpened(CameraDevice camera) {
          //make list of surfaces to give to camera
          List<Surface> surfaceList = new ArrayList<>();
          surfaceList.add(mCameraRecieverSurface);

          try {
              camera.createCaptureSession(surfaceList, mCaptureSessionStateCallback, mCameraHandler); 
          } catch (CameraAccessException e) {
              Log.e(TAG, "createCaptureSession threw CameraAccessException.", e);
          }
      }

      @Override
      public void onDisconnected(CameraDevice camera) {

      }

      @Override
      public void onError(CameraDevice camera, int error) {

      }
  };

  private final CameraCaptureSession.StateCallback mCaptureSessionStateCallback = new CameraCaptureSession.StateCallback() {
      @Override
      public void onConfigured(CameraCaptureSession session) {
          try {
              CaptureRequest.Builder requestBuilder = session.getDevice().createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
              requestBuilder.addTarget(mCameraRecieverSurface);
              //set to null - image data will be produced but will not receive metadata
              session.setRepeatingRequest(requestBuilder.build(), null, mCameraHandler); 

          } catch (CameraAccessException e) {
              Log.e(TAG, "createCaptureSession threw CameraAccessException.", e);
          }


      }

      @Override
      public void onConfigureFailed(CameraCaptureSession session) {

      }
  };
}

Answer 1:

I had the same issue; I believe the problem was in Android API 21. I upgraded to API 23 and the same code worked fine. I also tested on API 22 and it worked there as well.
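If the code also has to run on devices that are still on API 21, a simple runtime guard along these lines can keep the Camera2/ImageReader path off the affected API level (a rough sketch, not from the original answer; camera is assumed to be an OnboardCamera instance, and the fallback branch is left to the reader):

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP_MR1) { //API 22
    //Camera2/ImageReader path, where the YUV output was reported to be correct
    camera.startProducing();
} else {
    //fall back to another capture path, e.g. the deprecated android.hardware.Camera API
}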



Answer 2:

Are you paying attention to the Image.Plane's rowStride and pixelStride parameters?

Due to hardware memory mapping constraints, the row stride is often larger than the width of the image, and the start of row y in the image is at position (y * rowStride) instead of (y * width) in the ByteBuffer for a given plane.

If that's the case, it's not surprising that after the first 320 bytes for a 640x480 image (1 row of subsampled chroma data), the U or V plane will be 0 for some time - there should be (rowStride - width) bytes of zeros or garbage, and then the next row of pixel data will start.

Note that if pixelStride is not 1, then you also have to skip bytes between pixel values; this is most often used when the underlying YCbCr buffer is actually semi-planar, not planar.
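For reference, here is a minimal sketch (not part of the original answer) of copying a single plane into a tightly packed byte[] while honoring both strides; copyPlane and its parameters are illustrative names, and the plane dimensions are assumed to be the full image size for the Y plane and half of each dimension for the chroma planes.

private static byte[] copyPlane(Image.Plane plane, int planeWidth, int planeHeight) {
    ByteBuffer buffer = plane.getBuffer();
    int rowStride = plane.getRowStride();
    int pixelStride = plane.getPixelStride();

    byte[] out = new byte[planeWidth * planeHeight];
    byte[] row = new byte[rowStride];
    int outIndex = 0;

    for (int y = 0; y < planeHeight; y++) {
        //the last row may be shorter than rowStride, so only read what is left
        int length = Math.min(rowStride, buffer.remaining());
        buffer.get(row, 0, length);

        if (pixelStride == 1) {
            //planar layout: the first planeWidth bytes of the row are the pixel data
            System.arraycopy(row, 0, out, outIndex, planeWidth);
            outIndex += planeWidth;
        } else {
            //semi-planar layout: skip pixelStride bytes between pixel values
            for (int x = 0; x < planeWidth; x++) {
                out[outIndex++] = row[x * pixelStride];
            }
        }
    }
    return out;
}

For the 640x480 example above, the Y plane would be copied with planeWidth = 640 and planeHeight = 480, and each chroma plane with 320 and 240.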