MLKit Firebase android - How to convert FirebaseVisionFace to Bitmap

Published 2019-06-16 06:51

Question:

I have integrated ML Kit face detection into my Android application. I referred to the URL below:

https://firebase.google.com/docs/ml-kit/android/detect-faces

The code for my face detection processor class is:

import android.util.Log;

import androidx.annotation.NonNull;

import com.google.android.gms.tasks.Task;
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.face.FirebaseVisionFace;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetector;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions;

import java.io.IOException;
import java.util.List;

/** Face Detector Demo. */
public class FaceDetectionProcessor extends VisionProcessorBase<List<FirebaseVisionFace>> {

  private static final String TAG = "FaceDetectionProcessor";

  private final FirebaseVisionFaceDetector detector;

  public FaceDetectionProcessor() {

    FirebaseVisionFaceDetectorOptions options =
        new FirebaseVisionFaceDetectorOptions.Builder()
            .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
            .setLandmarkType(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
            .setTrackingEnabled(true)
            .build();

    detector = FirebaseVision.getInstance().getVisionFaceDetector(options);
  }

  @Override
  public void stop() {
    try {
      detector.close();
    } catch (IOException e) {
      Log.e(TAG, "Exception thrown while trying to close Face Detector: " + e);
    }
  }

  @Override
  protected Task<List<FirebaseVisionFace>> detectInImage(FirebaseVisionImage image) {
    return detector.detectInImage(image);
  }

  @Override
  protected void onSuccess(
      @NonNull List<FirebaseVisionFace> faces,
      @NonNull FrameMetadata frameMetadata,
      @NonNull GraphicOverlay graphicOverlay) {
    graphicOverlay.clear();

    for (int i = 0; i < faces.size(); ++i) {
      FirebaseVisionFace face = faces.get(i);
      FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
      graphicOverlay.add(faceGraphic);
      faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
    }
  }

  @Override
  protected void onFailure(@NonNull Exception e) {
    Log.e(TAG, "Face detection failed " + e);
  }
}

In the onSuccess listener above, we receive a list of FirebaseVisionFace objects, each of which carries the bounding box of a detected face.
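
For reference, the bounding box is exposed as an android.graphics.Rect:

Rect bounds = face.getBoundingBox(); // position of the face within the input image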

I want to know how to convert these FirebaseVisionFace objects into a Bitmap. I want to extract the face image and show it in an ImageView. Can anyone please help me? Thanks in advance.

Note: I have downloaded the ML Kit for Android sample source code from the URL below:

https://github.com/firebase/quickstart-android/tree/master/mlkit

Answer 1:

You created the FirebaseVisionImage from a bitmap. After detection returns, each FirebaseVisionFace describes a bounding box as a Rect that you can use to extract the detected face from the original bitmap, e.g. using Bitmap.createBitmap().
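
A minimal sketch of that idea, assuming the bitmap you built the FirebaseVisionImage from is still available in onSuccess (sourceBitmap is my placeholder name); the box is clamped because getBoundingBox() can extend past the image edges:

import android.graphics.Bitmap;
import android.graphics.Rect;

// Crop one detected face out of the bitmap the detector ran on.
// 'sourceBitmap' is a placeholder for the bitmap passed to
// FirebaseVisionImage.fromBitmap(); clamp the box so createBitmap()
// is never asked for pixels outside the image.
public static Bitmap cropFace(Bitmap sourceBitmap, Rect box) {
    int left = Math.max(box.left, 0);
    int top = Math.max(box.top, 0);
    int width = Math.min(box.width(), sourceBitmap.getWidth() - left);
    int height = Math.min(box.height(), sourceBitmap.getHeight() - top);
    return Bitmap.createBitmap(sourceBitmap, left, top, width, height);
}

The resulting Bitmap can be shown directly with imageView.setImageBitmap(...).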



Answer 2:

This may help you if you're trying to use ML Kit to detect faces and OpenCV to perform image processing on the detected face. Note that in this particular example, you need the original camera bitmap inside onSuccess.

I haven't found a way to do this without a bitmap and, truthfully, I'm still searching.

@Override
protected void onSuccess(@NonNull List<FirebaseVisionFace> faces, @NonNull FrameMetadata frameMetadata, @NonNull GraphicOverlay graphicOverlay) {
  graphicOverlay.clear();

  for (int i = 0; i < faces.size(); ++i) {
    FirebaseVisionFace face = faces.get(i);

    /* The original implementation keeps the original image around.
       originalCameraImage represents the live camera preview. */

    // Create Mat representing the live camera itself
    Mat rgba = new Mat(originalCameraImage.getHeight(), originalCameraImage.getWidth(), CvType.CV_8UC4);

    // The box with a Imgproc affect made by OpenCV
    Mat rgbaInnerWindow;
    Mat mIntermediateMat = new Mat();

    // Make box for Imgproc the size of the detected face
    int rows = (int) face.getBoundingBox().height();
    int cols = (int) face.getBoundingBox().width();

    int left = cols / 8;
    int top = rows / 8;

    int width = cols * 3 / 4;
    int height = rows * 3 / 4;

    // Create a new bitmap based on live preview
    // which will show the actual image processing
    Bitmap newBitmap = Bitmap.createBitmap(originalCameraImage);

    // Bit map to Mat
    Utils.bitmapToMat(newBitmap, rgba);

    // Imgproc stuff. In this example I'm doing edge detection.
    rgbaInnerWindow = rgba.submat(top, top + height, left, left + width);
    Imgproc.Canny(rgbaInnerWindow, mIntermediateMat, 80, 90);
    Imgproc.cvtColor(mIntermediateMat, rgbaInnerWindow, Imgproc.COLOR_GRAY2BGRA, 4);
    rgbaInnerWindow.release();

    // After processing image, back to bitmap
    Utils.matToBitmap(rgba, newBitmap);

    // Load the bitmap
    CameraImageGraphic imageGraphic = new CameraImageGraphic(graphicOverlay, newBitmap);
    graphicOverlay.add(imageGraphic);

    FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
    graphicOverlay.add(faceGraphic);

    // I can't speak for this
    faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
  }

}


Answer 3:

Since the accepted answer was not specific enough, I will try to explain what I did.

1.- Create an ImageView in LivePreviewActivity like this:

private ImageView imageViewTest;

2.- Declare it in the Activity's XML layout and link it in the Java file. I placed it right above the views the sample code already had, so it is visible on top of the camera feed.

3.- Where the sample creates a FaceDetectionProcessor, pass in the ImageView instance so the processor can set the cropped image on it later (see the wiring sketch after the next snippet).

FaceDetectionProcessor processor = new FaceDetectionProcessor(imageViewTest);
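
Tying steps 1-3 together, a minimal sketch, assuming the ImageView is declared in the Activity's layout XML with the id imageViewTest (the id is my assumption):

// Inside LivePreviewActivity.onCreate(), after setContentView(...).
// R.id.imageViewTest is a hypothetical id from the layout XML.
imageViewTest = findViewById(R.id.imageViewTest);
FaceDetectionProcessor processor = new FaceDetectionProcessor(imageViewTest);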

4.- Change the constructor of FaceDetectionProcessor so it receives an ImageView as a parameter, and create a global field that saves that instance.

private final ImageView imageView;

public FaceDetectionProcessor(ImageView imageView) {
    FirebaseVisionFaceDetectorOptions options =
            new FirebaseVisionFaceDetectorOptions.Builder()
                    .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                    .setTrackingEnabled(true)
                    .build();

    detector = FirebaseVision.getInstance().getVisionFaceDetector(options);
    this.imageView = imageView;
}

5.- I created a crop method that takes a bitmap and a Rect to focus only on the face. So go ahead and do the same.

public static Bitmap cropBitmap(Bitmap bitmap, Rect rect) {
    int w = rect.right - rect.left;
    int h = rect.bottom - rect.top;
    Bitmap ret = Bitmap.createBitmap(w, h, bitmap.getConfig());
    Canvas canvas = new Canvas(ret);
    canvas.drawBitmap(bitmap, -rect.left, -rect.top, null);
    return ret;
}

6.- Modify the detectInImage method to keep a reference to the bitmap being detected in a global field.

private Bitmap imageBitmap;

@Override
protected Task<List<FirebaseVisionFace>> detectInImage(FirebaseVisionImage image) {
    imageBitmap = image.getBitmapForDebugging();
    return detector.detectInImage(image);
}

7.- Finally, modify the onSuccess method to call the cropping method and assign the result to the imageView.

@Override
protected void onSuccess(
        @NonNull List<FirebaseVisionFace> faces,
        @NonNull FrameMetadata frameMetadata,
        @NonNull GraphicOverlay graphicOverlay) {
    graphicOverlay.clear();
    for (int i = 0; i < faces.size(); ++i) {
        FirebaseVisionFace face = faces.get(i);
        FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
        graphicOverlay.add(faceGraphic);
        faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
        croppedImage = cropBitmap(imageBitmap, face.getBoundingBox());
    }
    imageView.setImageBitmap(croppedImage);
}
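
Note that you will also need a Bitmap croppedImage field in the processor class for the code above to compile. Two caveats: cropBitmap assumes the Rect lies entirely inside the bitmap, and near the frame edges getBoundingBox() can extend past it, so you may want to clamp the box first (as in the sketch under Answer 1); also, when several faces are detected, only the last face's crop ends up in the ImageView.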


Answer 4:

Actually, you can just read the ByteBuffer, get its backing byte array, and write it wherever you want with an OutputStream. And of course you can still get the face region from getBoundingBox().
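
A minimal sketch of that idea, assuming the camera delivers NV21 frames (the default for the legacy Camera API the quickstart uses) and that you still have the frame's ByteBuffer and dimensions; all names here are placeholders:

import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

// Copy the raw bytes out of the frame buffer, wrap them in a YuvImage,
// and compress just the face's bounding box region straight to a JPEG file.
public static void saveFaceRegion(ByteBuffer frameBuffer, int frameWidth, int frameHeight,
                                  Rect faceBox, String outputPath) throws IOException {
    byte[] bytes = new byte[frameBuffer.remaining()];
    frameBuffer.get(bytes); // note: advances the buffer's position
    YuvImage yuv = new YuvImage(bytes, ImageFormat.NV21, frameWidth, frameHeight, null);
    try (FileOutputStream out = new FileOutputStream(outputPath)) {
        yuv.compressToJpeg(faceBox, 90, out); // crop to the detected face, quality 90
    }
}

As with the other answers, clamp faceBox to the frame bounds first; compressToJpeg rejects rectangles that extend outside the image.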