How to get the current frame (as a Bitmap) for a face detected in the Tracker events

Posted 2020-08-26 03:29

Question:

I have the standard com.google.android.gms.vision.Tracker example running successfully on my Android device, and now I need to postprocess the image to find the iris of the face that was reported in the Tracker's event methods.

So, how do I get the Bitmap frame that exactly matches the com.google.android.gms.vision.face.Face I received in the Tracker events? This also means that the resulting bitmap should match the camera resolution, not the screen resolution.

One poor alternative is to call takePicture on my CameraSource every few milliseconds and run each picture through a standalone FaceDetector. Although this works, the video stream freezes during takePicture, and I get lots of GC_FOR_ALLOC messages because of the memory churn from the single-bitmap face detector.

Answer 1:

You have to create your own Detector that wraps the google.vision face detector and delegates to it. In your MainActivity or FaceTrackerActivity (in the Google tracking sample), define the wrapper class as follows:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.util.SparseArray;

import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;

import java.io.ByteArrayOutputStream;

class MyFaceDetector extends Detector<Face> {
    private final Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        // Despite its name, getGrayscaleImageData() typically holds the camera's
        // full NV21 buffer for camera frames, so the decoded JPEG comes out in color.
        YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(), ImageFormat.NV21,
                frame.getMetadata().getWidth(), frame.getMetadata().getHeight(), null);
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(0, 0, frame.getMetadata().getWidth(),
                frame.getMetadata().getHeight()), 100, byteArrayOutputStream);
        byte[] jpegArray = byteArrayOutputStream.toByteArray();
        Bitmap tempBitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

        // tempBitmap is a Bitmap version of the frame currently captured by your
        // CameraSource in real time, at the camera resolution. Add your own
        // postprocessing of tempBitmap here.

        return mDelegate.detect(frame);
    }

    @Override
    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
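
Since the original question is about iris postprocessing, a natural next step inside detect() is to crop tempBitmap to the bounds of a detected face before analyzing it. A minimal sketch, assuming you first capture the delegate's result (SparseArray<Face> faces = mDelegate.detect(frame);) and return it at the end; the helper name cropToFirstFace is hypothetical and not part of the sample:

// Hypothetical helper: crop the frame bitmap to the first detected face so
// the iris search only has to scan the face region. Bounds are clamped so
// Bitmap.createBitmap never receives an out-of-range rectangle.
private Bitmap cropToFirstFace(SparseArray<Face> faces, Bitmap frameBitmap) {
    if (faces.size() == 0) {
        return null; // nothing detected in this frame
    }
    Face face = faces.valueAt(0);
    int left = Math.max(0, (int) face.getPosition().x);
    int top = Math.max(0, (int) face.getPosition().y);
    int width = Math.min((int) face.getWidth(), frameBitmap.getWidth() - left);
    int height = Math.min((int) face.getHeight(), frameBitmap.getHeight() - top);
    if (width <= 0 || height <= 0) {
        return null; // face box lies outside the bitmap
    }
    return Bitmap.createBitmap(frameBitmap, left, top, width, height);
}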

Then wire your own detector into the CameraSource by modifying your createCameraSource method as follows:

private void createCameraSource() {

    Context context = getApplicationContext();

    // You can use your own settings for your detector
    FaceDetector detector = new FaceDetector.Builder(context)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .setProminentFaceOnly(true)
            .build();

    // Wrap the google.vision detector in your own version
    MyFaceDetector myFaceDetector = new MyFaceDetector(detector);

    // You can use your own processor
    myFaceDetector.setProcessor(
            new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                    .build());

    if (!myFaceDetector.isOperational()) {
        Log.w(TAG, "Face detector dependencies are not yet available.");
    }

    // You can use your own settings for CameraSource
    mCameraSource = new CameraSource.Builder(context, myFaceDetector)
            .setRequestedPreviewSize(640, 480)
            .setFacing(CameraSource.CAMERA_FACING_FRONT)
            .setRequestedFps(30.0f)
            .build();
}
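
Finally, the CameraSource is started the same way as in the unmodified sample. A sketch of that step, assuming the sample's mPreview (CameraSourcePreview) and mGraphicOverlay (GraphicOverlay) fields are in place:

// Start the camera preview; frames now flow through myFaceDetector.
// Release the source if the camera cannot be opened.
private void startCameraSource() {
    if (mCameraSource != null) {
        try {
            mPreview.start(mCameraSource, mGraphicOverlay);
        } catch (IOException e) {
            Log.e(TAG, "Unable to start camera source.", e);
            mCameraSource.release();
            mCameraSource = null;
        }
    }
}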