VNFaceObservation BoundingBox Not Scaling In Portrait

Posted 2019-08-18 23:42

For reference, this question concerns Apple's Vision API. I am using Vision to detect faces in an image via a VNDetectFaceRectanglesRequest, which works in the sense that it determines the correct number of faces in the image and provides a boundingBox for each face.

My trouble is that because my UIImageView (which holds the UIImage in question) uses the .scaleAspectFit content mode, I am having immense difficulty drawing the bounding boxes correctly in portrait mode (things work great in landscape).

Here's my code:

func detectFaces(image: UIImage) {

    let detectFaceRequest = VNDetectFaceRectanglesRequest { (request, error) in
        if let results = request.results as? [VNFaceObservation] {
            for faceObservation in results {
                // Normalized bounding box (origin at the lower-left, values 0...1)
                let boundingRect = faceObservation.boundingBox

                // Scale to the image view's size, then flip the Y axis for UIKit
                let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -self.mainImageView.frame.size.height)
                let translate = CGAffineTransform.identity.scaledBy(x: self.mainImageView.frame.size.width, y: self.mainImageView.frame.size.height)
                let facebounds = boundingRect.applying(translate).applying(transform)

                let mask = CAShapeLayer()
                var maskLayer = [CAShapeLayer]()
                mask.frame = facebounds

                mask.backgroundColor = UIColor.yellow.cgColor
                mask.cornerRadius = 10
                mask.opacity = 0.3
                mask.borderColor = UIColor.yellow.cgColor
                mask.borderWidth = 2.0

                maskLayer.append(mask)
                self.mainImageView.layer.insertSublayer(mask, at: 1)
            }
        }
    }

    let vnImage = VNImageRequestHandler(cgImage: image.cgImage!, options: [:])
    try? vnImage.perform([detectFaceRequest])
}


This is the end result of what I'm seeing. Note that the boxes are correct in their X position but largely inaccurate in their Y position when in portrait.

Incorrect Placement In Portrait (screenshot)

Correct Placement In Landscape (screenshot)

2 Answers
Deceive 欺骗
#2 · 2019-08-19 00:01

I think you have to provide the correct orientation for the CIImage when sending it to be processed for face detection. As @Pawel Chmiel mentions in his blog post:

What is important here is that we need to provide the right orientation, because face detection is really sensitive at this point, and rotated image may cause no results.

 let ciImage = CIImage(cvImageBuffer: pixelBuffer!, options: attachments as! [String : Any]?)

 //leftMirrored for front camera
 let ciImageWithOrientation = ciImage.applyingOrientation(Int32(UIImageOrientation.leftMirrored.rawValue))

For the front camera, we have to use the leftMirrored orientation.
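
If you are starting from a UIImage (as in the question) rather than a camera buffer, the same idea can be applied by handing the image's orientation to the Vision request handler. The sketch below is only an illustration of that, assuming the photo's metadata carries the right orientation; the orientation(from:) helper is mine (it mirrors the usual one-to-one UIImage.Orientation to CGImagePropertyOrientation mapping) and is not part of Vision.

import UIKit
import Vision

// Assumed helper: map UIImage.Orientation onto the CGImagePropertyOrientation
// values that VNImageRequestHandler expects (the case names line up one-to-one).
func orientation(from uiOrientation: UIImage.Orientation) -> CGImagePropertyOrientation {
    switch uiOrientation {
    case .up: return .up
    case .upMirrored: return .upMirrored
    case .down: return .down
    case .downMirrored: return .downMirrored
    case .left: return .left
    case .leftMirrored: return .leftMirrored
    case .right: return .right
    case .rightMirrored: return .rightMirrored
    @unknown default: return .up
    }
}

func detectFacesRespectingOrientation(in image: UIImage,
                                      completion: @escaping ([VNFaceObservation]) -> Void) {
    guard let cgImage = image.cgImage else { return }
    let request = VNDetectFaceRectanglesRequest { request, _ in
        completion(request.results as? [VNFaceObservation] ?? [])
    }
    // Giving Vision the orientation lets it analyze the photo the way it is
    // actually displayed, so portrait images are not processed rotated.
    let handler = VNImageRequestHandler(cgImage: cgImage,
                                        orientation: orientation(from: image.imageOrientation),
                                        options: [:])
    try? handler.perform([request])
}

The completion handler here only hands back the observations; the drawing code can stay exactly as in the question.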

可以哭但决不认输i
#3 · 2019-08-19 00:19

The VNFaceObservation bounding box is normalized to the processed image. From the documentation:

The bounding box of the detected object. The coordinates are normalized to the dimensions of the processed image, with the origin at the image's lower-left corner.

So you can use a simple calculation to find the correct size/frame for the detected face, like below:

let boundingBox = observation.boundingBox
let size = CGSize(width: boundingBox.width * imageView.bounds.width,
                  height: boundingBox.height * imageView.bounds.height)
let origin = CGPoint(x: boundingBox.minX * imageView.bounds.width,
                     y: (1 - boundingBox.minY) * imageView.bounds.height - size.height)

Then you can set the CAShapeLayer's frame like below:

layer.frame = CGRect(origin: origin, size: size)
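
Because the image view in the question uses .scaleAspectFit, the image usually does not fill the view's full height in portrait, so scaling by imageView.bounds alone still leaves a vertical offset. Below is a minimal sketch of one way to account for that letterboxing: it maps the normalized box into the rect the image actually occupies, computed with AVFoundation's AVMakeRect(aspectRatio:insideRect:). The faceFrame(for:in:) helper name is mine, and the sketch assumes the request handler was given the image's orientation as in the previous answer.

import AVFoundation
import UIKit
import Vision

// Convert a normalized Vision bounding box into view coordinates for a
// UIImageView whose contentMode is .scaleAspectFit.
func faceFrame(for observation: VNFaceObservation, in imageView: UIImageView) -> CGRect? {
    guard let image = imageView.image else { return nil }

    // The rect the image actually occupies inside the (letterboxed) view.
    let imageRect = AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)

    let boundingBox = observation.boundingBox
    let width = boundingBox.width * imageRect.width
    let height = boundingBox.height * imageRect.height
    let x = imageRect.minX + boundingBox.minX * imageRect.width
    // Vision's origin is the lower-left corner; UIKit's is the upper-left, so flip Y.
    let y = imageRect.minY + (1 - boundingBox.maxY) * imageRect.height

    return CGRect(x: x, y: y, width: width, height: height)
}

With something like this, the yellow layer in the question could be positioned via mask.frame = faceFrame(for: faceObservation, in: self.mainImageView) ?? .zero, which should keep the Y placement consistent between portrait and landscape.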
