For reference, this stems from a question about the Vision API. I am using Vision to detect faces in an image via a VNDetectFaceRectanglesRequest, which is working correctly in terms of determining the right number of faces in an image and providing the boundingBox for each face.
My trouble is that because my UIImageView (which holds the UIImage in question) uses the .scaleAspectFit content mode, I am having immense difficulty drawing the bounding boxes correctly in portrait mode (things work great in landscape).
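For context on why .scaleAspectFit complicates this: the image only occupies a sub-rectangle of the view, letterboxed along one axis. Below is a minimal standalone sketch (the helper name `aspectFitRect` and the sizes are my own, for illustration) of computing that sub-rectangle:

```swift
import Foundation

// Hypothetical helper: the rect a .scaleAspectFit image actually occupies
// inside its view's bounds (centered, letterboxed on the shorter axis).
func aspectFitRect(imageSize: CGSize, viewBounds: CGRect) -> CGRect {
    let scale = min(viewBounds.width / imageSize.width,
                    viewBounds.height / imageSize.height)
    let fitted = CGSize(width: imageSize.width * scale,
                        height: imageSize.height * scale)
    return CGRect(x: viewBounds.midX - fitted.width / 2,
                  y: viewBounds.midY - fitted.height / 2,
                  width: fitted.width,
                  height: fitted.height)
}
```

For example, a 3000×4000 photo aspect-fitted into a 375×667 view scales to 375×500 and sits 83.5 points below the view's top edge, so any drawing done against the view's full frame ends up vertically offset.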
Here's my code:
    func detectFaces(image: UIImage) {
        let detectFaceRequest = VNDetectFaceRectanglesRequest { (request, error) in
            if let results = request.results as? [VNFaceObservation] {
                for faceObservation in results {
                    // Vision's boundingBox is normalized (0...1) with a bottom-left origin
                    let boundingRect = faceObservation.boundingBox
                    // Flip vertically into UIKit's top-left-origin coordinate space
                    let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -self.mainImageView.frame.size.height)
                    // Scale the normalized rect up to the image view's size
                    let translate = CGAffineTransform.identity.scaledBy(x: self.mainImageView.frame.size.width, y: self.mainImageView.frame.size.height)
                    let facebounds = boundingRect.applying(translate).applying(transform)

                    // Draw a translucent yellow box over the detected face
                    let mask = CAShapeLayer()
                    var maskLayer = [CAShapeLayer]()
                    mask.frame = facebounds
                    mask.backgroundColor = UIColor.yellow.cgColor
                    mask.cornerRadius = 10
                    mask.opacity = 0.3
                    mask.borderColor = UIColor.yellow.cgColor
                    mask.borderWidth = 2.0
                    maskLayer.append(mask)
                    self.mainImageView.layer.insertSublayer(mask, at: 1)
                }
            }
        }

        let vnImage = VNImageRequestHandler(cgImage: image.cgImage!, options: [:])
        try? vnImage.perform([detectFaceRequest])
    }
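For reference, the two CGAffineTransforms above amount to the following mapping, written out by hand (the view size of 300×400 points is a hypothetical example of mine, not from the app):

```swift
import Foundation

// Vision's boundingBox is normalized ([0, 1] on each axis) with a
// bottom-left origin; UIKit layers use a top-left origin. Scale the
// normalized rect up to the view size, then flip it vertically.
func viewRect(forNormalized box: CGRect, viewSize: CGSize) -> CGRect {
    let scaled = CGRect(x: box.origin.x * viewSize.width,
                        y: box.origin.y * viewSize.height,
                        width: box.width * viewSize.width,
                        height: box.height * viewSize.height)
    // Top-left y = viewHeight - (bottom-left y + height)
    return CGRect(x: scaled.origin.x,
                  y: viewSize.height - scaled.origin.y - scaled.height,
                  width: scaled.width,
                  height: scaled.height)
}
```

Note this maps into the view's full bounds; Vision also provides VNImageRectForNormalizedRect(_:_:_:) for converting a normalized rect into image-pixel coordinates, which is a separate space from the view's.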
This is the end result of what I'm seeing; note that the boxes are correct in their X position but largely inaccurate in their Y position when in portrait.
**Incorrect Placement In Portrait**
**Correct Placement In Landscape**