How do I use the metadataOutputRectOfInterestForRect method in Swift?

Posted 2019-01-16 16:38

Question:

I am building a QR code scanner with Swift and everything works in that regard. The issue I have is that I want only a small area of the visible AVCaptureVideoPreviewLayer to be able to scan QR codes. I have found out that, in order to specify which area of the screen can read/capture QR codes, I have to use the rectOfInterest property of AVCaptureMetadataOutput. The trouble is that when I assigned a CGRect to that property, I couldn't scan anything. After doing more research online I found suggestions that I need to use a method called metadataOutputRectOfInterestForRect to convert a CGRect into the format the rectOfInterest property can actually use. HOWEVER, the big issue I have run into now is that when I use metadataOutputRectOfInterestForRect I get an error that states CGAffineTransformInvert: singular matrix. Can anyone tell me why I am getting this error? I believe I am using the method properly according to the Apple developer documentation, and all the information I have found online says this is what I need to accomplish my goal. I will include links to the documentation I have found so far, as well as a code sample of the function I am using to scan QR codes.

CODE SAMPLE

func startScan() {
        // Get an instance of the AVCaptureDevice class to initialize a device object and provide the video
        // as the media type parameter.
        let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

        // Get an instance of the AVCaptureDeviceInput class using the previous device object.
        var error:NSError?
        let input: AnyObject! = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &error)

        if (error != nil) {
            // If any error occurs, simply log the description of it and don't continue any more.
            println("\(error?.localizedDescription)")
            return
        }

        // Initialize the captureSession object.
        captureSession = AVCaptureSession()
        // Set the input device on the capture session.
        captureSession?.addInput(input as! AVCaptureInput)

        // Initialize a AVCaptureMetadataOutput object and set it as the output device to the capture session.
        let captureMetadataOutput = AVCaptureMetadataOutput()
        captureSession?.addOutput(captureMetadataOutput)

        // calculate a centered square rectangle with red border
        let size = 300
        let screenWidth = self.view.frame.size.width
        let xPos = (CGFloat(screenWidth) / CGFloat(2)) - (CGFloat(size) / CGFloat(2))
        let scanRect = CGRect(x: Int(xPos), y: 150, width: size, height: size)

        // create UIView that will serve as a red square to indicate where to place the QR code for scanning
        scanAreaView = UIView()
        scanAreaView?.layer.borderColor = UIColor.redColor().CGColor
        scanAreaView?.layer.borderWidth = 4
        scanAreaView?.frame = scanRect
        view.addSubview(scanAreaView!)

        // Set delegate and use the default dispatch queue to execute the call back
        captureMetadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
        captureMetadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode]



        // Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        videoPreviewLayer?.frame = view.layer.bounds
        captureMetadataOutput.rectOfInterest = videoPreviewLayer!.metadataOutputRectOfInterestForRect(scanRect)
        view.layer.addSublayer(videoPreviewLayer!)

        // Start video capture.
        captureSession?.startRunning()

        // Initialize QR Code Frame to highlight the QR code
        qrCodeFrameView = UIView()
        qrCodeFrameView?.layer.borderColor = UIColor.greenColor().CGColor
        qrCodeFrameView?.layer.borderWidth = 2
        view.addSubview(qrCodeFrameView!)
        view.bringSubviewToFront(qrCodeFrameView!)

        // Add a button that will be used to close out of the scan view
        videoBtn.setTitle("Close", forState: .Normal)
        videoBtn.setTitleColor(UIColor.blackColor(), forState: .Normal)
        videoBtn.backgroundColor = UIColor.grayColor()
        videoBtn.layer.cornerRadius = 5.0;
        videoBtn.frame = CGRectMake(10, 30, 70, 45)
        videoBtn.addTarget(self, action: "pressClose:", forControlEvents: .TouchUpInside)
        view.addSubview(videoBtn)


        view.bringSubviewToFront(scanAreaView!)

    }

Please note that the line of interest causing the error is this: captureMetadataOutput.rectOfInterest = videoPreviewLayer!.metadataOutputRectOfInterestForRect(scanRect)

Other things I have tried include passing a CGRect in directly as the parameter, which caused the same error, and passing in scanAreaView!.bounds, since that is exactly the size/area I am looking for, which also caused the same error. I have seen this done in others' code examples online and they do not seem to run into the error I am having. Here are some examples:

AVCaptureSession barcode scan

Xcode AVCapturesession scan Barcode in specific frame (rectOfInterest is not working)

Apple documentation

metadataOutputRectOfInterestForRect

rectOfInterest

Image of the scanAreaView I am using as the designated area, which I am trying to make the only scannable part of the video preview layer:

Answer 1:

I wasn't really able to pin down the issue with metadataOutputRectOfInterestForRect; however, you can also set the rectOfInterest property directly. You need to have the width and height of your video's resolution, which you can specify in advance. I quickly used the 640x480 setting. As stated in the documentation, these values have to be

"extending from (0,0) in the top left to (1,1) in the bottom right, relative to the device’s natural orientation".

See https://developer.apple.com/documentation/avfoundation/avcaptureoutput/1616304-metadataoutputrectofinterestforr

Below is the code I tried

var x = scanRect.origin.x/480
var y = scanRect.origin.y/640
var width = scanRect.width/480
var height = scanRect.height/640
var scanRectTransformed = CGRectMake(x, y, width, height)
captureMetadataOutput.rectOfInterest = scanRectTransformed

I just tested it on an iOS device and it seems to work.
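
If you go the manual route, it may help to keep the hard-coded video dimensions in one place. Here is a small sketch under the same assumptions as above (a 640x480 session preset and a portrait layout); makeRectOfInterest, videoWidth and videoHeight are names I made up for illustration:

func makeRectOfInterest(from scanRect: CGRect,
                        videoWidth: CGFloat = 480,   // assumed dimension along the screen's x axis
                        videoHeight: CGFloat = 640) -> CGRect { // assumed dimension along the screen's y axis
    // Normalize the on-screen rect into the 0...1 coordinate space
    // that AVCaptureMetadataOutput.rectOfInterest expects.
    return CGRect(x: scanRect.origin.x / videoWidth,
                  y: scanRect.origin.y / videoHeight,
                  width: scanRect.width / videoWidth,
                  height: scanRect.height / videoHeight)
}

// Usage, mirroring the snippet above:
// captureMetadataOutput.rectOfInterest = makeRectOfInterest(from: scanRect)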

Edit

At least I've solved the metadataOutputRectOfInterestForRect problem. I believe you have to call it after the camera has been properly set up and is running, since the camera's resolution is not available before then.

First, add a notification observer method within viewDidLoad()

NSNotificationCenter.defaultCenter().addObserver(self, selector: Selector("avCaptureInputPortFormatDescriptionDidChangeNotification:"), name:AVCaptureInputPortFormatDescriptionDidChangeNotification, object: nil)

Then add the following method

func avCaptureInputPortFormatDescriptionDidChangeNotification(notification: NSNotification) {

    captureMetadataOutput.rectOfInterest = videoPreviewLayer.metadataOutputRectOfInterestForRect(scanRect)

}

Here you can then reset the rectOfInterest property. Then, in your code, you can display the AVMetadataObject within the didOutputMetadataObjects function

var rect = videoPreviewLayer.rectForMetadataOutputRectOfInterest(YourAVMetadataObject.bounds)

dispatch_async(dispatch_get_main_queue(),{
     self.qrCodeFrameView.frame = rect
})

I've tried it, and the rectangle was always within the specified area.
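
For reference, here is roughly what the same observer-based approach looks like in current Swift (Swift 4+), using the renamed conversion methods metadataOutputRectConverted(fromLayerRect:) and layerRectConverted(fromMetadataOutputRect:). This is only a sketch; videoPreviewLayer, captureMetadataOutput, scanRect and qrCodeFrameView are assumed to be properties of the view controller, as in the question:

// In viewDidLoad():
NotificationCenter.default.addObserver(
    self,
    selector: #selector(inputPortFormatDescriptionDidChange(_:)),
    name: .AVCaptureInputPortFormatDescriptionDidChange,
    object: nil)

@objc func inputPortFormatDescriptionDidChange(_ notification: Notification) {
    // By the time this fires, the preview layer knows the video dimensions,
    // so the conversion no longer runs into a degenerate transform.
    guard let previewLayer = videoPreviewLayer else { return }
    captureMetadataOutput.rectOfInterest =
        previewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)
}

// And inside metadataOutput(_:didOutput:from:), to highlight a detected code:
// if let rect = videoPreviewLayer?.layerRectConverted(fromMetadataOutputRect: metadataObject.bounds) {
//     qrCodeFrameView?.frame = rect
// }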



Answer 2:

In iOS 9.3.2 I was able to make metadataOutputRectOfInterestForRect work by calling it right after the startRunning method of AVCaptureSession:

captureSession.startRunning()
let visibleRect = previewLayer.metadataOutputRectOfInterestForRect(previewLayer.bounds)
captureMetadataOutput.rectOfInterest = visibleRect


Answer 3:

Swift 4:

captureSession?.startRunning()
let scanRect = CGRect(x: 0, y: 0, width: 100, height: 100)
let rectOfInterest = layer.metadataOutputRectConverted(fromLayerRect: scanRect)
metaDataOutput.rectOfInterest = rectOfInterest
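
For context, here is a minimal, self-contained sketch of where that snippet fits into a scanner setup in current Swift. The class name, the scanRect values and the reduced error handling are my own additions, not part of the answer:

import AVFoundation
import UIKit

final class ScannerViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {
    private let captureSession = AVCaptureSession()
    private let captureMetadataOutput = AVCaptureMetadataOutput()
    private var videoPreviewLayer: AVCaptureVideoPreviewLayer!

    override func viewDidLoad() {
        super.viewDidLoad()

        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device),
              captureSession.canAddInput(input),
              captureSession.canAddOutput(captureMetadataOutput) else { return }

        captureSession.addInput(input)
        captureSession.addOutput(captureMetadataOutput)
        captureMetadataOutput.setMetadataObjectsDelegate(self, queue: .main)
        captureMetadataOutput.metadataObjectTypes = [.qr]

        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        videoPreviewLayer.videoGravity = .resizeAspectFill
        videoPreviewLayer.frame = view.layer.bounds
        view.layer.addSublayer(videoPreviewLayer)

        captureSession.startRunning()

        // Convert only after the session is running and the layer has its
        // final frame; doing it earlier is what can produce the
        // CGAffineTransformInvert: singular matrix error from the question.
        let scanRect = CGRect(x: 50, y: 150, width: 300, height: 300)
        captureMetadataOutput.rectOfInterest =
            videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        // Handle detected QR codes here.
    }
}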


Answer 4:

I managed to create the effect of having a region of interest. I tried all the proposed solutions, but the region was either zero or had an inappropriate size (after converting the frame to the 0-1 coordinate space). What follows is really a hack for those who can't get rectOfInterest to work, and it doesn't make the detection itself any more efficient.

In:

func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) 

I have the following code:

// metadataObj is one of the objects delivered in metadataObjects
if let visualCodeObject = videoPreviewLayer?.transformedMetadataObject(for: metadataObj),
   self.viewfinderView.frame.contains(visualCodeObject.bounds) {
    // The visual code is inside the viewfinder; you can now handle the detection.
}


Answer 5:

/// After

captureSession.startRunning()

/// Add this

if let videoPreviewLayer = self.videoPreviewLayer {
    self.captureMetadataOutput.rectOfInterest =
        videoPreviewLayer.metadataOutputRectOfInterest(for: self.getRectOfInterest())
}

fileprivate func getRectOfInterest() -> CGRect {
    let centerX = (self.frame.width / 2) - 100
    let centerY = (self.frame.height / 2) - 100
    let quadr: CGFloat = 200

    let myRect = CGRect(x: centerX, y: centerY, width: quadr, height: quadr)

    return myRect
}


Answer 6:

To read a QR code/barcode from a small rect (a specific region) of the full camera view, it is mandatory to keep the setRectOfInterest call after startRunning:

[_captureSession startRunning];
[captureMetadataOutput setRectOfInterest:[_videoPreviewLayer metadataOutputRectOfInterestForRect:scanRect]];

Note:

  1. captureMetadataOutput --> AVCaptureMetadataOutput
  2. _videoPreviewLayer --> AVCaptureVideoPreviewLayer
  3. scanRect --> Rect where you want the QRCode to be read.
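
For completeness, the same ordering in Swift would look roughly like this (using the property names from this thread; in Swift 4+ the conversion method is named metadataOutputRectConverted(fromLayerRect:)):

captureSession.startRunning()
captureMetadataOutput.rectOfInterest =
    videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)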


Answer 7:

I wrote the following:

videoPreviewLayer?.frame = view.layer.bounds
videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill

And this worked for me, but I still don't know why.