Knowing resolution of AVCaptureSession's sessionPreset

Posted 2020-05-20 08:29

I'm accessing the camera in iOS and using session presets as so:

captureSession.sessionPreset = AVCaptureSessionPresetMedium;

Pretty standard stuff. However, I'd like to know ahead of time the resolution of the video I'll be getting from this preset (especially because it differs from device to device). I know there are tables online where you can look this up (such as here: http://cmgresearch.blogspot.com/2010/10/augmented-reality-on-iphone-with-ios40.html ). But I'd like to get this programmatically so that I'm not relying on magic numbers.

So, something like this (theoretically):

[captureSession resolutionForPreset:AVCaptureSessionPresetMedium];

which might return a CGSize of { width: 360, height: 480}. I have not been able to find any such API, so far I've had to resort to waiting to get my first captured image and querying it then (which for other reasons in my program flow is not good).

8 Answers
干净又极端
#2 · 2020-05-20 08:51

According to Apple, there's no API for that. It sucks, I've had the same problem.

爱情/是我丢掉的垃圾
#3 · 2020-05-20 08:52

Maybe you can keep a list of all possible preset resolutions for every iPhone model and check which device model the app is running on, using something like this (note: platformType / platformString come from a third-party UIDevice category, not the public UIDevice API)...

[[UIDevice currentDevice] platformType]   // ex: UIDevice4GiPhone
[[UIDevice currentDevice] platformString] // ex: @"iPhone 4G"

However, you would have to update the list for each new device model. Hope this helps :)

我命由我不由天
#4 · 2020-05-20 08:54

FYI, here is the official reply I received from Apple.


This is a follow-up to Bug ID# 13201137.

Engineering has determined that this issue behaves as intended based on the following information:

There are several problems with the included code:

1) The AVCaptureSession has no inputs.

2) The AVCaptureSession has no outputs.

Without at least one input (added to the session using [AVCaptureSession addInput:]) and a compatible output (added using [AVCaptureSession addOutput:]), there will be no active connections, therefore, the session won't actually run in the input device. It doesn't need to -- there are no outputs to which to deliver any camera data.

3) The JAViewController class assumes that the video port's -formatDescription property will be non nil as soon as [AVCaptureSession startRunning] returns.

There is no guarantee that the format description will be updated with the new camera format as soon as startRunning returns. -startRunning starts up the camera and returns when it is completely up and running, but doesn't wait for video frames to be actively flowing through the capture pipeline, which is when the format description would be updated.

You're just querying too fast. If you waited a few milliseconds more, it would be there. But the right way to do this is to listen for the AVCaptureInputPortFormatDescriptionDidChangeNotification.

4) Your JAViewController class creates a PVCameraInfo object in retrieveCameraInfo: and asks it a question, then lets it fall out of scope, where it is released and dealloc'ed.

Therefore, the session doesn't have long enough to run to satisfy your dimensions request. You stop the camera too quickly.

We consider this issue closed. If you have any questions or concern regarding this issue, please update your report directly (http://bugreport.apple.com).

Thank you for taking the time to notify us of this issue.

Best Regards,

Developer Bug Reporting Team Apple Worldwide Developer Relations
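As that reply suggests, the supported way to read the dimensions is to wait for the format description to be populated and listen for the change notification rather than querying right after startRunning. A minimal Swift sketch, assuming `session` is an AVCaptureSession that already has a video input (`videoInput`) and an output attached:

```swift
import AVFoundation

// Listen for the notification Apple mentions instead of querying right
// after startRunning. `session` and `videoInput` are assumed to be
// configured elsewhere, with at least one input and one output added.
NotificationCenter.default.addObserver(
    forName: .AVCaptureInputPortFormatDescriptionDidChange,
    object: nil,
    queue: .main
) { _ in
    if let port = videoInput.ports.first(where: { $0.mediaType == .video }),
       let desc = port.formatDescription {
        let dims = CMVideoFormatDescriptionGetDimensions(desc)
        print("Capture resolution: \(dims.width)x\(dims.height)")
    }
}
session.startRunning()
```

The observer fires once frames are actually flowing, which is exactly the point in the pipeline where Apple says the format description gets updated.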

对你真心纯属浪费
#5 · 2020-05-20 08:55

You can programmatically get the resolution from activeFormat before capture begins, though not before adding inputs and outputs: https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureDevice_Class/index.html#//apple_ref/occ/instp/AVCaptureDevice/activeFormat

private func getCaptureResolution() -> CGSize {
    // Default resolution if no format description is available yet
    var resolution = CGSize(width: 0, height: 0)

    // Get the current video device (useBackCamera, backCameraDevice and
    // frontCameraDevice are properties defined elsewhere in the class)
    let curVideoDevice = useBackCamera ? backCameraDevice : frontCameraDevice

    // activeFormat reports landscape dimensions; swap them in portrait
    let portraitOrientation = orientation == .portrait || orientation == .portraitUpsideDown

    // Get video dimensions from the device's active format
    if let formatDescription = curVideoDevice?.activeFormat.formatDescription {
        let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
        resolution = CGSize(width: CGFloat(dimensions.width), height: CGFloat(dimensions.height))
        if portraitOrientation {
            resolution = CGSize(width: resolution.height, height: resolution.width)
        }
    }

    return resolution
}
beautiful°
#6 · 2020-05-20 08:55

Apple uses a 4:3 ratio for the iPhone camera.

You can use this ratio to get the frame size of the captured video by fixing either the width or height constraint of the AVCaptureVideoPreviewLayer and setting the aspect ratio constraint to 4:3.

(image: two preview layers constrained to a 4:3 ratio)

In the left image, the width was fixed to 300px and the height was retrieved by setting the 4:3 ratio, and it was 400px.

In the right image, the height was fixed to 300px and width was retrieved by setting the 3:4 ratio, and it was 225px.
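The arithmetic in the two examples above can be captured in a couple of one-liners (a sketch; the function names are my own, not from any API):

```swift
import CoreGraphics

// Derive the missing dimension of a portrait 4:3 preview, given the other.
func heightForFixedWidth(_ width: CGFloat) -> CGFloat {
    return width * 4.0 / 3.0   // e.g. 300 -> 400
}

func widthForFixedHeight(_ height: CGFloat) -> CGFloat {
    return height * 3.0 / 4.0  // e.g. 300 -> 225
}
```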

我只想做你的唯一
#7 · 2020-05-20 09:01

The best way to do what you want (get a known video or image format) is to set the format of the capture device.

First find the capture device you want to use:

if #available(iOS 10.0, *) {
    captureDevice = defaultCamera()
} else {
    // Loop through all the capture devices on this phone
    for device in AVCaptureDevice.devices() {
        // Make sure this particular device supports video
        if device.hasMediaType(AVMediaType.video) {
            // Finally check the position and confirm we've got the back camera
            if device.position == AVCaptureDevice.Position.back {
                captureDevice = device
            }
        }
    }
}
// (app-specific setup from my project follows)
self.autoLevelWindowCenter = ALCWindow.frame
if captureDevice != nil && currentUser != nil {
    beginSession()
}




func defaultCamera() -> AVCaptureDevice? {
    if #available(iOS 10.0, *) {
        // only use the wide angle camera, never the dual camera
        return AVCaptureDevice.default(.builtInWideAngleCamera,
                                       for: .video,
                                       position: .back)
    } else {
        return nil
    }
}

Then find the formats that that device can use:

let options = captureDevice!.formats
var supportable = options.first!
for format in options {
    // Fragile: this matches against the format's debug description string
    let description = format.description
    if description.contains("60 fps") && description.contains("1280x 720") {
        supportable = format
    }
}

You can do more complex parsing of the formats, but you might not care.
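If you do want something sturdier than string matching, one option (a sketch, not from my original code) is to compare each format's dimensions and frame-rate ranges directly:

```swift
import AVFoundation

// Pick a format by its actual dimensions and supported frame rate instead
// of string-matching its debug description. `device` is the AVCaptureDevice
// found earlier.
func matchingFormat(for device: AVCaptureDevice,
                    width: Int32, height: Int32,
                    fps: Double) -> AVCaptureDevice.Format? {
    return device.formats.first { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        let supportsRate = format.videoSupportedFrameRateRanges.contains {
            $0.minFrameRate <= fps && fps <= $0.maxFrameRate
        }
        return dims.width == width && dims.height == height && supportsRate
    }
}
// e.g. matchingFormat(for: captureDevice!, width: 1280, height: 720, fps: 60)
```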

Then just set the device to that format:

do {
    try captureDevice?.lockForConfiguration()
    captureDevice!.activeFormat = supportable
    // set up other capture device settings like autofocus, frame rate, ISO, shutter speed, etc.
    captureDevice?.unlockForConfiguration()
    // add the device to an active capture session
    captureSession.addInput(try AVCaptureDeviceInput(device: captureDevice!))
} catch {
    print("Could not configure capture device: \(error)")
}

You may want to look at the AVFoundation docs and tutorial on AVCaptureSession as there are lots of things you can do with the output as well. For example, you can convert the result to .mp4 using AVAssetExportSession so that you can post it on YouTube, etc.

Hope this helps
