iPhone 6 camera calibration for OpenCV

Posted 2020-06-22 01:24

I'm developing an iOS augmented reality application using OpenCV. I'm having trouble creating the camera projection matrix so that the OpenGL overlay maps directly on top of the marker. I suspect this is because my iPhone 6 camera isn't correctly calibrated for the application. I know OpenCV ships code to calibrate webcams and the like using a chessboard, but I can't find a way to calibrate my embedded iPhone camera.
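
For what it's worth, the chessboard routine itself doesn't look webcam-specific; presumably the same cv::calibrateCamera pipeline runs on photos taken with the phone, either offline or in an Objective-C++ file inside the app. A rough sketch, where the 9x6 inner-corner pattern and the image list are placeholder assumptions:

#include <opencv2/opencv.hpp>
#include <vector>

// Calibrate from chessboard photos shot with the iPhone camera.
// patternSize is the count of inner corners (e.g. 9x6), squareSize the
// physical edge length of one square; both are placeholders here.
cv::Mat calibrateFromImages(const std::vector<cv::String> &files,
                            cv::Size patternSize, float squareSize,
                            cv::Mat &distCoeffs)
{
    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;

    // One planar 3D template of the board corners, reused for every view
    std::vector<cv::Point3f> board;
    for (int y = 0; y < patternSize.height; ++y)
        for (int x = 0; x < patternSize.width; ++x)
            board.push_back(cv::Point3f(x * squareSize, y * squareSize, 0));

    cv::Size imageSize;
    for (size_t i = 0; i < files.size(); ++i) {
        cv::Mat gray = cv::imread(files[i], cv::IMREAD_GRAYSCALE);
        if (gray.empty()) continue;
        imageSize = gray.size();
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(gray, patternSize, corners)) {
            // Refine corner locations to sub-pixel accuracy
            cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                 30, 0.01));
            imagePoints.push_back(corners);
            objectPoints.push_back(board);
        }
    }

    cv::Mat K;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                        K, distCoeffs, rvecs, tvecs);
    return K; // 3x3 camera matrix holding fx, fy, cx, cy
}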

Is there a way? Or are there known estimates for the iPhone 6, namely the focal length in x and y, the principal point in x and y, and the distortion coefficient matrix?

Any help will be appreciated.

EDIT:

Deduced values are as follows (using iPhone 6, camera feed resolution 1280x720):

fx=1229
cx=360
fy=1153
cy=640
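
For OpenCV these drop straight into the 3x3 camera matrix. A minimal sketch (the zero distortion vector is an assumption, since nothing here estimates distortion coefficients; note that cx=360 and cy=640 imply a portrait-oriented 720x1280 buffer):

#include <opencv2/core.hpp>

// Camera matrix assembled from the values above
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
    1229,    0,  360,
       0, 1153,  640,
       0,    0,    1);

// No distortion is estimated by this method; assume zero coefficients
cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);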

The following code provides an accurate estimate of the focal length and principal point on devices running iOS 9.1:

// deviceInput is the AVCaptureDeviceInput feeding the capture session
AVCaptureDeviceFormat *format = deviceInput.device.activeFormat;
CMFormatDescriptionRef fDesc = format.formatDescription;
CGSize dim = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true);

// Principal point: assume it sits at the image centre
float cx = (float)dim.width / 2.0f;
float cy = (float)dim.height / 2.0f;

// videoFieldOfView is the horizontal FOV in degrees; scale it by the
// aspect ratio (== HFOV * height / width) to approximate the vertical FOV
float HFOV = format.videoFieldOfView;
float VFOV = (HFOV / cx) * cy;

// Focal lengths in pixels from the pinhole model: f = w / (2 * tan(fov / 2))
// (fabsf/tanf instead of abs/tan, which would truncate the float argument)
float fx = fabsf((float)dim.width  / (2.0f * tanf(HFOV / 180.0f * (float)M_PI / 2.0f)));
float fy = fabsf((float)dim.height / (2.0f * tanf(VFOV / 180.0f * (float)M_PI / 2.0f)));
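
Scaling the horizontal FOV linearly by the aspect ratio (the VFOV line above) is only an approximation. Under the pinhole model the exact conversion goes through the tangent, and with square pixels it makes fy come out equal to fx. A sketch reusing HFOV and dim from above:

// Exact vertical FOV in degrees: vfov = 2 * atan(tan(hfov / 2) * h / w)
// With this value, fy computed as above equals fx (square pixels)
float vfovExact = 2.0f * atanf(tanf(HFOV * (float)M_PI / 360.0f)
                               * (float)dim.height / (float)dim.width)
                  * 180.0f / (float)M_PI;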

NOTE:

I had an initialization issue with this code. I recommend that once the values are initialized and correct, you save them to a data file and read them back in from that file.
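
One way to do that persistence, sketched here with OpenCV's cv::FileStorage (the file path and key names are placeholders; any serialization would do):

#include <opencv2/core.hpp>
#include <string>

// Write the intrinsics once they are known to be valid
void saveIntrinsics(const std::string &path,
                    const cv::Mat &cameraMatrix, const cv::Mat &distCoeffs)
{
    cv::FileStorage fs(path, cv::FileStorage::WRITE);
    fs << "cameraMatrix" << cameraMatrix;
    fs << "distCoeffs" << distCoeffs;
}

// Read them back on launch instead of re-deriving them
bool loadIntrinsics(const std::string &path,
                    cv::Mat &cameraMatrix, cv::Mat &distCoeffs)
{
    cv::FileStorage fs(path, cv::FileStorage::READ);
    if (!fs.isOpened()) return false;
    fs["cameraMatrix"] >> cameraMatrix;
    fs["distCoeffs"] >> distCoeffs;
    return true;
}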

1 Answer

啃猪蹄的小仙女 · 2020-06-22 02:06

In my non-OpenCV AR application I use the field of view (FOV) of the iPhone's camera to construct the camera projection matrix. It works well enough for displaying the Sun's path overlaid on top of the camera view. I don't know how much accuracy you need; it could be that knowing only the FOV is not enough for you.

The iOS API provides a way to get the camera's field of view. I get it like so:

AVCaptureDevice *camera = ... // the active capture device
AVCaptureDeviceFormat *format = camera.activeFormat;
float fieldOfView = format.videoFieldOfView; // horizontal FOV, in degrees

After getting the FOV I compute the projection matrix:

typedef double mat4f_t[16]; // 4x4 matrix in column major order    

mat4f_t projection;
createProjectionMatrix(projection,
                       GRAD_TO_RAD(fieldOfView),
                       viewSize.width/viewSize.height,
                       5.0f,
                       1000.0f);

where

void createProjectionMatrix(
        mat4f_t mout,
        float fovy,
        float aspect,
        float zNear,
        float zFar)
{
    // Same matrix as gluPerspective: fovy is the vertical FOV in radians,
    // and the output is column-major
    float f = 1.0f / tanf(fovy/2.0f);

    mout[0] = f / aspect;   // x scale
    mout[1] = 0.0f;
    mout[2] = 0.0f;
    mout[3] = 0.0f;

    mout[4] = 0.0f;
    mout[5] = f;            // y scale
    mout[6] = 0.0f;
    mout[7] = 0.0f;

    mout[8] = 0.0f;
    mout[9] = 0.0f;
    mout[10] = (zFar+zNear) / (zNear-zFar);   // depth remap to clip space
    mout[11] = -1.0f;                         // perspective divide by -z

    mout[12] = 0.0f;
    mout[13] = 0.0f;
    mout[14] = 2 * zFar * zNear / (zNear-zFar);
    mout[15] = 0.0f;
}
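
GRAD_TO_RAD isn't defined above; presumably it converts degrees to radians, since videoFieldOfView is reported in degrees. Note also that videoFieldOfView is the camera's horizontal FOV, while this matrix expects a vertical fovy, so a tangent-space conversion may give a better fit. A sketch under those assumptions:

// Assumed helper: AVFoundation reports videoFieldOfView in degrees
#define GRAD_TO_RAD(deg) ((deg) * M_PI / 180.0)

// Vertical FOV (radians) from the horizontal one
// (pinhole model, square pixels); pass this as fovy
float fovyRad = 2.0f * atanf(tanf((float)GRAD_TO_RAD(fieldOfView) / 2.0f)
                             * (float)(viewSize.height / viewSize.width));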