I wish to convert a MATLAB stereoParameters structure to intrinsics and extrinsics matrices to use in OpenCV's stereoRectify.
If I understood http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html and http://mathworks.com/help/vision/ref/stereoparameters-class.html correctly, stereoParameters.CameraParameters1 and stereoParameters.CameraParameters2 store the intrinsic matrices, while the other members of stereoParameters store the extrinsics.
I think I have worked out the following mapping.
Intrinsics:
- cameraMatrix1 = stereoParameters.CameraParameters1.IntrinsicMatrix'
- cameraMatrix2 = stereoParameters.CameraParameters2.IntrinsicMatrix'
- distCoeffs1 = [stereoParameters.CameraParameters1.RadialDistortion(1:2), stereoParameters.CameraParameters1.TangentialDistortion, stereoParameters.CameraParameters1.RadialDistortion(3)]
- distCoeffs2 = [stereoParameters.CameraParameters2.RadialDistortion(1:2), stereoParameters.CameraParameters2.TangentialDistortion, stereoParameters.CameraParameters2.RadialDistortion(3)]
(the RadialDistortion(3) element exists only if three radial distortion coefficients were estimated; with MATLAB's default of two, I drop it)
Extrinsics:
- R = stereoParameters.RotationOfCamera2'
- T = stereoParameters.TranslationOfCamera2'
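In Python, the mapping above can be sketched as follows; all numeric values are hypothetical placeholders standing in for arrays exported from MATLAB (e.g. via a .mat file):

```python
import numpy as np

# Hypothetical stand-ins for stereoParameters.CameraParameters1 exported
# from MATLAB. MATLAB's IntrinsicMatrix is the transpose of OpenCV's
# camera matrix (MATLAB post-multiplies row vectors).
intrinsic_matrix_matlab = np.array([[800.0,   0.0, 0.0],
                                    [  0.0, 800.0, 0.0],
                                    [320.0, 240.0, 1.0]])
radial = np.array([-0.25, 0.08])      # RadialDistortion (k1, k2)
tangential = np.array([1e-3, -2e-4])  # TangentialDistortion (p1, p2)

cameraMatrix1 = intrinsic_matrix_matlab.T  # OpenCV camera matrix
# OpenCV expects [k1, k2, p1, p2(, k3)]; append a k3 element only if
# three radial coefficients were estimated in MATLAB.
distCoeffs1 = np.concatenate([radial[:2], tangential])
```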
Is that correct, so far?
Still, I can't see how to get
- R1 (3x3)
- R2 (3x3)
- P1 (3x4)
- P2 (3x4)
- Q (4x4)
matrices from the rest of stereoParameters.
Is there an existing converter I can use, and if not, what are the formulas?
As you already found out, both camera matrices need to be transposed, due to the different notation used by MATLAB (which post-multiplies row vectors) and OpenCV (which pre-multiplies column vectors). The same applies to the rotation matrix and the translation vector between the cameras: stereoParams.RotationOfCamera2 and stereoParams.TranslationOfCamera2 need to be transposed to obtain OpenCV's R matrix and T vector.
(Quick validation: R should be close to an identity matrix if the cameras are almost parallel and the first element of T should match your baseline between the cameras in millimeters.)
OpenCV's distortion coefficient vector [k1, k2, p1, p2(, k3)] is composed of MATLAB's two radial distortion coefficients, followed by the two tangential distortion coefficients (and the third radial coefficient, if it was estimated).
With these inputs, I was able to compute correct R1, R2, P1, P2 and Q using R1, R2, P1, P2, Q, leftROI, rightROI = cv2.stereoRectify(leftCamMatrix, leftDistCoeffs, rightCamMatrix, rightDistCoeffs, imageSize, R, T, flags=cv2.CALIB_ZERO_DISPARITY, alpha=0), where imageSize is given as (width, height).
Note that, for data-type reasons, the disparity values obtained with OpenCV's stereo matchers need to be divided by 16, and the coordinates in the 3-D point cloud returned by cv2.reprojectImageTo3D need to be divided by 64 to obtain metric values.
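The fixed-point scaling of the disparity can be sketched as follows (the raw values are hypothetical matcher output):

```python
import numpy as np

# StereoBM/StereoSGBM return disparities as 16-bit fixed-point values with
# 4 fractional bits, i.e. the true disparity multiplied by 16.
raw_disp = np.array([[800, 1600, -16]], dtype=np.int16)  # hypothetical compute() output
disp = raw_disp.astype(np.float32) / 16.0  # true disparities: 50, 100, and -1
# (-1, i.e. minDisparity - 1, marks pixels where no match was found)
```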
(Quick validation: when grabbing the coordinates of the same object in the rectified left and right image, the y-coordinates should be almost equal, and you should be able to compute the object distance in meters as f*B/(x_right-x_left)/1000, where f is the combined focal length of the virtual camera in Q and B the baseline in millimeters.)
https://stackoverflow.com/a/28317841 gives the formula for the Q matrix:
Tx is the first element of the translation vector T; cx, cy and cx' come from the camera matrices; f is a sensible combination of their x and y focal lengths.
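Following that formula, the Q matrix and the reprojection it performs can be sketched as below; all numeric values are hypothetical, and the sign convention for Tx may differ in your setup:

```python
import numpy as np

# Hypothetical rectified parameters: focal length f, left principal point
# (cx, cy), right principal point x-coordinate cx2, and baseline Tx in mm.
f, cx, cy, cx2, Tx = 500.0, 320.0, 240.0, 320.0, -100.0

# Q matrix per the linked answer (zero-disparity case when cx == cx2).
Q = np.array([
    [1.0, 0.0, 0.0,       -cx],
    [0.0, 1.0, 0.0,       -cy],
    [0.0, 0.0, 0.0,         f],
    [0.0, 0.0, -1.0 / Tx, (cx - cx2) / Tx],
])

# Reproject one pixel (x, y) with disparity d, as reprojectImageTo3D does.
x, y, d = 420.0, 240.0, 50.0
X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
point = np.array([X, Y, Z]) / W  # 3-D point; here Z = f * |Tx| / d = 1000 mm
```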
I still don't know how to get P1, P2, R1 and R2, though. Anybody?