Given a set of 2D points, how can I apply the opposite of undistortPoints?

I have the camera intrinsics and distCoeffs, and would like to (for example) create a square and distort it as if the camera had viewed it through the lens.

I have found a "distort" patch here: http://code.opencv.org/issues/1387 but it seems it is only good for images; I want to work on sparse points.
This question is rather old, but since I ended up here from a Google search without seeing a neat answer, I decided to answer it anyway.

There is a function called projectPoints that does exactly this. Its C version is used internally by OpenCV when estimating camera parameters in functions like calibrateCamera and stereoCalibrate.

EDIT:

To use 2D points as input, we can set all z-coordinates to 1 with convertPointsToHomogeneous and use projectPoints with no rotation and no translation. Note that the input 2D points must already be in normalized (undistorted) camera coordinates, e.g. the output of undistortPoints.
cv::Mat points2d = ...;
cv::Mat points3d;
cv::Mat distorted_points2d;
cv::convertPointsToHomogeneous(points2d, points3d);
cv::projectPoints(points3d, cv::Vec3f(0, 0, 0), cv::Vec3f(0, 0, 0),
                  camera_matrix, dist_coeffs, distorted_points2d);
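To tie this back to the original goal of distorting a square, here is a minimal self-contained sketch of the same idea. The intrinsics and distortion coefficients are made-up example values, and the square's corners are given in normalized (undistorted) camera coordinates, which is what this projectPoints trick expects:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Hypothetical intrinsics and distortion coefficients (k1, k2, p1, p2, k3).
    cv::Mat camera_matrix = (cv::Mat_<double>(3, 3) <<
        1000.,    0., 320.,
           0., 1000., 240.,
           0.,    0.,   1.);
    cv::Mat dist_coeffs = (cv::Mat_<double>(1, 5) << 0.1, -0.05, 0.001, 0.001, 0.);

    // Corners of a square in normalized (undistorted) camera coordinates.
    std::vector<cv::Point2d> square = {
        {-0.1, -0.1}, {0.1, -0.1}, {0.1, 0.1}, {-0.1, 0.1}
    };

    // Lift to z = 1 and project with zero rotation/translation: the result is
    // the square in pixel coordinates, as the lens would have imaged it.
    cv::Mat points3d;
    cv::convertPointsToHomogeneous(square, points3d);
    std::vector<cv::Point2d> distorted;
    cv::projectPoints(points3d, cv::Vec3d(0, 0, 0), cv::Vec3d(0, 0, 0),
                      camera_matrix, dist_coeffs, distorted);

    for (const cv::Point2d &p : distorted)
        std::cout << p << std::endl;
}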
A simple solution is to use initUndistortRectifyMap to obtain a map from undistorted coordinates to distorted ones:
cv::Mat K = ...; // 3x3 intrinsic parameters
cv::Mat D = ...; // 4x1 or similar distortion parameters
int W = 640;     // image width
int H = 480;     // image height
cv::Mat mapx, mapy;
cv::initUndistortRectifyMap(K, D, cv::Mat(), K, cv::Size(W, H),
                            CV_32F, mapx, mapy);

// Distorted position of the undistorted integer pixel (x, y):
float distorted_x = mapx.at<float>(y, x);
float distorted_y = mapy.at<float>(y, x);
EDIT (to clarify that the code above is correct):

I quote the documentation of initUndistortRectifyMap:

"For each pixel (u, v) in the destination (corrected and rectified) image, the function computes the corresponding coordinates in the source image (that is, in the original image from the camera):

map_x(u, v) = x'' * f_x + c_x
map_y(u, v) = y'' * f_y + c_y"
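To apply that map to sparse points rather than to a whole image, you can sample mapx and mapy directly. Here is a small hedged helper of my own (not part of the original answer) that looks up the distorted position of a sub-pixel undistorted point by bilinearly interpolating the maps; it assumes the point lies strictly inside the map:

// Hypothetical helper: given mapx/mapy from initUndistortRectifyMap above,
// return the distorted source position of an undistorted point p.
cv::Point2f distortViaMap(const cv::Mat &mapx, const cv::Mat &mapy, cv::Point2f p)
{
    int x0 = static_cast<int>(p.x), y0 = static_cast<int>(p.y);
    float ax = p.x - x0, ay = p.y - y0; // fractional parts
    auto sample = [&](const cv::Mat &m) {
        return (1 - ax) * (1 - ay) * m.at<float>(y0,     x0)
             +      ax  * (1 - ay) * m.at<float>(y0,     x0 + 1)
             + (1 - ax) *      ay  * m.at<float>(y0 + 1, x0)
             +      ax  *      ay  * m.at<float>(y0 + 1, x0 + 1);
    };
    return cv::Point2f(sample(mapx), sample(mapy));
}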
I had exactly the same need. Here is a possible solution:
#include <opencv2/opencv.hpp>
#include <cassert>
#include <cmath>
#include <iostream>
#include <vector>

void MyDistortPoints(const std::vector<cv::Point2d> & src, std::vector<cv::Point2d> & dst,
                     const cv::Mat & cameraMatrix, const cv::Mat & distorsionMatrix)
{
  dst.clear();
  double fx = cameraMatrix.at<double>(0, 0);
  double fy = cameraMatrix.at<double>(1, 1);
  double ux = cameraMatrix.at<double>(0, 2);
  double uy = cameraMatrix.at<double>(1, 2);
  double k1 = distorsionMatrix.at<double>(0, 0);
  double k2 = distorsionMatrix.at<double>(0, 1);
  double p1 = distorsionMatrix.at<double>(0, 2);
  double p2 = distorsionMatrix.at<double>(0, 3);
  double k3 = distorsionMatrix.at<double>(0, 4);

  //BOOST_FOREACH(const cv::Point2d &p, src)
  for (unsigned int i = 0; i < src.size(); i++)
  {
    const cv::Point2d &p = src[i];
    double x = p.x;
    double y = p.y;
    double xCorrected, yCorrected;
    //Step 1 : apply distortion to the normalized (undistorted) coordinates
    {
      double r2 = x*x + y*y;
      //radial distorsion
      xCorrected = x * (1. + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2);
      yCorrected = y * (1. + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2);
      //tangential distorsion
      //The "Learning OpenCV" book is wrong here !!!
      //False equations from the "Learning OpenCV" book:
      //xCorrected = xCorrected + (2. * p1 * y + p2 * (r2 + 2. * x * x));
      //yCorrected = yCorrected + (p1 * (r2 + 2. * y * y) + 2. * p2 * x);
      //Correct formulae found at: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html
      xCorrected = xCorrected + (2. * p1 * x * y + p2 * (r2 + 2. * x * x));
      yCorrected = yCorrected + (p1 * (r2 + 2. * y * y) + 2. * p2 * x * y);
    }
    //Step 2 : ideal coordinates => actual pixel coordinates
    {
      xCorrected = xCorrected * fx + ux;
      yCorrected = yCorrected * fy + uy;
    }
    dst.push_back(cv::Point2d(xCorrected, yCorrected));
  }
}
void MyDistortPoints(const std::vector<cv::Point2d> & src, std::vector<cv::Point2d> & dst,
                     const cv::Matx33d & cameraMatrix, const cv::Matx<double, 1, 5> & distorsionMatrix)
{
  cv::Mat cameraMatrix2(cameraMatrix);
  cv::Mat distorsionMatrix2(distorsionMatrix);
  return MyDistortPoints(src, dst, cameraMatrix2, distorsionMatrix2);
}
void TestDistort()
{
  cv::Matx33d cameraMatrix = 0.;
  {
    //cameraMatrix Init
    double fx = 1000., fy = 950.;
    double ux = 324., uy = 249.;
    cameraMatrix(0, 0) = fx;
    cameraMatrix(1, 1) = fy;
    cameraMatrix(0, 2) = ux;
    cameraMatrix(1, 2) = uy;
    cameraMatrix(2, 2) = 1.;
  }

  cv::Matx<double, 1, 5> distorsionMatrix;
  {
    //distorsion Init
    const double k1 = 0.5, k2 = -0.5, k3 = 0.000005, p1 = 0.07, p2 = -0.05;
    distorsionMatrix(0, 0) = k1;
    distorsionMatrix(0, 1) = k2;
    distorsionMatrix(0, 2) = p1;
    distorsionMatrix(0, 3) = p2;
    distorsionMatrix(0, 4) = k3;
  }

  std::vector<cv::Point2d> distortedPoints;
  std::vector<cv::Point2d> undistortedPoints;
  std::vector<cv::Point2d> redistortedPoints;
  distortedPoints.push_back(cv::Point2d(324., 249.)); // equal to the optical center
  distortedPoints.push_back(cv::Point2d(340., 200.));
  distortedPoints.push_back(cv::Point2d(785., 345.));
  distortedPoints.push_back(cv::Point2d(0., 0.));

  cv::undistortPoints(distortedPoints, undistortedPoints, cameraMatrix, distorsionMatrix);
  MyDistortPoints(undistortedPoints, redistortedPoints, cameraMatrix, distorsionMatrix);
  cv::undistortPoints(redistortedPoints, undistortedPoints, cameraMatrix, distorsionMatrix);

  //Poor man's unit test ensuring we have an accuracy that is better than 0.001 pixel
  for (unsigned int i = 0; i < undistortedPoints.size(); i++)
  {
    cv::Point2d dist = redistortedPoints[i] - distortedPoints[i];
    double norm = std::sqrt(dist.dot(dist));
    std::cout << "norm = " << norm << std::endl;
    assert(norm < 1E-3);
  }
}
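For reference, the distortion model implemented by MyDistortPoints above (the Bouguet/OpenCV convention, with normalized coordinates $(x, y)$ and $r^2 = x^2 + y^2$) is:

$$x' = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2x^2)$$
$$y' = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2y^2) + 2 p_2 x y$$
$$u = f_x x' + c_x, \qquad v = f_y y' + c_y$$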
undistortPoints is simply the inverse of projectPoints. In my case I wanted to do the following:

Undistort points:
int undistortPoints(const std::vector<cv::Point2f> &uv, std::vector<cv::Point2f> &xy,
                    const cv::Mat &M, const cv::Mat &d)
{
    // Passing M as the new camera matrix P keeps the output in pixel coordinates.
    cv::undistortPoints(uv, xy, M, d, cv::Mat(), M);
    return 0;
}
This will undistort the points to coordinates very similar to those of the original image, but without distortion. This is the default behavior of the cv::undistort() function.

Redistort points:
int distortPoints(const std::vector<cv::Point2f> &xy, std::vector<cv::Point2f> &uv,
                  const cv::Mat &M, const cv::Mat &d)
{
    std::vector<cv::Point2f> xy2;
    std::vector<cv::Point3f> xyz;
    // Back-project to normalized coordinates with a pinhole (no-distortion) model...
    cv::undistortPoints(xy, xy2, M, cv::Mat());
    for (cv::Point2f p : xy2) xyz.push_back(cv::Point3f(p.x, p.y, 1));
    // ...then reproject with the full distortion model.
    cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64FC1);
    cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64FC1);
    cv::projectPoints(xyz, rvec, tvec, M, d, uv);
    return 0;
}
The slightly tricky part here is to first project the points onto the z = 1 plane with a linear (distortion-free) camera model. After that, you project them with the original camera model.

I found these useful; I hope they work for you too.
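For completeness, a quick round-trip check of the two wrappers above (the intrinsics and distortion values here are hypothetical example numbers):

cv::Mat M = (cv::Mat_<double>(3, 3) << 1000., 0., 320., 0., 1000., 240., 0., 0., 1.);
cv::Mat d = (cv::Mat_<double>(1, 5) << 0.1, -0.05, 0.001, 0.001, 0.);

std::vector<cv::Point2f> uv = { {320.f, 240.f}, {100.f, 80.f} };
std::vector<cv::Point2f> xy, uv2;
undistortPoints(uv, xy, M, d); // to undistorted pixel coordinates
distortPoints(xy, uv2, M, d);  // back to the original distorted pixels
// uv2 should match uv up to numerical precision.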