Correct way to undistort fisheye images

Posted 2020-07-26 13:59

Question:

I've written a sample program that removes lens distortion from chessboard-like fisheye images, and it works OK; here's the screenshot

Next I wanted to use the fisheye chessboard pattern (right image above) to remove the same lens distortion from a real image, but had no luck - the curvature still remains in the undistorted image, so I got this instead

The code:

void getObjectPoints(cv::Size, std::vector<std::vector<cv::Point3f>>&);

bool getImagePoints(cv::Mat&, cv::Size&, std::vector<std::vector<cv::Point2f>>&);

void runCalibration(cv::Mat& image, cv::Matx33d&, cv::Vec4d&);

cv::Mat removeFisheyeLensDist(cv::Mat&, cv::Matx33d&, cv::Vec4d&);

// ... definitions    
void getObjectPoints(cv::Size patternSize, std::vector<std::vector<cv::Point3f>>& objectPoints)
{
    const float squareSize = 0.0015f;
    std::vector<cv::Point3f> knownBoardPositions;
    for (int i = 0; i < patternSize.height; ++i)
    {
        for (int j = 0; j < patternSize.width; ++j)
        {
            knownBoardPositions.push_back(cv::Point3f(j*squareSize, i*squareSize, 0.0f));
        }
    }
    if (knownBoardPositions.size() > 0)
        objectPoints.push_back(knownBoardPositions);
}

bool getImagePoints(cv::Mat& image, cv::Size& patternSize, std::vector<std::vector<cv::Point2f>>& imagePoints)
{
    bool patternFound = false;
    std::vector<cv::Point2f> corners;
    // Try a range of board sizes until findChessboardCorners succeeds;
    // give up after the last size instead of looping forever.
    for (int i = 7; i <= 30 && !patternFound; ++i)
    {
        int w = i;
        int h = i - 2;

        patternFound = cv::findChessboardCorners(image, cv::Size(w, h), corners,
                                          cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE);
        if (patternFound)
        {
            patternSize.width = w;
            patternSize.height = h;
            imagePoints.push_back(corners);
        }
    }

    return patternFound;
}

void runCalibration(cv::Mat& image, cv::Matx33d& K, cv::Vec4d& D)
{
    std::vector< std::vector<cv::Point2f> > imagePoints;
    std::vector< std::vector<cv::Point3f> > objectPoints;
    cv::Size patternSize;
    bool patternFound = getImagePoints(image, patternSize, imagePoints);

    if (patternFound)
    {
        getObjectPoints(patternSize, objectPoints);

        std::vector<cv::Vec3d> rvecs;
        std::vector<cv::Vec3d> tvecs;
        cv::fisheye::calibrate(
            objectPoints,
            imagePoints,
            image.size(),
            K,
            D,
            rvecs,
            tvecs,
            cv::fisheye::CALIB_FIX_SKEW |   cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC
        |   cv::fisheye::CALIB_FIX_K1   |   cv::fisheye::CALIB_FIX_K2
        |   cv::fisheye::CALIB_FIX_K3   |   cv::fisheye::CALIB_FIX_K4
//            cv::TermCriteria(3, 20, 1e-6)
        );
    }
}

cv::Mat removeFisheyeLensDist(cv::Mat& distorted, cv::Matx33d& K, cv::Vec4d& D)
{
    cv::Mat undistorted;
    cv::Matx33d newK = K;
    cv::fisheye::undistortImage(distorted, undistorted, K, D, newK);
    return undistorted;
}

int main(int argc, char* argv[])
{
    cv::Mat chessBoardPattern = //..
    cv::Mat distortedImage = //...
    cv::imshow("distorted", distortedImage);

    cv::Matx33d K;  cv::Vec4d D;
    runCalibration(chessBoardPattern, K, D);
    cv::Mat undistortedImage = removeFisheyeLensDist(distortedImage, K, D);
    cv::imshow("undistorted", undistortedImage);
    cv::waitKey(0);
    return 0;
}

I think the image with the tower has very similar curvature to the chessboard on the right, so the same pattern should have worked for the tower image...

What am I doing wrong here? And why isn't it fixing the lens distortion for the tower image?

Answer 1:

Unfortunately your assumption

if images have the same curvature then the camera parameters should be approximately the same, and so I can undistort a fisheye image with a chessboard pattern

is wrong. Even cameras of the same model will differ in focal length, lens geometry, lens placement, etc., and need to be calibrated individually. Moreover, these parameters may change while the camera is in use due to heating, vibration and other effects (although in practice this is usually ignored).
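
For reference, proper per-camera calibration uses many chessboard views captured by the camera you actually want to undistort. Below is a minimal sketch of that workflow; the file names, the 9x6 board size, the 25 mm square size and the number of views are assumptions for illustration, not values from your setup:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    const cv::Size boardSize(9, 6);    // inner corners per row/column (assumed)
    const float squareSize = 0.025f;   // square edge length in metres (assumed)

    // One planar board model shared by all views.
    std::vector<cv::Point3f> board;
    for (int i = 0; i < boardSize.height; ++i)
        for (int j = 0; j < boardSize.width; ++j)
            board.push_back(cv::Point3f(j * squareSize, i * squareSize, 0.0f));

    std::vector<std::vector<cv::Point2f>> imagePoints;
    std::vector<std::vector<cv::Point3f>> objectPoints;
    cv::Size imageSize;

    for (int n = 0; n < 20; ++n) // 20 board views from the same camera (hypothetical files)
    {
        cv::Mat view = cv::imread(cv::format("board_%02d.jpg", n));
        if (view.empty())
            continue;
        imageSize = view.size();

        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(view, boardSize, corners,
                cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE))
        {
            imagePoints.push_back(corners);
            objectPoints.push_back(board);
        }
    }

    cv::Matx33d K;
    cv::Vec4d D;
    std::vector<cv::Vec3d> rvecs, tvecs;
    // The distortion coefficients are left free here, so k1..k4 are actually estimated.
    cv::fisheye::calibrate(objectPoints, imagePoints, imageSize, K, D, rvecs, tvecs,
                           cv::fisheye::CALIB_FIX_SKEW | cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC);

    std::cout << "K = " << cv::Mat(K) << "\nD = " << cv::Mat(D) << std::endl;
    return 0;
}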

To undistort your image without access to the camera, all you can do is pick a simple fisheye camera model and estimate its parameters manually, adjusting them until straight lines look straight (for example, with a GUI that has a slider for each parameter). This can be tedious, but I am not aware of a better option. Some image editing software also has tools for this (if I recall correctly, GIMP does).
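
If you go the manual route, a minimal sketch of such a slider GUI with OpenCV trackbars could look like the following. The window name, the slider-to-coefficient mapping, the guessed camera matrix (focal length taken as half the image width, principal point at the image centre) and the input file name are all assumptions you would have to tune:

#include <opencv2/opencv.hpp>

cv::Mat g_distorted;                      // the image to undistort
int g_sliders[4] = {100, 100, 100, 100};  // slider positions; 100 maps to 0.0

static void onChange(int, void*)
{
    // Map each slider from [0, 200] to a distortion coefficient in [-1, 1].
    cv::Vec4d D;
    for (int i = 0; i < 4; ++i)
        D[i] = (g_sliders[i] - 100) / 100.0;

    // Rough guess for the camera matrix: principal point at the centre,
    // focal length of half the image width. These are not calibrated values.
    double f = 0.5 * g_distorted.cols;
    cv::Matx33d K(f, 0, 0.5 * g_distorted.cols,
                  0, f, 0.5 * g_distorted.rows,
                  0, 0, 1);

    cv::Mat undistorted;
    cv::fisheye::undistortImage(g_distorted, undistorted, K, D, K);
    cv::imshow("manual undistort", undistorted);
}

int main()
{
    g_distorted = cv::imread("tower.jpg"); // hypothetical input file
    if (g_distorted.empty())
        return 1;

    cv::namedWindow("manual undistort");
    const char* names[4] = {"k1", "k2", "k3", "k4"};
    for (int i = 0; i < 4; ++i)
        cv::createTrackbar(names[i], "manual undistort", &g_sliders[i], 200, onChange);

    onChange(0, nullptr);   // show the initial result (all coefficients zero)
    cv::waitKey(0);
    return 0;
}

Adjust the sliders until straight edges in the scene (the tower outline, building edges) come out straight; the K and D you end up with can then be reused for other images from the same camera.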



Tags: c++ opencv