OpenCV on iOS: False matching with SurfFeatureDetector

Posted 2020-07-30 00:28

Question:

I am trying to use OpenCV's feature detection tools in order to decide whether a small sample image exists in a larger scene image or not.
I used the code from here as a reference (without the homography part).

UIImage *sceneImage, *objectImage1;
cv::Mat sceneImageMat, objectImageMat1;
cv::vector<cv::KeyPoint> sceneKeypoints, objectKeypoints1;
cv::Mat sceneDescriptors, objectDescriptors1;
cv::SurfFeatureDetector *surfDetector;
cv::SurfDescriptorExtractor surfExtractor;
cv::FlannBasedMatcher flannMatcher;
cv::vector<cv::DMatch> matches;
int minHessian;
double minDistMultiplier;

minHessian = 400;
minDistMultiplier = 3;
surfDetector = new cv::SurfFeatureDetector(minHessian);

sceneImage = [UIImage imageNamed:@"twitter_scene.png"];
objectImage1 = [UIImage imageNamed:@"twitter.png"];

sceneImageMat = cv::Mat(sceneImage.size.height, sceneImage.size.width, CV_8UC1);
objectImageMat1 = cv::Mat(objectImage1.size.height, objectImage1.size.width, CV_8UC1);

cv::cvtColor([sceneImage CVMat], sceneImageMat, CV_RGB2GRAY);
cv::cvtColor([objectImage1 CVMat], objectImageMat1, CV_RGB2GRAY);

if (!sceneImageMat.data || !objectImageMat1.data) {
    NSLog(@"NO DATA");
}

// Detect SURF keypoints in both images.
surfDetector->detect(sceneImageMat, sceneKeypoints);
surfDetector->detect(objectImageMat1, objectKeypoints1);

// Compute SURF descriptors at the detected keypoints.
surfExtractor.compute(sceneImageMat, sceneKeypoints, sceneDescriptors);
surfExtractor.compute(objectImageMat1, objectKeypoints1, objectDescriptors1);

// Match each object descriptor to its nearest scene descriptor.
flannMatcher.match(objectDescriptors1, sceneDescriptors, matches);

// Find the smallest and largest descriptor distances among the matches.
double max_dist = 0; double min_dist = 100;

for( int i = 0; i < objectDescriptors1.rows; i++ )
{ 
    double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}

// Keep only matches whose distance is within minDistMultiplier of the best one.
cv::vector<cv::DMatch> goodMatches;
for( int i = 0; i < objectDescriptors1.rows; i++ )
{ 
    if( matches[i].distance < minDistMultiplier*min_dist )
    { 
        goodMatches.push_back( matches[i]);
    }
}
NSLog(@"Good matches found: %lu", goodMatches.size());

cv::Mat imageMatches;
cv::drawMatches(objectImageMat1, objectKeypoints1, sceneImageMat, sceneKeypoints, goodMatches, imageMatches, cv::Scalar::all(-1), cv::Scalar::all(-1),
                cv::vector<char>(), cv::DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

// Collect the matched point coordinates for the homography.
cv::vector<cv::Point2f> obj, scn;
for( int i = 0; i < goodMatches.size(); i++ )
{
    //-- Get the keypoints from the good matches
    obj.push_back( objectKeypoints1[ goodMatches[i].queryIdx ].pt );
    scn.push_back( sceneKeypoints[ goodMatches[i].trainIdx ].pt );
}

// Fit a perspective transform with RANSAC; outputMask flags the inlier matches.
cv::vector<uchar> outputMask;
cv::Mat homography = cv::findHomography(obj, scn, CV_RANSAC, 3, outputMask);
int inlierCounter = 0;
for (int i = 0; i < outputMask.size(); i++) {
    if (outputMask[i] == 1) {
        inlierCounter++;
    }
}
NSLog(@"Inliers percentage: %d", (int)(((float)inlierCounter / (float)outputMask.size()) * 100));

// Corners of the object image, to be projected into the scene.
cv::vector<cv::Point2f> objCorners(4);
objCorners[0] = cv::Point(0,0);
objCorners[1] = cv::Point( objectImageMat1.cols, 0 );
objCorners[2] = cv::Point( objectImageMat1.cols, objectImageMat1.rows );
objCorners[3] = cv::Point( 0, objectImageMat1.rows );

cv::vector<cv::Point2f> scnCorners(4);

cv::perspectiveTransform(objCorners, scnCorners, homography);

// Draw the projected outline; x is offset by the object's width because
// drawMatches places the scene image to the right of the object image.
cv::line( imageMatches, scnCorners[0] + cv::Point2f( objectImageMat1.cols, 0), scnCorners[1] + cv::Point2f( objectImageMat1.cols, 0), cv::Scalar(0, 255, 0), 4);
cv::line( imageMatches, scnCorners[1] + cv::Point2f( objectImageMat1.cols, 0), scnCorners[2] + cv::Point2f( objectImageMat1.cols, 0), cv::Scalar( 0, 255, 0), 4);
cv::line( imageMatches, scnCorners[2] + cv::Point2f( objectImageMat1.cols, 0), scnCorners[3] + cv::Point2f( objectImageMat1.cols, 0), cv::Scalar( 0, 255, 0), 4);
cv::line( imageMatches, scnCorners[3] + cv::Point2f( objectImageMat1.cols, 0), scnCorners[0] + cv::Point2f( objectImageMat1.cols, 0), cv::Scalar( 0, 255, 0), 4);

[self.mainImageView setImage:[UIImage imageWithCVMat:imageMatches]];

This works, but I keep getting a significant number of matches, even when the small image is not part of the larger one.
Here's an example of a good output:

And here's an example of a bad output:

Both outputs are the result of the same code; the only difference is the small sample image.
With results like these, it is impossible for me to know when a sample image is NOT in the larger image.
While doing my research, I found this Stack Overflow question. I followed the answer given there and tried the steps suggested in the "OpenCV 2 Computer Vision Application Programming Cookbook", but I wasn't able to make it work with images of different sizes (it seems to be a limitation of the cv::findFundamentalMat function).

What am I missing? Is there a way to use SurfFeatureDetector and FlannBasedMatcher to know when one sample image is a part of a larger image, and another sample image isn't? Is there a different method which is better for that purpose?

UPDATE:
I updated the code above to include the complete function I use, including the attempt to actually draw the homography. Plus, here are 3 images - 1 scene, and two small objects I'm trying to find in the scene. I'm getting better inlier percentages for the paw icon than for the Twitter icon, even though the Twitter icon is the one actually IN the scene. Plus, the homography is not drawn for some reason:
Twitter Icon
Paw Icon
Scene

Answer 1:

Your matcher will always match every point from the smaller descriptor list to one of the points in the larger list. You then have to work out for yourself which of these matches make sense and which do not. You can do this by discarding every match that exceeds a maximum allowed descriptor distance, or you can try to find a transformation matrix (e.g. with findHomography) and check whether enough matches correspond to it.
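Here is a minimal sketch of both checks combined, assuming OpenCV 2.x and SURF descriptors. The helper name objectFoundInScene and all thresholds (the 0.25 distance cutoff, the minimum of 8 matches, the 0.5 inlier ratio) are illustrative values of mine, not something from the original post, so tune them on your own images:

#include <opencv2/opencv.hpp>
#include <vector>

// Decide "object present" by combining an absolute descriptor-distance
// cutoff with a RANSAC inlier-ratio check (all thresholds are guesses).
bool objectFoundInScene(const std::vector<cv::KeyPoint> &objectKeypoints,
                        const std::vector<cv::KeyPoint> &sceneKeypoints,
                        const std::vector<cv::DMatch> &matches)
{
    // 1. Keep only matches below an absolute distance cutoff
    //    (SURF descriptor distances are typically well below 1.0).
    std::vector<cv::DMatch> good;
    for (size_t i = 0; i < matches.size(); i++) {
        if (matches[i].distance < 0.25f) {
            good.push_back(matches[i]);
        }
    }
    if (good.size() < 8) {
        return false; // too few matches to trust a homography
    }

    // 2. Fit a homography; the mask marks the RANSAC inliers.
    std::vector<cv::Point2f> objPts, scnPts;
    for (size_t i = 0; i < good.size(); i++) {
        objPts.push_back(objectKeypoints[good[i].queryIdx].pt);
        scnPts.push_back(sceneKeypoints[good[i].trainIdx].pt);
    }
    std::vector<uchar> mask;
    cv::findHomography(objPts, scnPts, CV_RANSAC, 3, mask);

    int inliers = 0;
    for (size_t i = 0; i < mask.size(); i++) {
        if (mask[i]) inliers++;
    }

    // 3. Accept only if a majority of the filtered matches agree
    //    with a single perspective transform.
    return inliers > 0.5 * (double)good.size();
}

Comparing an inlier ratio instead of a raw match count is what makes the absent-object case distinguishable: random matches rarely agree on a single homography.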



Answer 2:

It's an old post, but this comes from a similar assignment I had to do for class. A way to remove the bad output is to check that most of the matching lines are (relatively) parallel to each other, and to remove matches that point in the wrong direction.
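As an illustration of that idea (my own sketch, not code from the answer): compute the direction of the line each match would draw in the side-by-side drawMatches layout, take the median angle as the dominant direction, and drop matches that deviate from it by more than a fixed tolerance. The helper name filterByDirection and the tolerance value are hypothetical:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Keep only matches whose drawn line is roughly parallel to the
// dominant (median) match direction; the rest point "the wrong way".
std::vector<cv::DMatch> filterByDirection(
    const std::vector<cv::KeyPoint> &objectKeypoints,
    const std::vector<cv::KeyPoint> &sceneKeypoints,
    const std::vector<cv::DMatch> &matches,
    float objectImageWidth,   // drawMatches puts the scene image this far to the right
    float maxDeviation)       // tolerance in radians, e.g. 0.15f (~8.6 degrees)
{
    std::vector<cv::DMatch> kept;
    if (matches.empty()) {
        return kept;
    }

    // Angle of the line each match would draw in the combined image.
    std::vector<float> angles(matches.size());
    for (size_t i = 0; i < matches.size(); i++) {
        cv::Point2f from = objectKeypoints[matches[i].queryIdx].pt;
        cv::Point2f to = sceneKeypoints[matches[i].trainIdx].pt;
        to.x += objectImageWidth; // shift scene points into the combined layout
        angles[i] = std::atan2(to.y - from.y, to.x - from.x);
    }

    // Median angle as the dominant direction.
    std::vector<float> sorted = angles;
    std::sort(sorted.begin(), sorted.end());
    float median = sorted[sorted.size() / 2];

    // Discard matches that deviate too far from the dominant direction.
    for (size_t i = 0; i < matches.size(); i++) {
        if (std::fabs(angles[i] - median) < maxDeviation) {
            kept.push_back(matches[i]);
        }
    }
    return kept;
}

Note that this heuristic assumes the object appears in the scene with little rotation or scale change; a rotated instance produces match lines that are correct but not parallel, so the homography-inlier check above is the more general test.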