Explain numbers from OpenCV matchShapes()

Posted 2020-07-13 11:22

Question:

I am developing an app where I compare two images using matchShapes() of OpenCV.

I implemented the method in Objective-C; the code is below:

- (void) someMethod:(UIImage *)image :(UIImage *)temp {

RNG rng(12345);

cv::Mat src_base, hsv_base;
cv::Mat src_test1, hsv_test1;

src_base = [self cvMatWithImage:image];
src_test1 = [self cvMatWithImage:temp];

int thresh=150;
double ans = 0;

Mat imageresult1, imageresult2;

cv::cvtColor(src_base, hsv_base, cv::COLOR_BGR2HSV);
cv::cvtColor(src_test1, hsv_test1, cv::COLOR_BGR2HSV);

std::vector<std::vector<cv::Point>> contours1, contours2;
std::vector<Vec4i> hierarchy1, hierarchy2;

Canny(hsv_base, imageresult1, thresh, thresh*2);
Canny(hsv_test1, imageresult2, thresh, thresh*2);

findContours(imageresult1, contours1, hierarchy1, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0,0));
for(int i=0;i<contours1.size();i++)
{
    Scalar color=Scalar(rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255));
    drawContours(imageresult1,contours1,i,color,1,8,hierarchy1,0,cv::Point());
}

findContours(imageresult2, contours2, hierarchy2, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0,0));
for(int i=0;i<contours2.size();i++)
{
    Scalar color=Scalar(rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255));
    drawContours(imageresult2,contours2,i,color,1,8,hierarchy2,0,cv::Point());
}

// Bound the loop by the smaller contour count: indexing contours2 with
// contours1's size can read out of range if the two images produce a
// different number of contours.
size_t n = std::min(contours1.size(), contours2.size());
for (size_t i = 0; i < n; i++)
{
    ans = matchShapes(contours1[i], contours2[i], CV_CONTOURS_MATCH_I1, 0);
    std::cout << ans << " ";
}

}

I got these results but do not know what exactly the numbers mean: 0 0 0.81946 0.816337 0.622353 0.634221 0

Answer 1:

I think this blog post should give a lot more insight into how matchShapes works.

You obviously already know what the input parameters are, but for anyone finding this who doesn't:

double matchShapes(InputArray contour1, InputArray contour2, int method, double parameter)

The output is a metric where:

The lower the result, the better the match. The metric is computed from the Hu moment values of each contour; the different measurement methods are explained in the docs.
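
For concreteness, here is a minimal C++ sketch of calling matchShapes directly. The file names a.png/b.png and the largestContour helper are placeholders of my own, not from the original post; the shapes are assumed to be light on a dark background.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>

// Return the largest contour in a binary image, so the comparison uses
// the main shape rather than noise.
static std::vector<cv::Point> largestContour(const cv::Mat& bin)
{
    cv::Mat tmp = bin.clone(); // findContours may modify its input on older OpenCV
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(tmp, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    CV_Assert(!contours.empty());
    return *std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });
}

int main()
{
    // "a.png" and "b.png" are placeholder file names.
    cv::Mat a = cv::imread("a.png", cv::IMREAD_GRAYSCALE);
    cv::Mat b = cv::imread("b.png", cv::IMREAD_GRAYSCALE);
    CV_Assert(!a.empty() && !b.empty());
    cv::threshold(a, a, 128, 255, cv::THRESH_BINARY);
    cv::threshold(b, b, 128, 255, cv::THRESH_BINARY);

    std::vector<cv::Point> ca = largestContour(a), cb = largestContour(b);

    // 0.0 means identical Hu moments; larger values mean a worse match.
    std::cout << "A vs A: " << cv::matchShapes(ca, ca, cv::CONTOURS_MATCH_I1, 0) << "\n";
    std::cout << "A vs B: " << cv::matchShapes(ca, cb, cv::CONTOURS_MATCH_I1, 0) << "\n";
    return 0;
}

Matching a contour against itself should print exactly 0.0, which makes a quick sanity check for the pipeline.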

The findings from the blog post mentioned above are as follows (max = 1, min = 0):

I got the following results:

    Matching Image A with itself = 0.0
    Matching Image A with Image B = 0.001946
    Matching Image A with Image C = 0.326911

See, even image rotation doesn't affect this comparison much.

This basically shows that for your results:

  1. The first two are great: you got a complete match at 0.
  2. The next two (0.81946, 0.816337) are quite a poor match.
  3. The next two (0.622353, 0.634221) are a moderate mismatch, at roughly 62% dissimilar.
  4. The last one is a complete match.
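
One caveat of my own, not from the original post: these numbers come from pairing contours1[i] with contours2[i], which is only meaningful if both images happen to yield their contours in the same order. A hedged sketch of a more defensive comparison, keeping each contour's best score against every contour in the other image:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <limits>

// For each contour in `a`, print the best (lowest) matchShapes score
// against every contour in `b`, instead of pairing by index.
static void bestMatches(const std::vector<std::vector<cv::Point>>& a,
                        const std::vector<std::vector<cv::Point>>& b)
{
    for (size_t i = 0; i < a.size(); i++) {
        double best = std::numeric_limits<double>::max();
        for (size_t j = 0; j < b.size(); j++)
            best = std::min(best,
                cv::matchShapes(a[i], b[j], cv::CONTOURS_MATCH_I1, 0));
        std::cout << "contour " << i << " best match: " << best << "\n";
    }
}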

If my computer vision learnings have taught me anything, it is to always be sceptical of a complete match unless you are 100% sure you are comparing the same images.

Edit1: I think it might also be rotationally invariant, so in your case you might have three very similar drawn lines that have been rotated to the same orientation (i.e. horizontal) and compared; a small demonstration of the rotation invariance is sketched below.
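
A minimal sketch checking that claim, using an arbitrary triangle of my own (not the asker's images): rotating the shape should leave the matchShapes score near zero, because Hu moments are rotation-invariant.

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Draw a filled triangle on a blank canvas.
    cv::Mat img = cv::Mat::zeros(400, 400, CV_8UC1);
    std::vector<cv::Point> tri = { {100, 300}, {200, 100}, {300, 300} };
    cv::fillConvexPoly(img, tri, cv::Scalar(255));

    // Rotate the whole image by 45 degrees around its centre;
    // nearest-neighbour interpolation keeps the image strictly binary.
    cv::Mat rot = cv::getRotationMatrix2D(cv::Point2f(200, 200), 45, 1.0);
    cv::Mat rotated;
    cv::warpAffine(img, rotated, rot, img.size(), cv::INTER_NEAREST);

    auto contourOf = [](const cv::Mat& m) {
        cv::Mat tmp = m.clone(); // findContours may modify its input on older OpenCV
        std::vector<std::vector<cv::Point>> cs;
        cv::findContours(tmp, cs, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        return cs.front();
    };

    // Expect a value very close to 0 despite the 45-degree rotation.
    std::cout << cv::matchShapes(contourOf(img), contourOf(rotated),
                                 cv::CONTOURS_MATCH_I1, 0) << "\n";
    return 0;
}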