I am working through an example of feature detection in OpenCV, shown below. It is giving me the following error:
'module' object has no attribute 'drawMatches'
I have checked the OpenCV docs and am not sure why I'm getting this error. Does anyone know why?
import numpy as np
import cv2
import matplotlib.pyplot as plt
img1 = cv2.imread('box.png',0) # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage
# Initiate ORB detector
orb = cv2.ORB()
# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des1,des2)
# Draw first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
plt.imshow(img3),plt.show()
Error:
Traceback (most recent call last):
File "match.py", line 22, in <module>
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
AttributeError: 'module' object has no attribute 'drawMatches'
The drawMatches function is not part of the Python interface. As you can see in the docs, it is only defined for C++ at the moment.

Excerpt from the docs:
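// From the 2.4 feature-drawing docs (C++ only at the time), approximately:
C++: void drawMatches(const Mat& img1, const vector<KeyPoint>& keypoints1,
                      const Mat& img2, const vector<KeyPoint>& keypoints2,
                      const vector<DMatch>& matches1to2, Mat& outImg,
                      const Scalar& matchColor=Scalar::all(-1),
                      const Scalar& singlePointColor=Scalar::all(-1),
                      const vector<char>& matchesMask=vector<char>(),
                      int flags=DrawMatchesFlags::DEFAULT )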
If the function had a Python interface, you would find something like this:
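Python: cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) -> outImg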
EDIT
There actually was a commit that introduced this function 5 months ago. However, it is not (yet) in the official documentation.
Make sure you are using the newest OpenCV version (2.4.7). For the sake of completeness, the function's interface for OpenCV 3.0.0 will look like this:
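cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) -> outImg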
I know this question has an accepted answer that is correct, but if you are using OpenCV 2.4.8 and not 3.0(-dev), a workaround could be to use some functions from the included samples found in opencv\sources\samples\python2\find_obj.
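For example (a sketch, assuming the sample's find_obj.py sits next to your script so its filter_matches and explore_match helpers are importable, and reusing the box images from the question):

import cv2
from find_obj import filter_matches, explore_match

img1 = cv2.imread('box.png', 0)           # queryImage, grayscale
img2 = cv2.imread('box_in_scene.png', 0)  # trainImage, grayscale

# Detect keypoints and compute binary descriptors with ORB
orb = cv2.ORB(1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# filter_matches expects kNN output (k=2), on which it runs a ratio test
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = bf.knnMatch(des1, des2, k=2)
p1, p2, kp_pairs = filter_matches(kp1, kp2, matches)

# explore_match draws the side-by-side match visualization in a window
explore_match('find_obj', img1, img2, kp_pairs)
cv2.waitKey()
cv2.destroyAllWindows()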
This is the output image:
I am late to the party as well, but I installed OpenCV 2.4.9 for Mac OS X and the drawMatches function doesn't exist in my distribution. I've also tried the second approach with find_obj and that didn't work for me either. With that, I decided to write my own implementation that mimics drawMatches to the best of my ability, and this is what I've produced.

I've provided my own images where one is of a camera man, and the other one is the same image but rotated by 55 degrees counterclockwise.
The basics of what I wrote: I allocate an output RGB image where the number of rows is the maximum of the two images, to accommodate placing both of the images in the output image, and the number of columns is simply the sum of both images' columns. Be advised that I assume both images are grayscale.
I place each image in its corresponding spot, then run through a loop of all of the matched keypoints. I extract which keypoints matched between the two images, then extract their (x,y) coordinates. I draw circles at each of the detected locations, then draw a line connecting these circles together.

Bear in mind that the detected keypoint in the second image is with respect to its own coordinate system. If you want to place it in the final output image, you need to offset the column coordinate by the number of columns from the first image so that the column coordinate is with respect to the coordinate system of the output image.
Without further ado:
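A sketch of that function, assuming grayscale inputs and the OpenCV 2.4.x-era Python API:

import numpy as np
import cv2

def drawMatches(img1, kp1, img2, kp2, matches):
    # Stand-in for cv2.drawMatches for builds whose Python bindings lack it
    rows1, cols1 = img1.shape[:2]
    rows2, cols2 = img2.shape[:2]

    # Output canvas: as tall as the taller image, as wide as both combined
    out = np.zeros((max(rows1, rows2), cols1 + cols2, 3), dtype='uint8')

    # Place the images side by side, replicating grayscale into 3 channels
    out[:rows1, :cols1] = np.dstack([img1, img1, img1])
    out[:rows2, cols1:] = np.dstack([img2, img2, img2])

    for mat in matches:
        # Indices of the matching keypoints in each image
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # (x,y) coordinates of each matched keypoint
        (x1, y1) = kp1[img1_idx].pt
        (x2, y2) = kp2[img2_idx].pt

        # Circles at both keypoints; the second image's column coordinate is
        # offset by cols1 to land in the output image's coordinate system
        cv2.circle(out, (int(x1), int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2) + cols1, int(y2)), 4, (255, 0, 0), 1)

        # Line connecting the matched pair
        cv2.line(out, (int(x1), int(y1)), (int(x2) + cols1, int(y2)),
                 (255, 0, 0), 1)

    cv2.imshow('Matched Features', out)
    cv2.waitKey(0)
    cv2.destroyWindow('Matched Features')
    return out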
To illustrate that this works, here are the two images that I used:
I used OpenCV's ORB detector to detect the keypoints, and used the normalized Hamming distance as the distance measure for similarity as this is a binary descriptor. As such:
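Along these lines (the filenames are placeholders for wherever you saved the camera man images; drawMatches is the function defined above):

import cv2

img1 = cv2.imread('cameraman.png', 0)        # original image, grayscale
img2 = cv2.imread('cameraman_rot55.png', 0)  # rotated 55 degrees CCW, grayscale

# Up to 1000 keypoints with a pyramid scale factor of 1.2
orb = cv2.ORB(1000, 1.2)

# Detect keypoints and compute binary descriptors
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matcher with Hamming distance, suited to binary descriptors
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)

# Sort so the best (smallest-distance) matches come first, draw the top 10
matches = sorted(matches, key=lambda m: m.distance)
drawMatches(img1, kp1, img2, kp2, matches[:10])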
This is the image I get:
To use with knnMatch from cv2.BFMatcher
I'd like to note that the above code only works if you assume that the matches appear in a 1D list. However, if you decide to use the knnMatch method from cv2.BFMatcher, for example, what is returned is a list of lists. Specifically, given the descriptors in img1 called des1 and the descriptors in img2 called des2, each element in the list returned from knnMatch is another list of k matches from des2 which are the closest to each descriptor in des1. Therefore, the first element from the output of knnMatch is a list of k matches from des2 which were the closest to the first descriptor found in des1, the second element is a list of k matches from des2 which were the closest to the second descriptor found in des1, and so on.

To make the most sense of knnMatch, you must limit the total number of neighbours to match to k=2. The reason is that you want to use at least two matched points to verify the quality of the match, and if the quality is good enough, you'll want to use these to draw your matches and show them on the screen. You can use a very simple ratio test (credit goes to David Lowe) to ensure that the distance from the first matched point from des2 to the descriptor in des1 is some distance away in comparison to the second matched point from des2. Therefore, to turn what is returned from knnMatch into what is required by the code I wrote above, iterate through the matches, apply the ratio test, and check if it passes. If it does, add the first matched keypoint to a new list.

Assuming that you created all of the variables like you did before declaring the BFMatcher instance, you'd now do this to adapt the knnMatch method for using drawMatches:
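A sketch, using Lowe's ratio test with an assumed threshold of 0.75:

# kNN matching with k=2 instead of cross-checked one-to-one matching
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = bf.knnMatch(des1, des2, k=2)

# Lowe's ratio test: keep a match only if its distance is clearly smaller
# than the second-best distance for the same query descriptor
good = []
for m_n in matches:
    if len(m_n) != 2:
        continue
    (m, n) = m_n
    if m.distance < 0.75 * n.distance:
        good.append(m)

# good is now a flat 1D list of DMatch objects, as drawMatches expects
drawMatches(img1, kp1, img2, kp2, good)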
I want to attribute the above modifications to user @ryanmeasel; the answer in which these modifications were found is his post: OpenCV Python : No drawMatchesknn function.