I want to rotate an image at several angles sequentially. I do that using cv2.getRotationMatrix2D and cv2.warpAffine. Given a pair of pixel coordinates [x, y], where x is the column and y is the row (in this case), I want to find their new coordinates in the rotated images.
I used the following slightly modified code, courtesy of http://www.pyimagesearch.com/2017/01/02/rotate-images-correctly-with-opencv-and-python/, and the explanation from the Affine Transformation tutorial (http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/warp_affine/warp_affine.html) to try to map the points into the rotated image.
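Concretely, following the Affine Transformation tutorial, the 2x3 matrix M maps a source point (x, y) to
x_new = M[0,0]*x + M[0,1]*y + M[0,2]
y_new = M[1,0]*x + M[1,1]*y + M[1,2]
and that is what rotated_coord below computes using homogeneous coordinates.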
The problem is that either my mapping or my rotation is wrong, because the calculated transformed coordinates are wrong. (I computed the corner coordinates manually as a simple verification.)
CODE:
import numpy as np
import cv2

def rotate_bound(image, angle):
    # grab the dimensions of the image and then determine the
    # center
    (h, w) = image.shape[:2]
    (cX, cY) = ((w - 1) // 2.0, (h - 1) // 2.0)
    # grab the rotation matrix (applying the negative of the
    # angle to rotate clockwise), then grab the sine and cosine
    # (i.e., the rotation components of the matrix)
    M = cv2.getRotationMatrix2D((cX, cY), -angle, 1.0)
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])
    # compute the new bounding dimensions of the image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))
    print(nW, nH)
    # adjust the rotation matrix to take into account the translation
    M[0, 2] += ((nW - 1) / 2.0) - cX
    M[1, 2] += ((nH - 1) / 2.0) - cY
    # perform the actual rotation and return the matrix and the image
    return M, cv2.warpAffine(image, M, (nW, nH))
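# Quick sanity check (my own arithmetic, assuming a 512x512 Lenna.png and
# angle=90): cos = |cos(-90)| = 0 and sin = |sin(-90)| = 1, so the new
# bounding box is nW = nH = 512, i.e. the canvas size does not change.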
# function that calculates the updated locations of the coordinates
# after rotation
def rotated_coord(points, M):
    points = np.array(points)
    # append a column of ones so the 2x3 affine matrix can be applied
    ones = np.ones(shape=(len(points), 1))
    points_ones = np.concatenate((points, ones), axis=1)
    # each output row is M . [x, y, 1]^T for the corresponding input point
    transformed_pts = M.dot(points_ones.T).T
    return transformed_pts
# READ IMAGE & CALL THE FUNCTIONS
img = cv2.imread("Lenna.png")
points = np.array([[511, 511]])
# rotate by an angle of 90 degrees, for example
M, rotated = rotate_bound(img, 90)
# find out the new locations of the points
transformed_pts = rotated_coord(points, M)
If I have, for example, the coordinates [511, 511], I will obtain [-0.5, 511.5] ([col, row]) when I expect to obtain [0, 511].
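For reference, this is the manual corner check I mean, a minimal sketch assuming Lenna.png is 512x512: under a pure 90 degree clockwise rotation, a pixel (x, y) of a WxH image should land at (H - 1 - y, x).
# manual check for the 90-degree clockwise case (assumes a 512x512 image)
h = w = 512
x, y = 511, 511                  # bottom-right pixel as [col, row]
x_new, y_new = (h - 1) - y, x    # 90 deg clockwise: (x, y) -> (h - 1 - y, x)
print(x_new, y_new)              # 0 511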
If I instead use the original w // 2 and h // 2 centre, a black border is added to the image and my updated rotated coordinates are off again.
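Just for reference, the arithmetic of the two centre variants I tried, assuming w = 512:
w = 512
print((w - 1) // 2.0)   # 255.0  (the version in the code above)
print(w // 2)           # 256    (the original pyimagesearch version)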
Question: How can I find the correct location of a pair of pixel coordinates in an image rotated by a certain angle, using Python?