Find the [x, y] rotated coordinate locations in an image


Question:

I want to rotate an image at several angles sequentially. I do that using cv2.getRotationMatrix2D and cv2.warpAffine. Given a pair of pixel coordinates [x, y], where x = cols and y = rows (in this case), I want to find their new coordinates in the rotated images.

I used the following slightly modified code, courtesy of http://www.pyimagesearch.com/2017/01/02/rotate-images-correctly-with-opencv-and-python/, together with the explanation of affine transformations from http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/warp_affine/warp_affine.html, to try to map the points in the rotated image.

The problem is that either my mapping or my rotation is wrong, because the transformed coordinates come out wrong. (I computed the corner coordinates manually as a simple check.)

CODE:

import numpy as np
import cv2

def rotate_bound(image, angle):
    # grab the dimensions of the image and then determine the
    # center
    (h, w) = image.shape[:2]
    (cX, cY) = ((w-1) // 2.0, (h-1) // 2.0)

    # grab the rotation matrix (applying the negative of the
    # angle to rotate clockwise), then grab the sine and cosine
    # (i.e., the rotation components of the matrix)
    M = cv2.getRotationMatrix2D((cX, cY), -angle, 1.0)
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])

    # compute the new bounding dimensions of the image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))
    print(nW, nH)

    # adjust the rotation matrix to take into account translation
    M[0, 2] += ((nW-1) / 2.0) - cX
    M[1, 2] += ((nH-1) / 2.0) - cY

    # perform the actual rotation and return the matrix and the image
    return M, cv2.warpAffine(image, M, (nW, nH))

#function that calculates the updated locations of the coordinates
#after rotation
def rotated_coord(points, M):
    points = np.array(points)
    # append a 1 to each point so the 2x3 affine matrix can be
    # applied in homogeneous coordinates
    ones = np.ones(shape=(len(points), 1))
    points_ones = np.concatenate((points, ones), axis=1)
    # apply the transformation matrix to every point at once
    transformed_pts = M.dot(points_ones.T).T
    return transformed_pts

#READ IMAGE & CALL FCT
img = cv2.imread("Lenna.png")
points = np.array([[511,  511]])
#rotate by 90 degrees, for example
M, rotated = rotate_bound(img, 90)
#find out the new locations
transformed_pts = rotated_coord(points,M)

If I take, for example, the coordinates [511, 511], I obtain [-0.5, 511.5] ([col, row]) when I expect to obtain [0, 511].
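For reference, here is a minimal standalone trace of the matrix arithmetic (assuming the standard 512x512 Lenna.png) that reproduces exactly these numbers; note that (w-1) // 2.0 uses floor division, so the centre comes out as 255.0 rather than 255.5:

import numpy as np
import cv2

# assume a 512x512 image, as with the standard Lenna.png
w = h = 512
cX = cY = (w - 1) // 2.0                # floor division: 255.0, not 255.5
M = cv2.getRotationMatrix2D((cX, cY), -90, 1.0)
nW = nH = 512                           # bounding size for a 90-degree turn
M[0, 2] += ((nW - 1) / 2.0) - cX        # adds 0.5
M[1, 2] += ((nH - 1) / 2.0) - cY        # adds 0.5
print(M.dot([511, 511, 1]))             # ~[-0.5, 511.5], as reported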

If I instead use w // 2 and h // 2 for the centre, a black border is added to the image and my updated coordinates are off again.

Question: How can I find the correct location of a pair of pixel coordinates in an image rotated by a certain angle, using Python?

Answer 1:

For this case of image rotation, where the image size changes after rotation and the reference point moves as well, the transformation matrix has to be modified. The new width and height can be calculated using the following relations (the absolute values, matching the np.abs calls in the code, keep the dimensions positive for any angle):

$new\_width = h\,|\sin(\theta)| + w\,|\cos(\theta)|$

$new\_height = h\,|\cos(\theta)| + w\,|\sin(\theta)|$

Since the image size changes, because of the black border that you might see, the coordinates of the rotation point (the centre of the image) change too. This has to be taken into account in the transformation matrix.
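Concretely, cv2.getRotationMatrix2D((c_x, c_y), θ, 1.0) returns the standard 2×3 matrix below (with α = cos θ, β = sin θ), and the code then shifts its translation column so that the rotated content is centred on the new nW × nH canvas:

$$M = \begin{bmatrix} \alpha & \beta & (1-\alpha)\,c_x - \beta\,c_y + \frac{nW}{2} - c_x \\ -\beta & \alpha & \beta\,c_x + (1-\alpha)\,c_y + \frac{nH}{2} - c_y \end{bmatrix}$$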

I explain an example on my blog: image rotation bounding box opencv

def rotate_box(bb, cx, cy, h, w, theta):
    # opencv calculates the standard transformation matrix
    M = cv2.getRotationMatrix2D((cx, cy), theta, 1.0)
    # grab the rotation components of the matrix
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])
    # compute the new bounding dimensions of the image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))
    # adjust the rotation matrix to take into account translation
    M[0, 2] += (nW / 2) - cx
    M[1, 2] += (nH / 2) - cy
    new_bb = list(bb)
    for i, coord in enumerate(bb):
        # prepare the vector to be transformed
        v = [coord[0], coord[1], 1]
        # perform the actual transformation of the point
        calculated = np.dot(M, v)
        new_bb[i] = (calculated[0], calculated[1])
    # return only after every corner has been transformed
    return new_bb


## Calculate the new bounding box coordinates
new_bb = {}
for i in bb1:
    new_bb[i] = rotate_box(bb1[i], cx, cy, height, width, theta)
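For context, here is a minimal sketch of the inputs this loop assumes; the concrete values (the box corners, theta, the centre) are illustrative rather than part of the original answer:

img = cv2.imread("Lenna.png")
height, width = img.shape[:2]
cx, cy = width // 2, height // 2   # rotation centre: the image centre
theta = 45                         # rotation angle in degrees
# each entry of bb1 is a list of the four (x, y) corners of one box
bb1 = {0: [(100, 100), (400, 100), (400, 300), (100, 300)]}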


Answer 2:

The corresponding C++ version of @cristianpb's Python code above, in case someone is looking for C++ code like I was:

#include <opencv2/opencv.hpp>
#include <cmath>

// pass the angle in degrees, i.e. do not convert it to radians
cv::Point2f rotatePointUsingTransformationMat(const cv::Point2f& inPoint,
                                              const cv::Point2f& center,
                                              double rotAngle)
{
    cv::Mat rot = cv::getRotationMatrix2D(center, rotAngle, 1.0);
    // rotation components (absolute values, as in the Python version)
    double cos = std::abs(rot.at<double>(0, 0));
    double sin = std::abs(rot.at<double>(0, 1));

    // recover the original size from the centre, then compute the
    // dimensions of the rotated bounding canvas
    int newWidth  = int(((center.y * 2) * sin) + ((center.x * 2) * cos));
    int newHeight = int(((center.y * 2) * cos) + ((center.x * 2) * sin));

    // adjust the translation so the result fits the new canvas
    rot.at<double>(0, 2) += newWidth / 2.0 - center.x;
    rot.at<double>(1, 2) += newHeight / 2.0 - center.y;

    // multiply the 2x3 matrix with the homogeneous point [x, y, 1]
    double v[3] = { inPoint.x, inPoint.y, 1.0 };
    double out[2] = { 0.0, 0.0 };
    for (int i = 0; i < rot.rows; i++)
    {
        double sum = 0.0;
        for (int k = 0; k < 3; k++)
        {
            sum += rot.at<double>(i, k) * v[k];
        }
        out[i] = sum;
    }
    return cv::Point2f(static_cast<float>(out[0]),
                       static_cast<float>(out[1]));
}