I am doing image stitching in OpenCV, where I take pictures of a planar scene from different locations and try to compose a panorama. I have modified the stitching example to fit my needs. The problem with the OpenCV stitching pipeline is that it assumes a pure rotation of the camera, which is not the case for me. When the pictures are taken perfectly orthogonal to the scene (no camera rotation, just translation), the result is quite good, but when there is both camera rotation and translation, the results are not satisfying.
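For reference, the way I drive the pipeline looks roughly like this. This is only a minimal Python sketch: my actual code is a modified copy of the stitching sample, and the file names are placeholders.

```python
import cv2

# Placeholder input images of the planar scene, taken from different positions.
images = [cv2.imread(p) for p in ("img0.jpg", "img1.jpg", "img2.jpg")]

# Default panorama mode, which models the cameras as purely rotating.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("pano.jpg", pano)
else:
    print("stitching failed, status =", status)
```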
I am able to compute the homographies between the camera positions, which is possible because the scene is planar, but I don't really know what the next step is. My idea is to warp the images using the homographies so that the camera effectively faces the plane orthogonally, and then apply the stitching. The problem with this is that I do not know the true locations of the feature points. How can I go about doing this? Is there anything else I could try to get better stitching results for a planar scene with arbitrary camera movement?
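To make the question concrete, this is roughly how I estimate the pairwise homographies and warp an image into the frame of a reference image. Again only a minimal sketch: ORB features, the RANSAC threshold, and the file names are stand-ins for my actual setup, not something I am tied to.

```python
import cv2
import numpy as np

def homography_to_ref(ref_gray, img_gray):
    """Estimate H mapping img_gray into ref_gray via feature matches + RANSAC."""
    orb = cv2.ORB_create(4000)
    kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
    kp_img, des_img = orb.detectAndCompute(img_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_img, des_ref), key=lambda m: m.distance)

    # Point correspondences: image points (source) -> reference points (destination).
    src = np.float32([kp_img[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

ref = cv2.imread("img0.jpg")
img = cv2.imread("img1.jpg")
H = homography_to_ref(cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY),
                      cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# Warp the second image into the reference frame; in reality the output extent
# would be computed from the warped corners rather than hard-coded.
warped = cv2.warpPerspective(img, H, (ref.shape[1] * 2, ref.shape[0]))
```

This gives me a valid plane-induced homography between any two views, so the geometry itself seems fine; it is the rectification to a fronto-parallel view and the subsequent compositing that I am unsure about.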