I have a set of 2D image keypoints output by the OpenCV FAST corner detector. Using an Asus Xtion I also have a time-synchronised depth map, and all camera calibration parameters are known. Using this information I would like to extract a set of 3D coordinates (a point cloud) in OpenCV.
Can anyone give me any pointers regarding how to do so? Thanks in advance!
Nicolas Burrus has created a great tutorial for depth sensors like the Kinect:
http://nicolas.burrus.name/index.php/Research/KinectCalibration
I'll copy & paste the most important parts:
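The part that matters here is the mapping from a depth pixel to a 3D point, which is the standard pinhole back-projection: a pixel (x_d, y_d) with metric depth depth(x_d, y_d) corresponds to

    P3D.x = (x_d - cx_d) * depth(x_d, y_d) / fx_d
    P3D.y = (y_d - cy_d) * depth(x_d, y_d) / fy_d
    P3D.z = depth(x_d, y_d)

where fx_d, fy_d are the depth camera's focal lengths and cx_d, cy_d its principal point (take these from your own calibration rather than the Kinect values on that page).

As a minimal sketch of how you could apply this to your FAST keypoints in OpenCV — assuming the depth map is CV_16UC1 in millimetres and already registered to the camera the keypoints come from; the function name and parameter names are placeholders, not part of any API:

    #include <opencv2/core/core.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <vector>

    // Back-project 2D keypoints into 3D camera coordinates using the depth map.
    std::vector<cv::Point3f> backprojectKeypoints(
            const std::vector<cv::KeyPoint>& keypoints,
            const cv::Mat& depth,                 // CV_16UC1, millimetres
            float fx_d, float fy_d, float cx_d, float cy_d)
    {
        std::vector<cv::Point3f> cloud;
        cloud.reserve(keypoints.size());

        for (const cv::KeyPoint& kp : keypoints)
        {
            int u = cvRound(kp.pt.x);
            int v = cvRound(kp.pt.y);
            if (u < 0 || v < 0 || u >= depth.cols || v >= depth.rows)
                continue;

            unsigned short d = depth.at<unsigned short>(v, u);
            if (d == 0)                           // 0 = no depth reading on the Xtion
                continue;

            float z = d * 0.001f;                 // millimetres -> metres
            cloud.push_back(cv::Point3f((u - cx_d) * z / fx_d,
                                        (v - cy_d) * z / fy_d,
                                        z));
        }
        return cloud;
    }

If your depth map is already in metres, drop the 0.001f scaling.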
If you are further interested in the stereo mapping between the depth and RGB cameras, the tutorial also covers that and lists example calibration values for the Kinect.
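That stereo mapping is just the rigid depth-to-RGB transform followed by a projection with the RGB intrinsics. A sketch of it, again with placeholder names and assuming R (3x3) and T (3x1) come from your own extrinsic calibration:

    #include <opencv2/core/core.hpp>

    // Project a 3D point from the depth camera's frame onto the RGB image,
    // e.g. to look up a colour for each point in the cloud.
    cv::Point2f projectToRgb(const cv::Point3f& p,
                             const cv::Matx33f& R, const cv::Vec3f& T,
                             float fx_rgb, float fy_rgb,
                             float cx_rgb, float cy_rgb)
    {
        cv::Vec3f q = R * cv::Vec3f(p.x, p.y, p.z) + T;   // into the RGB frame
        return cv::Point2f(q[0] * fx_rgb / q[2] + cx_rgb, // pinhole projection
                           q[1] * fy_rgb / q[2] + cy_rgb);
    }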