I have been using a Kinect recently to find the distance of some markers, but I'm stuck on converting the Kinect RGB and depth images, which are in pixels, to the real-world XYZ coordinates that I want in meters.
Please note that in Kinect SDK 1.8 (Kinect 1), it's not possible to convert from RGB image space to world space: only from depth image space to world space. The other conversions the SDK supports are depth image space to color image space, and skeleton (world) space to depth or color image space.
So, to convert, you use the coordinate mapper included in the SDK (I'm assuming you're using the Microsoft SDK and not OpenNI, AS3NUI, or EuphoriaNI). Here is a sample of how to convert from world space to RGB space, taken from here:
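The linked sample code did not survive in this copy. A minimal sketch of the same conversion, assuming SDK 2.0 and an initialized `KinectSensor` (names other than the `CoordinateMapper` API itself are illustrative):

```csharp
using Microsoft.Kinect;

// Assumes a connected Kinect 2 sensor; GetDefault() returns the singleton sensor.
KinectSensor sensor = KinectSensor.GetDefault();
CoordinateMapper mapper = sensor.CoordinateMapper;

// A world-space point, in meters, relative to the sensor
// (here: 2 m straight in front of it -- an arbitrary example value).
CameraSpacePoint worldPoint = new CameraSpacePoint
{
    X = 0.0f,
    Y = 0.0f,
    Z = 2.0f
};

// Resulting pixel coordinates in the 1920x1080 color image.
ColorSpacePoint colorPoint = mapper.MapCameraPointToColorSpace(worldPoint);
```

Note that `MapCameraPointToColorSpace` can return infinity for points the color camera cannot see, so check the result before using it as an array index.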
This sample is in C# for Kinect SDK 2.0. For an SDK 1.8 sample, as well as a short discussion of the coordinate mapper, see the article Understanding Kinect Coordinate Mapping.
To convert from RGB image space to world coordinate space (only with Kinect 2 and SDK 2.0), you can use this method:
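The method referenced above is `CoordinateMapper.MapColorFrameToCameraSpace`. A hedged sketch of how it is typically called, assuming `sensor` is an initialized `KinectSensor` and `depthFrame` was acquired from a `DepthFrameReader` (the variable names are illustrative):

```csharp
// Raw depth frame: the Kinect 2 depth camera is 512x424 pixels.
ushort[] depthData = new ushort[512 * 424];

// One camera-space (world) point per color pixel, at full 1920x1080 resolution.
CameraSpacePoint[] worldPoints = new CameraSpacePoint[1920 * 1080];

// Copy the raw depth values out of the acquired frame...
depthFrame.CopyFrameDataToArray(depthData);

// ...then map every color pixel to a 3D point, in meters.
sensor.CoordinateMapper.MapColorFrameToCameraSpace(depthData, worldPoints);
```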
You have to pass the entire depth frame (not the color frame!) and an array in which the method will return the world coordinates of every color-frame pixel. This array must, of course, be large enough to hold all points (1920 × 1080 = 2,073,600 entries at full color resolution). You then find the world coordinate of a given color pixel with a simple indexing formula:
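The formula itself was lost in this copy; the standard row-major lookup for a 1920-pixel-wide color image is:

```csharp
// World coordinate of the color pixel at (x, y), assuming worldPoints
// was filled by MapColorFrameToCameraSpace for a 1920x1080 color frame.
CameraSpacePoint p = worldPoints[y * 1920 + x];
// p.X, p.Y, p.Z are in meters; Z is the distance in front of the sensor.
```

Pixels for which the mapper has no valid depth data come back as infinity or NaN, so validate `p.Z` before using the point.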
Alternatively, in MATLAB you can use the depthToPointCloud function in the Computer Vision System Toolbox.