I would like to align a (synchronous) depth/color frame pair, using the Google Tango tablet, such that, assuming that both frames have the same resolution, each pixel in the depth frame corresponds to the same pixel in the color frame, i.e., I would like to achieve a retinotopic mapping. How can this be achieved using the latest C API (Hilbert Release Version 1.6)? Any help on this will be greatly appreciated.
I have not tried this, but we can probably do the following for each point (X,Y,Z) from the point cloud:
1. Project onto the normalized image plane: x = X/Z, y = Y/Z.
2. Apply distortion correction: with r^2 = x^2 + y^2, compute x_corrected = x * (1 + k1*r^2 + k2*r^4 + k3*r^6), and likewise y_corrected. The coefficients k1, k2, k3 come from the distortion[] part of TangoCameraIntrinsics.
3. Convert the normalized x_corrected, y_corrected to raster coordinates by inverting the usual pinhole formula: x_raster = x_corrected*fx + cx, y_raster = y_corrected*fy + cy. A sketch in C follows.
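Here is a minimal sketch of those three steps against the Hilbert-era C API. Two assumptions to flag: the point is taken to be already expressed in the color camera frame (the depth-to-color extrinsic transform is ignored for brevity), and the color camera is taken to report the polynomial 3-parameter distortion model (worth checking intrinsics->calibration_type); project_point is just a hypothetical helper name.

```c
#include <tango_client_api.h>

/* Project one point-cloud point (assumed to be in the color camera
 * frame) to color-image raster coordinates.  'intrinsics' is filled by
 * TangoService_getCameraIntrinsics(TANGO_CAMERA_COLOR, &intrinsics).
 * Returns 0 on success, -1 if the point does not land in the image. */
static int project_point(const TangoCameraIntrinsics* intrinsics,
                         double X, double Y, double Z,
                         double* x_raster, double* y_raster) {
  if (Z <= 0.0) return -1;                 /* point is behind the camera */
  /* 1. Project onto the normalized image plane. */
  double x = X / Z;
  double y = Y / Z;
  /* 2. Polynomial distortion: k1, k2, k3 from distortion[]. */
  double k1 = intrinsics->distortion[0];
  double k2 = intrinsics->distortion[1];
  double k3 = intrinsics->distortion[2];
  double r2 = x * x + y * y;
  double d = 1.0 + r2 * (k1 + r2 * (k2 + r2 * k3));
  double xc = x * d;
  double yc = y * d;
  /* 3. Normalized coordinates -> raster coordinates (pinhole model). */
  *x_raster = xc * intrinsics->fx + intrinsics->cx;
  *y_raster = yc * intrinsics->fy + intrinsics->cy;
  if (*x_raster < 0.0 || *x_raster >= intrinsics->width ||
      *y_raster < 0.0 || *y_raster >= intrinsics->height)
    return -1;                             /* outside the color image */
  return 0;
}
```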
One of your conditions is not possible: there is no guarantee that Tango will hand you a point-cloud measurement for everything in the visual field, since it returns no point for surfaces it has trouble seeing. Also, there isn't a 1:1 correspondence between color pixels and the depth frame, because the depth information is a 3D point cloud rather than an image.
Generating simple, crude UV coordinates to map Tango point cloud points back onto the source image (texture coordinates) - see the comments above for more details; we've messed this thread up but good :-( (The language is C#, the classes are .NET.) The field-of-view helper computes the horizontal FOV when passed true and the vertical FOV when passed false; a sketch of the same idea follows.
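The original C# snippet did not survive this thread, so here is a rough re-sketch of the crude FOV-based UV idea in C. The fov_h and fov_v parameters are assumptions on my part, e.g. derived from the color camera intrinsics as 2*atan(width/(2*fx)) and 2*atan(height/(2*fy)); point_to_uv is a hypothetical helper name.

```c
#include <math.h>

/* Map a point (X,Y,Z) in the color camera frame to texture coordinates
 * in [0,1], using only the horizontal/vertical field of view (radians).
 * Points outside the camera frustum land outside the [0,1] range. */
static void point_to_uv(double X, double Y, double Z,
                        double fov_h, double fov_v,
                        double* u, double* v) {
  /* Half-extent of the visible image plane at depth Z on each axis. */
  double half_w = Z * tan(fov_h * 0.5);
  double half_h = Z * tan(fov_v * 0.5);
  /* Map [-half, +half] onto [0, 1].  The sign on the Y term depends on
   * the camera's axis convention (camera frames often have +Y down). */
  *u = 0.5 + 0.5 * (X / half_w);
  *v = 0.5 + 0.5 * (Y / half_h);
}
```

This is cruder than the intrinsics-based projection above, since it ignores lens distortion and any offset of the principal point, but it is often good enough for texture mapping.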
Mark, thanks for your quick response. Probably my question was a bit imprecise. You are of course damn right in saying that a retinotopic mapping between a 2D image and a 3D point cloud cannot be established. Shame on me. Nonetheless, what I need is a mapping in which every depth sample (x_n, y_n, d_n), 1 <= n <= N, where N is the number of depth values, corresponds to the pixel (x_n, y_n) in the (synchronized) color frame. It is well taken that the depth sensor cannot provide depth information for troublesome areas in the visual field.