We get timestamps as double values for pose, picture, and point data, and they aren't always aligned. How do I calculate the temporal distance between two timestamps? Yes, I know how to subtract two doubles, but I'm not at all sure how the delta corresponds to time.
Exactly how do we compute timestamp differentials?
I have some interesting timestamp data that sheds light on your question without exactly answering it. I have been trying to match up depth frames with image frames, just like a lot of people posting under this Tango tag. My data did not match exactly, and I thought there were problems with my projection matrices and point reprojection. Then I checked the timestamps on my depth frames and image frames and found that they were off by as much as 130 milliseconds. That is a lot, even though I was grabbing the most recent image whenever a depth frame became available. So I went back to test just the timestamp data.
I am working in native code based on the point-cloud-jni-example. For each of onXYZijAvailable(), onFrameAvailable(), and onPoseAvailable() I am dumping out timing information. In the XYZ and frame cases I copy the returned data to a static buffer for later use. For this test I ignore the buffered image frame, and the XYZ depth data is displayed in the normal OpenGL display loop of the example code. The captured data is summarized below.
The system time comes from std::chrono::system_clock::now(), run inside each callback and offset by a start time recorded at app launch. The timestamp is the actual timestamp field from the XYZij, image, or pose struct. For depth and image I also list the most recent pose timestamp (start-of-service to device, requested with a time of 0.0, which returns the latest pose). A quick analysis of about 2 minutes of sample data leads to the following initial conclusions:
That is the actual timestamp data in the returned structs. The "real" elapsed time between callbacks is much more variable. The pose callback fires anywhere from 0.010 to 0.079 seconds apart, even though the pose timestamps are rock solid at 0.033-second increments. The image (frame) callback fires 4 times at intervals between 0.025 and 0.040 seconds and then gives one long pause of around 0.065 seconds; that is where two images with the same timestamp are returned in successive calls. It appears that the camera is skipping a frame.
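For reference, here is a minimal sketch of the kind of per-callback logging described above. It is my own illustration, not the exact code I ran; it assumes the Tango C API callback signatures and struct fields from tango_client_api.h as used by the point-cloud-jni-example.

```cpp
#include <chrono>
#include <android/log.h>
#include <tango_client_api.h>  // TangoXYZij, TangoImageBuffer, TangoPoseData

#define LOGI(...) __android_log_print(ANDROID_LOG_INFO, "timing", __VA_ARGS__)

// Wall-clock seconds, offset by a start time. Here the start is captured
// lazily on first use for brevity; in the real test it was recorded at
// app launch.
static double NowSeconds() {
  using namespace std::chrono;
  static const auto start = system_clock::now();
  return duration<double>(system_clock::now() - start).count();
}

// Depth callback: log wall-clock time vs. the struct's own timestamp.
static void onXYZijAvailable(void* /*context*/, const TangoXYZij* xyz_ij) {
  LOGI("depth  sys=%.6f ts=%.6f", NowSeconds(), xyz_ij->timestamp);
}

// Color image callback.
static void onFrameAvailable(void* /*context*/, TangoCameraId /*id*/,
                             const TangoImageBuffer* buffer) {
  LOGI("image  sys=%.6f ts=%.6f", NowSeconds(), buffer->timestamp);
}

// Pose callback.
static void onPoseAvailable(void* /*context*/, const TangoPoseData* pose) {
  LOGI("pose   sys=%.6f ts=%.6f", NowSeconds(), pose->timestamp);
}
```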
So, to match depth, image, and pose you really need to buffer multiple returns with their corresponding timestamps (a ring buffer, perhaps) and then match them up by whichever stream you treat as the master. Pose timestamps are the most stable.
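As an illustration of that buffering idea, here is a sketch of my own (the names TimestampedRing and Payload are hypothetical; the payload would be whatever you copy out of each callback, and it must be default-constructible and copyable):

```cpp
#include <array>
#include <cmath>
#include <cstddef>

// Fixed-size ring buffer of timestamped entries.
template <typename Payload, std::size_t N = 16>
class TimestampedRing {
 public:
  void Push(double timestamp, const Payload& value) {
    entries_[head_] = {timestamp, value};
    head_ = (head_ + 1) % N;
    if (size_ < N) ++size_;
  }

  // Return the stored entry whose timestamp is closest to `query`.
  // Tango timestamps are doubles in seconds, so the difference of two
  // timestamps is directly a time interval in seconds.
  const Payload* Closest(double query, double* delta_seconds = nullptr) const {
    const Payload* best = nullptr;
    double best_delta = 0.0;
    for (std::size_t i = 0; i < size_; ++i) {
      const double d = std::fabs(entries_[i].timestamp - query);
      if (best == nullptr || d < best_delta) {
        best = &entries_[i].value;
        best_delta = d;
      }
    }
    if (best != nullptr && delta_seconds != nullptr) *delta_seconds = best_delta;
    return best;
  }

 private:
  struct Entry { double timestamp; Payload value; };
  std::array<Entry, N> entries_{};
  std::size_t head_ = 0;
  std::size_t size_ = 0;
};
```

Push each image and pose into its own buffer from its callback; when a depth frame arrives, look up Closest(xyz_ij->timestamp) in each buffer and reject matches whose delta exceeds some tolerance (half a frame period, say). As far as I can tell the timestamps are expressed in seconds, so this also answers the original question: subtracting two Tango timestamps gives the interval in seconds directly.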
Note: I have not tried to get a pose for a particular "in between" time to see if the returned pose is interpolated between the values given by onPoseAvailable().
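If someone does want to try that experiment, the C API call is TangoService_getPoseAtTime(), which accepts an arbitrary timestamp (and returns the most recent pose if you pass 0.0). A rough, untested sketch:

```cpp
#include <tango_client_api.h>

// Query the pose at an arbitrary timestamp (e.g. a depth frame's timestamp)
// instead of relying on the latest onPoseAvailable() value. Whether the
// service interpolates between pose samples is exactly the open question
// above; this just shows how one would ask.
bool GetPoseAt(double timestamp, TangoPoseData* pose_out) {
  TangoCoordinateFramePair frames;
  frames.base = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
  frames.target = TANGO_COORDINATE_FRAME_DEVICE;
  if (TangoService_getPoseAtTime(timestamp, frames, pose_out) != TANGO_SUCCESS) {
    return false;
  }
  return pose_out->status_code == TANGO_POSE_VALID;
}
```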
I have the logcat file and various awk extracts available; I am not sure how to post them here (thousands of lines).
I think the fundamental question is how to sync the pose, depth, and color image data together into a single frame. To answer that, there are actually two steps