I have a Kinect and drivers for Windows and Mac OS X. Are there any examples of gesture recognition streamed from the Kinect using the OpenCV API? I'm trying to achieve something similar to the DaVinci prototype on Xbox Kinect, but on Windows and Mac OS X.
I did as per your algorithm but it does not work. What is wrong?

    Image dest = new Image(this.bitmap.Width, this.bitmap.Height);
    CvInvoke.cvThreshold(src, dest, 220, 300, Emgu.CV.CvEnum.THRESH.CV_THRESH_BINARY);
    Bitmap nem1 = new Bitmap(dest.Bitmap);
    this.bitmap = nem1;
    Graphics g = Graphics.FromImage(this.bitmap);
The demo from your link doesn't seem to use real gesture recognition. It just distinguishes between two different hand positions (open/closed), which is much easier, and tracks the hand position. Given the way he holds his hands in the demo (in front of the body, facing the Kinect when they are open), here is probably what he is doing. Since you didn't specify which language you are using, I'll use the C function names in OpenCV, but they should be similar in other languages. I'll also assume that you are able to get the depth map from the Kinect (probably via a callback function if you use libfreenect).
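For reference, here is a minimal sketch of getting that depth map into a form the OpenCV calls below can work with. It assumes libfreenect's default depth format (a 640x480 buffer of 11-bit values stored in 16-bit words, handed to your depth callback) and uses the C++ API; the helper name depthToGray is mine.

    #include <opencv2/opencv.hpp>
    #include <cstdint>

    // Wrap the raw buffer from the libfreenect depth callback in a cv::Mat
    // header (no copy) and scale the 11-bit values down to 8 bits so the
    // usual OpenCV calls apply. Larger values mean farther from the camera.
    cv::Mat depthToGray(uint16_t* buf)
    {
        cv::Mat raw(480, 640, CV_16UC1, buf);
        cv::Mat gray;
        raw.convertTo(gray, CV_8UC1, 255.0 / 2047.0);  // 2047 = max 11-bit value
        return gray;
    }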
Threshold on the depth to select only the points close enough (the hands). You can do that either yourself or directly with OpenCV, to get a binary image (cvThreshold() with CV_THRESH_BINARY). Display the image you obtain after thresholding and adjust the threshold value to fit your setup (try to avoid getting too close to the Kinect, since there is more interference in that range).
Get the contours of the hands with cvFindContours() (a sketch of both steps follows).
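A minimal sketch of those two steps, using the C++ equivalents (cv::threshold(), cv::findContours()) of the C functions named above. Two assumptions: the cutoff value of 100 is a placeholder you must tune, and since raw Kinect depth grows with distance, I use THRESH_BINARY_INV to keep the near points; CV_THRESH_BINARY, as mentioned above, is what you want if your map is inverted.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Steps 1-2: keep only the points close to the camera (the hands),
    // then extract the contours of the resulting blobs. `depth` is the
    // 8-bit map from the previous sketch, where smaller means closer.
    std::vector<std::vector<cv::Point>> handContours(const cv::Mat& depth)
    {
        cv::Mat mask;
        cv::threshold(depth, mask, 100, 255, cv::THRESH_BINARY_INV);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);
        return contours;
    }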
This is the basis. Now that you have the hand contours, you can take different directions depending on what you want to do. If you just want to detect whether a hand is open or closed, you can probably do the following (a sketch follows the list):
Get the convex hull of the hands using cvConvexHull2()
Get the convexity defects using cvConvexityDefects() on the contours and the convex hull you got before.
Analyze the convexity defects: if there are big defects, the hand is open (because the shape is concave between the fingers); if not, the hand is closed.
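Here is a minimal sketch of that open/closed test, again with the C++ API (cv::convexHull(), cv::convexityDefects()). The defect-depth cutoff of 10000 is a placeholder: cv::convexityDefects() reports depths as fixed-point values multiplied by 256, so it corresponds to roughly 39 pixels.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Classify one hand contour (from the previous sketch) as open or
    // closed by looking for deep concavities between spread fingers.
    bool isHandOpen(const std::vector<cv::Point>& contour)
    {
        // Hull as point *indices*, which cv::convexityDefects() requires.
        std::vector<int> hull;
        cv::convexHull(contour, hull, false, false);

        std::vector<cv::Vec4i> defects;
        cv::convexityDefects(contour, hull, defects);

        for (const cv::Vec4i& d : defects)
            if (d[3] > 10000)   // d[3] = defect depth * 256; tune this
                return true;    // at least one deep gap: fingers are spread
        return false;           // no deep concavity: the hand is closed
    }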
But you could also do finger detection! That's what I did last week; it doesn't require much more effort and would probably boost your demo. A cheap but pretty reliable way to do it is (see the sketch after these steps):
Approximate the hand contours with a polygon. Use cvApproxPoly() on the contour. You'll have to adjust the accuracy parameter to get a polygon that is as simple as possible but doesn't blend the fingers together (around 15 should be quite good, but draw it on your image using cvDrawContours() to check what you obtain).
Analyze the polygon to find sharp convex angles. You'll have to do that by hand. This is the trickiest part, because:
- the polygon is a closed loop of points, so for each vertex you have to compute the angle between the two adjacent edges, wrapping around at the ends;
- a sharp angle can bend outwards (a fingertip) or inwards (the valley between two fingers), so you also need the sign of the cross product of the adjacent edges to keep only the convex ones.
Here you are, the sharp convex angles are your fingertips!
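A minimal sketch of those two steps with the C++ API (cv::approxPolyDP() instead of cvApproxPoly()). The accuracy value of 15 and the 50-degree cutoff are placeholders to tune, and the sign convention for "convex" depends on the contour orientation, so flip that test if you get the valleys instead of the tips.

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    // Find fingertip candidates: simplify the contour to a polygon, then
    // keep the vertices whose adjacent edges meet at a sharp, outward angle.
    std::vector<cv::Point> findFingertips(const std::vector<cv::Point>& contour)
    {
        std::vector<cv::Point> poly;
        cv::approxPolyDP(contour, poly, 15.0, true);  // accuracy ~15, tune it

        std::vector<cv::Point> tips;
        const int n = static_cast<int>(poly.size());
        for (int i = 0; i < n; ++i) {
            const cv::Point cur = poly[i];
            const cv::Point a = poly[(i + n - 1) % n] - cur;  // edge to prev
            const cv::Point b = poly[(i + 1) % n] - cur;      // edge to next

            const double cosine = (a.x * b.x + a.y * b.y) /
                (std::hypot(a.x, a.y) * std::hypot(b.x, b.y));
            const double cross = a.x * b.y - a.y * b.x;  // turn direction

            // Sharp angle (< ~50 degrees) bending outwards = fingertip.
            // Flip `cross > 0` if your contour orientation is reversed.
            if (cosine > std::cos(50.0 * CV_PI / 180.0) && cross > 0)
                tips.push_back(cur);
        }
        return tips;
    }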
This is a simple algorithm to detect the fingers, but there are many ways to improve it. For instance, you can apply a median filter to the depth map to "smooth" everything a bit, or use a more accurate polygon approximation and then filter the contour to merge the points that are too close together at the fingertips, etc.
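For the median-filter idea specifically, the call is a one-liner on the 8-bit depth map; the kernel size of 5 is a guess to tune.

    #include <opencv2/opencv.hpp>

    // Smooth speckle noise in the depth map before thresholding. Larger
    // kernels smooth more but can erode thin structures like fingers.
    void smoothDepth(cv::Mat& depth)
    {
        cv::medianBlur(depth, depth, 5);
    }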
Good luck and have fun!
I think it won't be this simple, mainly because the depth data from the Kinect is not that precise. Beyond a distance of about 1 m to 1.5 m, all the fingers merge together, so you won't get contours clear enough to detect individual fingers.