Since then I was able to refine it considerably. The key was to use the hand's contour and then combine this information with the points in the convex hull. Here is a video that showcases the current algorithm:

1. For each point in the hull (a candidate for a fingertip), find the nearest point in the contour curve. Let's call this set C.

2. For each point c in C, take the two points (p1, p2) that lie in the two different directions along the contour at a given distance from c (the ideal distance has to be found experimentally and depends on the size of the hand shape).

3. If these three points are aligned, then c is not a fingertip point. To find out whether they are aligned, take the midpoint of p1 and p2 and calculate its distance to c. If this distance is bigger than a certain value (to be found experimentally), the points are not on a line and the candidate point c is a fingertip point.
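The three steps above can be sketched roughly like this. This is a Python illustration of the description, not the actual C# code; the contour is assumed to be an ordered list of (x, y) points, and `step` and `min_offset` stand in for the two experimentally tuned distances:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def midpoint(p1, p2):
    return ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)

def find_fingertips(hull, contour, step=20, min_offset=5.0):
    """hull: candidate points; contour: ordered list of contour points.
    step: how far along the contour (in indices) p1/p2 lie from c.
    min_offset: how far c must be from the p1-p2 midpoint to count as a tip.
    Both values have to be tuned experimentally."""
    fingertips = []
    n = len(contour)
    for h in hull:
        # Step 1: nearest contour point to the hull point -> candidate c.
        i = min(range(n), key=lambda k: dist(contour[k], h))
        c = contour[i]
        # Step 2: points at a given distance along the contour, both directions.
        p1 = contour[(i - step) % n]
        p2 = contour[(i + step) % n]
        # Step 3: if c lies far from the midpoint of p1 and p2, the three
        # points are not aligned, so c is a fingertip.
        if dist(c, midpoint(p1, p2)) > min_offset:
            fingertips.append(c)
    return fingertips
```

On a straight stretch of contour c stays close to the p1-p2 midpoint, while at a fingertip the contour bends and pushes c away from it.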

Illustration:

c is a fingertip, while c' is not.

This version also works when the fingers are pointing down:

Next I will try to find skin areas with the RGB camera and then use these areas to find hand shapes. The current code base also needs some heavy refactoring.

Sorry, I have some questions about this algorithm.

In step one, does "the nearest point in the contour curve" mean there are just two points in this set? If not, how do you segment the hull?

Like set C and set C'...

Thanks! Berry

The convex hull contains only a few points. There is a step that combines two points in the hull if they are too close to each other, again reducing the number of possible finger point locations. The points are called C.
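The merging step mentioned here could look roughly like this. This is a hypothetical sketch, not the actual code; `merge_distance` is an assumed threshold that would have to be tuned:

```python
import math

def merge_close_points(points, merge_distance=10.0):
    """Greedily merge points that lie within merge_distance of each other,
    replacing each close pair with its average position."""
    merged = []
    for p in points:
        for i, q in enumerate(merged):
            if math.hypot(p[0] - q[0], p[1] - q[1]) < merge_distance:
                # Replace the kept point with the average of the two.
                merged[i] = ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
                break
        else:
            merged.append(p)
    return merged
```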

Then for each point in this set C (my guess is around 10-15 points) you take the point in the contour (lots of points) that is nearest to it. And then from this point there are two points on the contour, p1 and p2, at a given distance in the two directions.

I see, thanks!

But when you use the yellow line to describe the hand contour, you have to configure an area as the detection region, right?

And according to your algorithm, is it necessary to detect the center of the palm?

Thanks!

Berry

No problem.

Yes, the area is found by first filtering a depth segment (500-800mm is the default) and then performing k-means clustering on these points. This way the two hands are separated. Then all points in the cluster areas are used to find the hull, contour, etc.
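That pipeline could be sketched as follows. The 500-800 mm window and k = 2 follow the description above; the point format, initialisation, and iteration count are assumptions for this illustration:

```python
def depth_filter(points, min_mm=500, max_mm=800):
    """Keep only the (x, y) positions of points whose depth z lies in the
    given segment (millimetres); 500-800 mm is the default mentioned above."""
    return [(x, y) for (x, y, z) in points if min_mm <= z <= max_mm]

def kmeans_two_hands(points, iterations=20):
    """Plain k-means with k = 2 to separate the points of the two hands."""
    centers = [points[0], points[-1]]  # assumed simple initialisation
    for _ in range(iterations):
        clusters = ([], [])
        # Assign each point to its nearest center.
        for x, y in points:
            d0 = (x - centers[0][0]) ** 2 + (y - centers[0][1]) ** 2
            d1 = (x - centers[1][0]) ** 2 + (y - centers[1][1]) ** 2
            clusters[0 if d0 <= d1 else 1].append((x, y))
        # Move each center to the mean of its cluster.
        centers = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return clusters
```

Each returned cluster would then be handed to the hull/contour steps separately.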

Hello, I have a question:

Are you using C#?

If so, did you use Emgu CV?

And lastly, what kind of data type did you use: PointF or Point?

Hi Anonymous

Yes, I'm using C#, and no, I didn't use any third-party CV library, just the Kinect SDK and OpenNI. I use a Point class I wrote myself.

- Stefan

hi stefan,

Currently my way of finding the fingers is based on finding the maximum and minimum points of a function, but I do it based on contour points. So far I had not taken the vertices of the convex hull into account.

You propose:

take each point c (a vertex) of the convex hull and look among the contour points for a pair (p1, p2) that is not aligned with c.

Is that correct?

Thank you.

Hi Anonymous

For each point of the convex hull (I'm first merging some of them when they are close together), I take the points p1 and p2 at a given distance from it and check whether the three points form a line. If not, it's a finger point.

- Stefan

Hi stefan

For each point of the convex hull, must I go through the list of contour points and determine which pair of points (p1 and p2) lies at a distance x and does not form a line with it?

Exactly

hi stefan,

I have a drawback: a false point below the hand is always recognized and I could not remove it. Do you have any recommendation for this problem?

What algorithm do you use to find the contour point closest to a point of the convex hull?

@Anonymous1: It helps to roll up your sleeves.

@Anonymous2: It simply iterates over each point.

hi stefan

Is there a way I can skip rolling up the sleeves?

thanks for the help

hi stefan,

I am preparing a paper on the recognition of the hand and fingers based on depth data obtained from the Kinect sensor. In one phase of the article I have used the explanations you have given here to determine the center of the palm. For that reason it is important for me to acknowledge your work and contribution in this article, which is important for its approval.

Hi Anonymous

What exactly is your question?

You can also contact me at info@candescent.ch

- Stefan