The test data collector currently works only with OpenNI.
Once I have collected a first batch of data, I'll define an accuracy metric that compares the human-set points to those the algorithm finds.
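Such a metric is not defined yet; one plausible form (purely an illustrative sketch, assuming annotated and detected points are 2D pixel coordinates) is the mean Euclidean distance after matching each human-set point to its nearest detected point:

```python
import math

def point_error(annotated, detected):
    """Euclidean distance between an annotated and a detected point (pixels)."""
    return math.dist(annotated, detected)

def mean_error(annotated_points, detected_points):
    """Average distance after greedily matching each annotated point to its
    nearest detected point. Greedy matching is a simple stand-in for a
    proper assignment step (e.g. the Hungarian algorithm)."""
    remaining = list(detected_points)
    total = 0.0
    for a in annotated_points:
        nearest = min(remaining, key=lambda d: point_error(a, d))
        remaining.remove(nearest)
        total += point_error(a, nearest)
    return total / len(annotated_points)
```

A lower value would mean the algorithm's points lie closer to the human annotations.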
How to collect test data:
1. Click "Start Depth Source", this will start the depth data stream and show it on the screen.
2. Click "Capture Frame" to capture a single frame out of the depth stream (or click "Capture Frame Delayed", then the frame is captured after 3 seconds).
3. Click one of the frames you captured in the frames bar.
4. Add as many hands as are visible to the device.
5. Mark the center of the palm by clicking the button under the corresponding hand box and then on the image at the right location.
6. Mark the fingers by clicking the button and then the fingers on the image. Try to mark the point on the edge as shown in the image (but make sure it's still on the hand, look at the z position).
7. Save the frame by clicking "Save Test Frame...".
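Conceptually, a frame saved in step 7 holds the raw depth values plus the annotations from steps 5 and 6. A rough sketch of that structure (all names here are hypothetical; the collector's actual file format may differ):

```python
from dataclasses import dataclass, field

@dataclass
class HandAnnotation:
    # Pixel coordinates in the depth frame; the z position comes from
    # the depth value at that pixel.
    palm_center: tuple                                   # (x, y)
    finger_points: list = field(default_factory=list)    # [(x, y), ...]

@dataclass
class TestFrame:
    width: int
    height: int
    depth: list                                          # row-major depth values (mm)
    hands: list = field(default_factory=list)            # [HandAnnotation, ...]
```

One test frame can contain several hands, each with one palm center and several marked finger points.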
If you would like to contribute to the test data collection, please send your captured frames as a zip file to firstname.lastname@example.org.
One note I have to add:
Please note! By sending me data generated with the test data collector, you agree to transfer your copyright of that data to me. This allows me to publish it in a test data collection under any license. This is necessary so I can add it to CodePlex and/or create a commercial version at some point in the future (no plans yet).
The data contains no information other than the depth frame and the annotated hand data (no color image). No data about you or your computer is collected.
If you don't agree, please do not use the test data collector.