The current version of my hand-tracking program lets me grab an image with the right hand and move it around. Grabbing with the left hand while the right hand rests on the image triggers the "rotate and scale" mode.
The change in distance between the two cluster centers (the hand points) scales the image, while the change in angle between them rotates it accordingly.
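The scale and rotation update can be derived from just the two hand points in consecutive frames. Here is a minimal sketch of that computation; the function name and the (x, y) tuple representation are illustrative, not taken from my program:

```python
import math

def scale_and_rotation(prev_left, prev_right, left, right):
    """Derive a scale factor and rotation delta (in radians) from the
    movement of two tracked hand points between two frames.

    Each argument is an (x, y) tuple (hypothetical representation).
    """
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    # Ratio of current to previous hand distance drives the scaling.
    scale = dist(left, right) / dist(prev_left, prev_right)
    # Difference of the line angles between the points drives the rotation.
    rotation = angle(left, right) - angle(prev_left, prev_right)
    return scale, rotation

# Hands move apart while the line between them turns 45 degrees:
s, r = scale_and_rotation((0, 0), (100, 0), (0, 0), (100, 100))
# s is sqrt(2), r is pi/4
```

Applying `scale` and `rotation` to the image transform each frame accumulates the gesture smoothly, since each delta is relative to the previous frame.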
Currently, all objects closer than a certain depth threshold are tracked. The next step would be to improve the hand recognition so that hands can also be tracked outside this virtual plane, and to refine the algorithm so the fingers can be tracked independently.
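The "virtual plane" idea boils down to thresholding the depth image and taking the centroid of the remaining pixels as a hand point. A toy sketch with a made-up 5x5 depth frame (the array values, threshold, and variable names are all illustrative, not my actual sensor data):

```python
import numpy as np

# Hypothetical 5x5 depth frame in millimetres (0 = no reading),
# standing in for a real Kinect depth image.
depth = np.array([
    [0, 1200, 1180, 1250, 0],
    [900,  850,  860, 1300, 0],
    [880,  840,  830, 1290, 0],
    [0, 1210, 1220, 1260, 0],
    [0,    0,    0,    0,  0],
])

THRESHOLD_MM = 1000  # anything nearer than this is "inside the virtual plane"

# Mask of pixels considered part of a hand; zeros are invalid readings.
mask = (depth > 0) & (depth < THRESHOLD_MM)

# Centroid of the masked pixels serves as a crude hand point.
ys, xs = np.nonzero(mask)
center = (xs.mean(), ys.mean())
```

In a real pipeline the masked pixels would additionally be clustered, so that two hands produce two separate centers rather than one averaged point between them.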
Microsoft has announced that it will release an SDK for the Kinect sometime this spring (read more). Unfortunately, not many details are known yet, but I guess it will 'only' provide access to the Kinect sensor data and not contain any ready-to-use hand / full-body tracking.