This project was born out of experiments with depth mapping and skeleton tracking on the Kinect. It exploits a few peculiarities and quirks of the Kinect to create opportunities for augmenting a live human performance.
The quirk we explored is the Kinect's minimum range for its depth camera: skeleton tracking fails on any object closer than that range. We used this to create occlusion in the path of the user, so that part of their body could not be tracked; for those untracked regions, no RGB pixels were drawn on the screen. In effect, the covered part of the user became invisible on screen.
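The masking step can be sketched in Python with NumPy. This is an illustrative sketch, not the project's actual code: the function name and the 0.8 m cutoff are assumptions (roughly the Kinect v1 minimum range in default mode), and the depth frame is assumed to be already registered to the RGB frame.

```python
import numpy as np

# Assumed minimum reliable depth for the Kinect depth camera, in mm.
MIN_DEPTH_MM = 800

def mask_untracked_pixels(rgb, depth):
    """Black out RGB pixels whose depth reading is too close to track.

    Pixels nearer than the Kinect's minimum range return unreliable
    depth (often 0), so user tracking fails there. Hiding those pixels
    reproduces the 'invisibility' effect: rgb is (H, W, 3) uint8,
    depth is (H, W) uint16 in millimetres.
    """
    visible = depth >= MIN_DEPTH_MM  # True where depth (and tracking) works
    out = rgb.copy()
    out[~visible] = 0                # occluded parts vanish from the frame
    return out

# Tiny synthetic frame: the left half is "too close" to the camera.
rgb = np.full((2, 4, 3), 255, dtype=np.uint8)
depth = np.array([[100, 100, 1200, 1200],
                  [0,   100, 1500, 1500]], dtype=np.uint16)
result = mask_untracked_pixels(rgb, depth)
```

In the live setup the occluder (a hand or an object held near the camera) produces exactly this pattern of near-range depth readings, so the corresponding region of the performer simply disappears.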
A second person, holding a hand close to the Kinect, could control how much of the performer's body is visible. The performer could also approach the camera themselves and gradually disappear, or use their own hand to erase portions of their body.