The Kinect has been hugely popular as a gaming interface. It scans, with great accuracy, the movements of a player’s body and translates them into character movements in the game. It’s even been used in telemedicine applications to scan the body movements of people doing physical therapy at home.
With this invention, two Kinects scan the dimensions of an object, then send the dimensional data to another device that recreates the shape in the real world. To do this, the receiving device uses an array of air blowers that direct streams of air, at varying strengths, against a piece of malleable material (such as stretch fabric or elastomer), pushing it into the scanned shape.
If the Kinects were scanning a face, the scan data might instruct the blower directly behind the nose to blow the hardest, since the tip of the nose sits closest to the Kinects' sensors relative to other facial features. The blowers behind the ears, which sit farther back, would blow with proportionally less force.
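The depth-to-airflow mapping described above can be sketched in a few lines. This is a hypothetical illustration, not the invention's actual method: it assumes each blower sits behind one cell of a coarse depth grid, and the function name, grid layout, and simple linear mapping are all invented for the example.

```python
# Hypothetical sketch: turning Kinect-style depth readings into blower
# strengths. Assumes one blower per depth-grid cell; the linear mapping
# below is illustrative, not taken from the patent.

def depth_to_blower_strength(depth_grid, max_strength=1.0):
    """Convert per-cell depths (meters from the sensor) into blower
    strengths in [0, max_strength]. The closest point (e.g. the tip
    of a nose) gets the strongest blast; the farthest gets none."""
    flat = [d for row in depth_grid for d in row]
    near, far = min(flat), max(flat)
    span = (far - near) or 1.0  # avoid divide-by-zero on a flat scan
    return [
        [max_strength * (far - d) / span for d in row]
        for row in depth_grid
    ]

# A tiny 2x2 "face" patch: nose at 0.8 m, cheeks at 0.9 m, ear at 1.0 m.
strengths = depth_to_blower_strength([[0.8, 0.9], [0.9, 1.0]])
# nose cell -> full strength, ear cell -> zero, cheeks in between
```

In practice the real system would also have to account for the fabric's elasticity and for air from neighboring jets interacting, but the core idea is this inverse relationship between distance and blast strength.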
There’s no telling what the technology will be used for. It might be possible to send the Kinect scan data in real time across a network to a rendering device somewhere far away. In a world where people increasingly talk to each other’s image on a screen, this might allow for a form of communication that feels more like talking “face-to-face.”
Research provided by legal technology firm SmartUp Legal.