A group of MIT researchers claims to have made a significant leap in gesture-controlled computing with a new kind of LCD screen configuration they describe as a “lensless camera.” Leveraging recent advances in LCD technology, the team’s so-called BiDirectional, or BiDi, Screen can both capture and display images on a very thin panel, creating the possibility for highly sensitive gesture control in devices as small as smartphones.
They debuted a working prototype on Saturday in Yokohama at the Asian conference of the Special Interest Group on Computer Graphics and Interactive Techniques (Siggraph).
The BiDi Screen shifts back and forth between capture and display modes at a speed that renders the switching imperceptible to users. The distinctive feature of the design is an array of optical sensors placed just behind the conventional LCD display surface. In capture mode, the LCD surface displays a pattern of black and white rectangles that works much like an array of pinholes, letting light pass through to the sensors. Because each opening sees the scene from a slightly different angle, data from the multiple views can be correlated to judge the depth, distance, and motion of objects in front of the display.
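To get a feel for how multiple views yield depth, here is a minimal sketch of the underlying geometry. This is not the team’s actual algorithm, just the standard similar-triangles relationship between two pinhole views; the function name and the sample measurements are illustrative assumptions.

```python
# Illustrative only: two openings in the mask, spaced a known distance
# apart, each cast a slightly shifted image of an object onto the sensor
# layer behind them. The shift (disparity) between the two images shrinks
# as the object moves farther away, so depth can be recovered by
# triangulation.

def depth_from_disparity(baseline_mm, sensor_gap_mm, disparity_mm):
    """Estimate how far an object sits in front of the mask.

    baseline_mm   -- spacing between the two mask openings
    sensor_gap_mm -- distance from the mask to the sensor layer
    disparity_mm  -- shift of the object's image between the two views
    """
    if disparity_mm <= 0:
        raise ValueError("a visible object must produce positive disparity")
    # Similar triangles: depth / baseline = sensor_gap / disparity
    return baseline_mm * sensor_gap_mm / disparity_mm

# Example (made-up numbers): openings 10 mm apart, sensors 2.5 mm behind
# the mask, and an image shift of 0.125 mm between the two views put the
# object 200 mm from the screen.
print(depth_from_disparity(10.0, 2.5, 0.125))  # → 200.0
```

A real device correlates many such views at once and must also cope with blur and noise, but the same geometric principle is what lets a flat panel judge distance without a lens.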
Popularized by the Nintendo Wii, gesture control allows users to manipulate onscreen objects by means of body movement. But the Wii requires a peripheral device in order to function. In recent months, companies such as Softkinetic and Canesta have announced advances in the development of gesture control without the need for a handset or peripheral. Softkinetic, a Belgian software company, recently launched the body-controlled game Silhouette. Canesta, based in Sunnyvale, Calif., signed a deal with Quanta Computer in October to provide motion-sensing chips for a new line of gesture-controlled computers.
Microsoft’s Natal system for Xbox 360, due in fall 2010, is set to bring peripheral-free gesture control to the video game market. The MIT announcement specifically references Natal, claiming that because Natal’s technology depends on small cameras embedded in the display, it does a poor job of tracking movement at short distances and is therefore not a viable technology for combining touchscreen with gesture.
The MIT team claims that the BiDi Screen can provide the smooth transition between touchscreen and gesture that will be necessary for small devices.
I contacted research scientist Henry Holtzman, a member of the four-person team, which also includes PhD candidate Matthew Hirsch, Professor Ramesh Raskar, and visiting researcher Douglas Lanman. I asked Holtzman when the technology will be market-ready and whom, if anyone, they’ve talked to about licensing. His response indicates it could be pretty far off:
“With fabrication facilities costing billions of dollars to construct, innovations in LCD technology usually take many years—sometimes decades—to come to market, even after successful demonstration. Fortunately for us, one of the key enablers for our gestural interface, a wide-area, thin, transparent photo sensor, is already on the LCD manufacturers’ road map for making inexpensive optical multitouch displays.
Our innovation is a major step forward in terms of enabling new capabilities, such as non-touch, 3D gesture recognition, but it takes advantage of the optical multitouch engineering work already in progress. A concerted effort could yield a product on the market in several years.
Licensing has not yet been on our minds—up to now we have been concerned with the fundamental idea, the science behind it, and the engineering to show a prototype. This week at Siggraph Asia is the first time we’re showing the BiDi Screen to the public.”
The cautious response might have something to do with the fact that, according to their announcement, the team does not yet have access to the brand-new LCDs with built-in sensors that are needed to make their invention work. The prototype uses a camera configuration instead, but supposedly demonstrates that the design is conceptually sound.
Even if the BiDi Screen isn’t quite ready for prime time, its potential may be significant. Professor Raskar says that in the course of computing history, “intelligence moved from the mainframe to the desktop to the mobile device, and now it’s moving into the screen.”