Every few months a new innovation in physical interaction with virtual objects comes along and the Minority Report comparisons begin. But (T)ether, a new project by a group of students at the MIT Media Lab, is one of the first that actually made me believe it.
After watching their demo video (below), you’ll believe it too.
Briefly, what (T)ether does is allow individuals and groups to interact with virtual objects in real time, simultaneously, creating a shared virtual space in which to build, create, and edit objects. Sound simple? The implications are profound.
The (T)ether user interface enables manipulation of virtual objects spatially, with gestures, rather than through actual physical contact with a display device. It’s easier to show than to tell:
That difference is critical, because it enables both interaction with large objects or collections of objects …
… and easy collaboration with other connected devices and users:
And it’s this collaborative aspect that’s most exciting. As the group, composed of Matthew Blackshaw, Dávid Lakatos, Hiroshi Ishii, and Ken Perlin, notes on its project page, “multiple people can edit the same virtual environment.”
As you can see around the four-minute mark in the video below, this enables powerful collaboration. Here’s a screen capture of the video showing two of the team members building and editing a virtual architecture together:
Currently the team is using iPads as its windows on the virtual world. But imagine a Google Glass-style version that seamlessly overlays the virtual onto the real through smart glasses, eliminating the need to physically hold up a device.
Go a step further and envision increased fidelity, resolution, and processing power, and you have a solution that could enable an aircraft mechanic to virtually disassemble an ailing jet engine, or an automotive engineer to visually model how a new engine should fit together. The possibilities are endless.
Here’s the project video in its entirety: