Get ready for the next generation of Project Tango capabilities in smartphones and possibly wearables.
That future came into view with today’s announcement of a next-generation vision processing unit (VPU) from Movidius, called the Myriad 2. The San Mateo, California company’s Myriad 1 is part of Google’s Project Tango, an experimental project that imparts 3D vision and mapping capability to a smartphone.
The new VPU makes “possible 40 years of computer vision, in a mobile device,” CEO Remi El-Ouazzane told VentureBeat.
The Myriad 2, a programmable, high-performance, ultra-low-power system-on-a-chip, delivers 20 times the processing efficiency of the Myriad 1, measured in computations per watt of power consumed, the company said.
Built on 28-nanometer process technology, the chip is designed to achieve two trillion 16-bit operations per second at an average of less than 500 milliwatts, while supporting as many as six full high-definition, 60-frames-per-second camera inputs at the same time. It can process 13-megapixel images at 48 frames per second or 4K video at 60 fps, and supports processing for scene intelligence.
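Those two figures imply a rough efficiency bound. A quick back-of-the-envelope check, using only the numbers quoted above and taking 500 milliwatts as the upper bound on power draw:

```python
# Rough efficiency implied by the article's quoted figures; an
# illustration only, not additional vendor data.
ops_per_second = 2e12   # two trillion 16-bit operations per second
power_watts = 0.5       # "less than 500 milliwatts", taken as the ceiling
ops_per_watt = ops_per_second / power_watts
print(f"{ops_per_watt:.0e} operations per watt")  # prints "4e+12 operations per watt"
```

In other words, the stated specs work out to at least four trillion 16-bit operations per watt.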
The Myriad Development Kit provides software libraries for computer vision and image signal processing and a reference application for stereo depth extraction.
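To give a feel for what stereo depth extraction involves: one common approach is block matching, where each patch of the left image is compared against horizontally shifted patches of the right image, and the best-matching shift (the disparity) is inversely proportional to depth. The sketch below is plain Python for clarity; it is not the Myriad Development Kit API, which the article does not detail, and the function and parameter names are my own.

```python
# Illustrative sketch of stereo depth extraction via naive
# sum-of-absolute-differences (SAD) block matching. Hypothetical
# names; not Movidius's MDK interface.

def stereo_disparity(left, right, block=3, max_disp=3):
    """Return a 2D list of integer disparities (0 at unmatched borders).

    left, right: 2D lists of grayscale intensities, same dimensions.
    """
    h, w = len(left), len(left[0])
    half = block // 2
    disp = [[0] * w for _ in range(h)]
    for y in range(half, h - half):
        for x in range(half, w - half):
            best_cost, best_d = float("inf"), 0
            # A point at disparity d in the left image appears
            # d pixels further left in the right image.
            for d in range(min(max_disp, x - half) + 1):
                cost = sum(
                    abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
                    for dy in range(-half, half + 1)
                    for dx in range(-half, half + 1)
                )
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y][x] = best_d
    return disp

# Synthetic test pair: the right view is the left view shifted by
# 2 pixels, so interior disparities should come out as 2.
H, W, SHIFT = 7, 10, 2
left = [[(7 * x + 3 * y) % 17 for x in range(W)] for y in range(H)]
right = [[left[y][(x + SHIFT) % W] for x in range(W)] for y in range(H)]
print(stereo_disparity(left, right)[3][4])  # prints 2
```

A real implementation would add subpixel refinement and run on hardware acceleration; the point of a dedicated VPU is doing this kind of per-pixel search at camera frame rates within a mobile power budget.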
This opens up a wide range of use cases that have not been feasible in a mobile device, El-Ouazzane said.
The chip can “help mobile devices cross the chasm from point-and-shoot [cameras] to SLR [single-lens reflex],” he said, offering “optical zoom quality, extraordinary pictures in dark environments, and hyperfast [capture].”
Building on that capability, El-Ouazzane told us, mobile devices with the Myriad 2 can power immersive gaming and augmented reality, and can “recognize objects.”
Imagine “new wearable cameras with 360-degree views,” he said, or ones that can “simultaneously locate and map [their environments], and autonomously navigate in space.”
In other words, he told us, “recognizing a tree” and helping you navigate around it.
Such performance can also enable “a slew of innovations for social robots that can recognize your mood, recognize objects.”
When will we start seeing these kinds of capabilities?
“In the coming 12 months,” El-Ouazzane said. “We are talking about flagship devices” from leading companies.