Get ready for the next generation of Project Tango capabilities in smartphones and possibly wearables.
That future comes closer with today's announcement of a next-gen vision processing unit (VPU) from Movidius, called Myriad 2. The San Mateo, California company's Myriad 1 is part of Google's Project Tango, an experimental project that imparts 3D vision and mapping capability to a smartphone.
The new VPU makes “possible 40 years of computer vision, in a mobile device,” CEO Remi El-Ouazzane told VentureBeat.
The company said the Myriad 2, a programmable, high-performance, ultra-low-power system-on-a-chip, delivers 20 times the processing efficiency of the Myriad 1, measured in computations per watt of power consumed.
The 28-nanometer chip is designed to achieve two trillion 16-bit operations per second while consuming less than 500 milliwatts on average, supporting as many as six full high-definition, 60-frames-per-second camera inputs at the same time. It can process 13-megapixel images at 48 frames per second or 4K resolution at 60 fps, and supports processing for scene intelligence.
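Those stated figures imply a simple efficiency number. A quick back-of-the-envelope sketch, taking the specs above at face value and assuming power sits at the 500-milliwatt ceiling:

```python
# Efficiency implied by the Myriad 2 figures quoted above (stated specs,
# not an independent benchmark).
ops_per_second = 2e12   # two trillion 16-bit operations per second
power_watts = 0.5       # assumed worst case: 500 milliwatts average

efficiency = ops_per_second / power_watts  # operations per watt
print(f"{efficiency:.0e} ops per watt")    # prints "4e+12 ops per watt"
```

That works out to roughly 4 tera-operations per watt at the assumed power draw, which is the kind of budget an always-on phone camera pipeline would require.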
The Myriad Development Kit provides software libraries for computer vision and image signal processing and a reference application for stereo depth extraction.
This opens up a wide range of use cases that have not been feasible in a mobile device, El-Ouazzane said.
The chip can “help mobile devices cross the chasm from point-and-shoot [cameras] to SLR [single-lens reflex],” he said, offering “optical zoom quality, extraordinary pictures in dark environments, and hyperfast [capture].”
Building on that capability, El-Ouazzane told us, mobile devices with the Myriad 2 can be used for immersive gaming, augmented reality, and the ability to “recognize objects.”
Imagine “new wearable cameras with 360-degree views,” he said, or ones that can “simultaneously locate and map [their environments], and autonomously navigate in space.”
In other words, he told us, “recognizing a tree” and helping you navigate around it.
Such performance can also help “a slew of innovations for social robots that can recognize your mood, recognize objects.”
When will we start seeing these kinds of capabilities?
“In the coming 12 months,” El-Ouazzane said. “We are talking about flagship devices” from leading companies.