Sensors were the backbone of so many products at the Consumer Electronics Show in Las Vegas. You could argue, then, that making them smaller, cheaper, and better is a linchpin of the consumer-electronics ecosystem. And PrimeSense hopes to lead the way by making 3D sensing devices that are more efficient, affordable, and smaller.
PrimeSense made the 3D-depth camera sensor chips in Microsoft’s Kinect motion-sensing system for the Xbox 360 in 2010. That system became a huge hit, and now PrimeSense’s next-generation 3D sensors, dubbed Capri, can fit into devices that are 10 times smaller than the current generation of 3D-sensor devices. Capri is so tiny that the finished board is smaller than a stick of chewing gum (pictured left).
“It’s the world’s smallest 3D sensing device,” said Inon Beracha (pictured), chief executive of Tel Aviv-based PrimeSense, in an interview with VentureBeat. “Our second-generation product is going to be embedded in many more devices, from TVs to monitors.”
The new sensor works like the previous one, sensing both depth and color in a three-dimensional space. It can identify people and their body properties, movements, and gestures. It can distinguish objects such as furniture and sense the location of the walls and floor. It uses near-infrared light, which is invisible to the human eye. It sends the light out and then uses an image sensor to read the light that returns from the 3D space to the camera.
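The Kinect-era PrimeSense sensors are widely understood to use structured-light triangulation: the projected near-infrared pattern shifts sideways depending on how far away a surface is, and depth falls out of similar triangles. The sketch below illustrates that general principle only; the function name and all numeric parameters are illustrative assumptions, not PrimeSense specifications.

```python
# Illustrative sketch of structured-light depth sensing (not PrimeSense's
# actual pipeline): a known near-infrared pattern is projected, and the
# shift ("disparity") between where a pattern dot is expected and where
# the image sensor observes it gives depth by triangulation.

def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Depth in meters from projector-camera triangulation.

    focal_length_px -- camera focal length in pixels (assumed value)
    baseline_m      -- projector-to-sensor distance in meters (assumed)
    disparity_px    -- observed pattern shift in pixels
    """
    if disparity_px <= 0:
        raise ValueError("no measurable shift: object at or beyond range")
    # Similar triangles: depth = focal length * baseline / disparity.
    return focal_length_px * baseline_m / disparity_px

# Made-up but plausible numbers: 580 px focal length, 7.5 cm baseline,
# and a 20 px pattern shift put the surface at 2.175 m.
print(depth_from_disparity(580.0, 0.075, 20.0))
```

Because disparity appears in the denominator, precision degrades with distance, which is one reason a sensor like Capri would be tuned separately for short- and long-range operation.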
With Capri, PrimeSense used more advanced manufacturing technology to make the chip smaller, and it improved its algorithms, which include multi-modal 3D sensing. The middleware, or the software that interprets the 3D sensing data, used to run on a powerful computer or game console. Now the processing takes place on the tiny ARM-based processor on the Capri board.
“It’s going to be much easier to embed machine vision in everyday consumer devices,” Beracha said.
The new generation will work at both short and long ranges, as needed. PrimeSense will take that chip, mount it on a small board, and sell it for applications in consumer electronics devices such as PCs, all-in-one PCs, tablets, laptops, mobile phones, TVs, consumer robotics, and more. Samples will be ready by mid-2013. The sensing performance of the device is as accurate as Kinect’s is in controlling 3D games.
Beracha showed off a few applications at CES. In a retail application from Shop Perception, the 3D sensors worked inside store shelves (pictured right). The cameras sensed which objects shoppers reached for on the shelves, how long they inspected them, and whether or not they put them in their shopping carts. Such data gives retailers far richer analytics about a store display and whether it is working, Beracha said.
“It makes the shopping experience more interactive and the retailer understands the dynamics of the shelf better,” he said. “It’s much more intelligent than what happens today.”
Another cool application, pictured left, is Matterport. In beta testing, Matterport uses a 3D-scanner camera to sense the dimensions of a room. You can then upload the images to the cloud. Matterport’s software then visually recreates a digital model of the interior space of your home. You can explore, measure, and share your space using a Web viewer.
“You can bring something into your home, and see what it looks like before you buy it,” he said. “You can remodel your home, and see what it looks like before you really do it.”
Beracha also showed how shopping malls could use 3D sensors in mall directories. You can swipe your hand to change an image on a screen and then engage with it as a touchscreen. He also showed an application (pictured right) from Ayotle, which is making an interactive video projector. You can insert your own image into a two-dimensional projected image and play around with it, as a kind of work of art.
For sure, some of these might be stretching the use of 3D sensors beyond practicality. But they’re imaginative. And, Beracha says, they’re increasingly affordable.