Presented by Infineon
Imaging is important. Mobile phone companies compete fiercely on camera development, knowing that a better image sensor, or more powerful algorithms to manipulate its output, is among the few product differentiators left in a rapidly maturing market. The resultant progress has been impressive, especially in edge cases such as low-light shooting, selective focus, and video capabilities.
These 2D sensors have also achieved remarkable things in enabling interactions with the 3D world, such as the recognition and classification of objects, and a limited version of augmented reality (AR) that shows what could be possible, if only…
The ‘if only’ here is, of course, ‘if only the camera had real depth data, rather than trying to synthesize it through analysis of a 2D image.’ Real 3D data would, literally, bring another dimension to these applications as well as providing the impetus for more imaging-based innovations.
Fortunately, a new generation of 3D time-of-flight (ToF) sensors has emerged that can offer accurate distance measurements out to 10m, as well as high-resolution mapping of 3D spaces over distances of up to 5m. The sensors exist, so what can we do with them?
Let’s start with the prosaic, such as mapping 3D spaces for realtors, surveyors, and architects. Realtors already use sophisticated, tripod-mounted 3D mapping systems, but wouldn’t it be simpler if they could just whip out a phone and do the same with a walkthrough? Perhaps drones could be fitted with 3D ToF sensors so that they could do autonomous fly-throughs, using their long-range capabilities to navigate the building and then moving in, section by section, to develop detailed volumetric data of the whole space.
The same approach could be applied to ground-based systems, such as service robots and domestic helpers like Amazon’s recently announced Astro. Depth data is key to understanding surroundings: it lets a robot automatically separate foreground from background objects, helping novel AIs segment their environment without being fooled by a flat picture. 3D ToF sensors are also quickly being adopted in the pet industry, for example in pet doors that open only when they detect your pet, preventing wild animals from entering the house.
In general, depth information provides an additional cue for object recognition beyond the illumination gradient between pixels, which can suffer in low-contrast or low-light conditions.
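To make the idea concrete, here is a minimal sketch of how a depth map can separate foreground from background with a simple distance threshold. The data, threshold, and function name are invented for illustration; a real robot pipeline would fuse depth with RGB and handle sensor noise far more carefully.

```python
import numpy as np

def segment_foreground(depth_map, max_depth_m):
    """Return a boolean mask of pixels closer than max_depth_m.

    Illustrative only: real pipelines combine depth with colour data
    and must handle invalid (zero) readings from the sensor.
    """
    valid = depth_map > 0          # 0 often marks "no return" pixels
    return valid & (depth_map < max_depth_m)

# Synthetic 4x4 depth map (metres): an object at ~1 m against a 4 m wall
depth = np.full((4, 4), 4.0)
depth[1:3, 1:3] = 1.0
mask = segment_foreground(depth, max_depth_m=2.0)
print(int(mask.sum()))  # 4 foreground pixels
```

Note that a flat photograph of an object held in front of the camera would produce a near-constant depth map, which is exactly why depth defeats picture-based spoofing.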
What about health monitoring? After what feels like years of Zoom meetings, how about a ToF sensor integrated into a webcam to gently nudge you about your posture or check that you are breathing correctly? It could also remind you how long you have been sitting and suggest a quick break, and warn you when you start peering at the screen from ever shorter distances as the day wears on. This seems like a ‘nice to have’ for white-collar workers, but could be a real boon to the health, safety, and productivity of people working in manufacturing assembly jobs that demand constant close working.
The capability could be expanded into leisure activities such as yoga, dance, and martial-arts training, using depth data from 3D sensors to capture how a user is moving their limbs. Algorithms could then be used to advise users on how to achieve the ideal form for their poses. In online exercise classes, instructors could be shown a stick-man model of each student, enabling them to get a better sense of how they could improve their kinematics. Given that we live in an age in which ‘data is the new oil’, such body-mapping applications could also create large datasets that describe how people of different sizes and shapes move in various contexts.
This sensor-enabled study of body kinematics won’t be limited to indoor exercise. It could be applied to outdoor sports, because the ToF sensors have been designed to be especially good at rejecting the confusing effects of bright sunlight. The sensors also encode the infrared light they use to measure distances, so multiple cameras can work together to get a fuller picture of an athlete’s movements without confusing each other.
Better depth data could also be appealing to the retail sector, which has been working for years to bring as much of the physical shopping experience as possible online. For example, Ikea offers an AR application that enables users to place virtual furniture into their home environments. It works, to an extent, but has trouble judging the distance to real-world objects. This leads to virtual objects overlaying real ones, even when the user wants them placed behind the real object. High-quality depth information will help AR evolve beyond the uncanny valley and make the experience more convincing and immersive.
The Infineon REAL3 ToF sensor uses modulated light to measure the depth of a scene.
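The principle behind modulated-light (indirect) ToF can be sketched with a few lines of arithmetic: the sensor measures the phase shift between the emitted and reflected signal, and distance follows from d = c·Δφ / (4π·f). The 60 MHz modulation frequency below is an assumed, typical value for illustration, not a REAL3 specification.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(phase_shift_rad, mod_freq_hz):
    """Distance from the phase shift of a modulated light signal.

    d = c * phase / (4 * pi * f); the factor 4*pi reflects the
    round trip (2*pi of phase per wavelength, path travelled twice).
    """
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

def ambiguity_range(mod_freq_hz):
    """Maximum unambiguous distance: one full 2*pi of phase."""
    return C / (2 * mod_freq_hz)

# Example with an assumed 60 MHz modulation frequency
print(round(ambiguity_range(60e6), 3))         # ~2.498 m before phase wraps
print(round(itof_distance(math.pi, 60e6), 3))  # ~1.249 m at half a cycle
```

The ambiguity range explains why practical sensors combine multiple modulation frequencies: a lower frequency extends the unambiguous range, while a higher one sharpens precision.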
The fashion retail sector has been experimenting for years with in-store or app-based sizing systems that give customers an up-to-date sense of their measurements. Some of these have been quite elaborate: a Japanese company called Zozo tried sending shoppers a skin suit covered with coded markers to wear during a home sizing session. But wouldn’t it be simpler to use the depth information from a 3D sensor to take the measurements instead?
The beauty industry is also trying to bring its physical shopping experience online, with apps that enable touch-free selection of products, and virtual ways to see how they would look when applied. Again, the depth information provided by a 3D sensor could give these apps a much better understanding of the shape of the user’s face, which would then be used to provide a more accurate image of how the product would look when applied.
In a blending of the real and the virtual that is squarely aimed at social media users and online influencers, companies such as Dress X are providing a marketplace for ‘digital’ clothing that users can ‘wear’ in online images. For a monthly subscription fee, members can use AR to ‘try on’ items of digital clothing over a live image of themselves.
If a user likes an item, they can then pay to have it Photoshopped onto an image of themselves. Since digital clothing doesn’t have to obey the laws of physics, the effects can be stunning. But the AR implementation doesn’t handle depth as well as it could with access to true 3D data, and the Photoshopping process looks as if it could be improved in the same way. Once the technology has improved, it may be that, for some fashion brands at least, physical runway fashion shows will become a thing of the past.
There are myriad other applications that can be enabled by good-quality depth data derived from a ToF sensor, from improving the security of facial-recognition based access controls to monitoring car drivers for drowsiness. The question is, how will you ensure that your next product or service can take advantage of Infineon’s strength in depth?
Learn more about how 3D data and 3D ToF image sensors are impacting a wide range of markets and applications here.
Edmund Neo is Senior Product Manager of Power and Sensor Systems at Infineon Technologies. Rolf Weber is Principal Engineer, Time of Flight Applications at Infineon Technologies.