At this year’s Consumer Electronics Show in Las Vegas I encountered the usual collection of ever-larger flat-screen TVs, voice-activated home appliances, and wearable tech as far as the eye could see. But to my mind, CES 2019 may ultimately be remembered as the year computer vision products went mainstream.
From skin care products and smart refrigerators to underwater drones and emotion-sensing robots, products employing cameras with sophisticated image recognition capabilities were everywhere on the show floor, as well as rolling down the Las Vegas strip. Here are some of the more notable announcements that caught my eye.
Wanted: No drivers
Self-driving cars tend to be the most dramatic computer vision technologies on display at CES, and this year was no exception. BMW, Mercedes-Benz, and Toyota all unveiled new driverless concept cars. But what struck me was how much closer autonomous vehicles are to becoming a roadway reality.
For example, at CES 2018 Honda introduced a self-driving all-terrain concept vehicle designed for search and rescue and other hazardous operations. This year, that concept has matured into a real product called the Autonomous Work Vehicle, currently being field-tested for use in farming and firefighting.
Transdev, the global public transit operator based in Paris, showed an autonomous shuttle now being piloted on the streets of Rouen, France, and Babcock Ranch, Florida.
Among the many vendors displaying enhanced Light Detection and Ranging (LiDAR) technology, Innoviz stood out to me the most. Its solid-state InnovizOne sensor — which won a CES Best of Innovation award — can map 3D objects at distances of up to 250 meters and will be available in BMW’s autonomous vehicles starting in 2021.
Multiple computer vision systems working in concert are what make driverless cars possible: autonomous driving requires LiDAR, radar, and cameras operating together. It’s then up to the car’s AI to combine those inputs, apply mapping and other sensor data, and employ software rules to operate the car safely and autonomously.
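To make the idea of combining sensor inputs concrete, here is a minimal sketch of one small piece of sensor fusion: averaging range estimates for a single tracked object, weighted by each sensor's confidence. The sensor readings, confidence values, and weighting scheme are all illustrative assumptions, not any vendor's actual algorithm (production systems use far more sophisticated techniques such as Kalman filtering).

```python
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float   # estimated range to the object, in meters
    confidence: float   # sensor's self-reported confidence in [0, 1]

def fuse_range(detections: list[Detection]) -> float:
    """Confidence-weighted average of range estimates from
    LiDAR, radar, and camera for one tracked object."""
    total_weight = sum(d.confidence for d in detections)
    if total_weight == 0:
        raise ValueError("no usable detections")
    return sum(d.distance_m * d.confidence for d in detections) / total_weight

# Hypothetical readings for the same pedestrian from three sensors:
readings = [
    Detection(distance_m=24.8, confidence=0.9),  # LiDAR: precise range
    Detection(distance_m=25.1, confidence=0.8),  # radar: robust in rain/fog
    Detection(distance_m=23.5, confidence=0.4),  # camera: range only inferred
]
print(round(fuse_range(readings), 2))  # → 24.67
```

The point of the sketch is the division of labor the paragraph describes: each sensor contributes an estimate with its own strengths, and the fusion layer reconciles them into one value the planner can act on.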
Valeo, a Global 2000 maker of connected mobility solutions, also announced several new products at the show, including Drive4U Remote, which allows an operator hundreds of miles away to take control of a driverless car when traffic conditions warrant.
Secure inside and out
Besides keeping cars safe on the road, computer vision can also be used to keep cars secure while parked.
One of the more interesting CV vendors at the show was Israeli startup UVeye, which deploys cameras and AI for vehicle inspection. The technology started as a way to identify explosive devices embedded in a car’s undercarriage. UVeye’s technology is already in use at embassies and consulates in the Middle East and Africa.
But as vehicles drive over the scanner, it can also identify defects and anomalies, even in cars that have been on the road for years and are caked with mud and grease, thanks to advanced deep learning.
Other UVeye computer vision products can scan the entire car and detect scratches, dents, leaks, rust, or low tire pressure. These devices have been deployed on automotive assembly lines and in rental car agencies.
Other products, like Owl Camera’s HD dashboard cam, can help secure your car from the inside out. Unlike most dashboard cams, the Owl Cam points at the interior of your vehicle. When its algorithms detect that your car is being broken into, the cam captures thieves in the act and streams the video to your phone. If you’re involved in an accident, the Owl Cam will alert live operators, who attempt to communicate with you and can call first responders if needed.
The most impressive — and biggest — product I saw at the show was John Deere’s semi-autonomous combine harvester, a massive $500,000 machine that can harvest grain at a rate of 15 acres an hour. Although the 20-ton harvester is self-steering and can automatically find the optimal route through the rows of crops, it still requires a human driver to keep it from running over obstacles.
The combine uses cameras and AI to analyze grain quality as it’s being harvested. When a grain sample shows too much trash, which can lower the price a farmer can command for the crop, the harvester automatically adjusts the threshing process, using more air to blow the trash away.
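That closed loop — a vision system measures trash in the sample, and the machine adjusts airflow in response — can be sketched as a simple proportional controller. The target trash fraction, gain, and rpm limits below are invented for illustration; John Deere's actual control logic is not public.

```python
def adjust_fan_speed(trash_fraction: float,
                     current_rpm: float,
                     target_trash: float = 0.03,
                     gain: float = 2000.0,
                     min_rpm: float = 600.0,
                     max_rpm: float = 1400.0) -> float:
    """Proportional control sketch: raise the cleaning-fan speed when
    the vision system reports more trash than the target fraction,
    lower it when the sample is cleaner than needed."""
    error = trash_fraction - target_trash          # positive → too much trash
    new_rpm = current_rpm + gain * error           # more air blows trash away
    return max(min_rpm, min(max_rpm, new_rpm))     # stay within fan limits

# Camera reports 8% trash in the grain sample; fan currently at 900 rpm.
print(adjust_fan_speed(0.08, 900.0))  # → 1000.0
```

The design choice worth noting is the feedback structure itself: the camera turns grain quality into a number, which turns the threshing adjustment into an ordinary control problem.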
I was also impressed by John Deere’s Blue River See and Spray machine. It uses computer vision to identify crops, which allows it to spray herbicides only on the weeds. That can reduce the amount of herbicides needed by 90 percent. Training machines to identify thousands of plants must have been a huge undertaking, but advanced prototypes of the sprayers are currently being tested on a few thousand acres.
Skin in the game
There were hundreds of other vendors deploying cameras and AI-driven image recognition across a range of products, but a few stood apart.
For the first time ever, consumer packaged goods conglomerate Procter & Gamble had a booth at CES. While P&G showed off a lot of cool tech — like self-heating razors and auto-sensing fragrance dispensers — to me, the star of the show was its Opte Precision Skincare System. This handheld device uses an AI-driven camera to identify age spots, freckles, and other blemishes, then applies microscopic amounts of serum to erase those spots. (And yes, it really works.)
Three years ago at CES, Samsung unveiled a smart fridge that used internal cameras to take snapshots of its contents and send the images to an app on your phone. This year, the South Korean electronics giant added image recognition to its Family Hub appliance. When you select View Inside on the Hub’s 21.5-inch touchscreen, it not only labels the food on each shelf, it also suggests recipes that can use the ingredients it has identified. That was pretty cool.
Several companies introduced “smart” motorized suitcases that can recognize their owners and follow them around the airport. There were underwater drones that use computer vision to navigate. And it seemed like I couldn’t walk more than 20 feet without stumbling over some kind of robot.
One of the most intriguing was the “emo-robot,” a collaboration between Russian robot manufacturer Promobot and Neurodata Lab. This humanoid-shaped machine analyzes your facial expressions to determine whether you’re happy, angry, sad, disgusted, or surprised, then responds appropriately.
The bot was demonstrating a subset of emotion AI, which analyzes eye movement, voice tones, heart and respiration rates, and gestures as well as facial expressions. The ultimate goal is to develop “emotion as a service,” which can then be applied to a wide range of customer-facing applications.
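The final stage of such a pipeline can be sketched in a few lines: an expression classifier produces a score per emotion, and the robot picks a canned response for the most likely one. The score vector and responses below are made up for illustration; Promobot and Neurodata Lab have not published their implementation.

```python
EMOTIONS = ["happy", "angry", "sad", "disgusted", "surprised"]

# Hypothetical canned responses, one per detected emotion.
RESPONSES = {
    "happy": "Glad to see you smiling!",
    "angry": "I'm sorry. How can I help?",
    "sad": "That looks rough. Want to talk about it?",
    "disgusted": "Was it something I said?",
    "surprised": "Didn't expect that, did you?",
}

def respond(scores: list[float]) -> str:
    """Map the classifier's per-emotion scores to a response
    by selecting the highest-scoring emotion."""
    best = EMOTIONS[scores.index(max(scores))]
    return RESPONSES[best]

# Hypothetical softmax output from the facial-expression model:
print(respond([0.10, 0.05, 0.70, 0.05, 0.10]))  # "sad" wins
```

Real emotion-AI systems would fuse several of the signals mentioned above (voice tone, gestures, physiological data) before this selection step, but the classify-then-respond structure is the same.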
What was missing
In this age of neural networks, computer vision is inarguably reaching a broader range of consumer-facing products every year. But one thing struck me as I wandered between booths: there was a noticeable disconnect between the technology demonstrated at CES and the research featured at CVPR, the leading conference on computer vision and pattern recognition.
Generative techniques are all the rage among the academics at CVPR, but they were nowhere to be found in the commercial applications I saw at CES. Generative Adversarial Networks (GANs) have recently led us down a dark road of deepfakes and malicious photorealistic hallucination. This topic is most certainly on the minds of major tech companies and will only gain more prominence as our generative architectures become stronger and faster.
So, as content authenticity creeps further into the media’s spotlight, at CES 2020 I’d expect to see more computer vision applications that generate artificial content, as well as those that sniff it out.