In 2017, we are at the dawn of the third great revolution in end-user devices. First came the PC in the 1990s with Windows, and then arrived the smartphone in 2007 with the iPhone. Now, we are on the cusp of the next big shift in end-user experience: the automobile. This shift is shaping up to be more significant than the previous two because it opens a digital path to understanding the physical world.
The automotive business will grow and change dramatically over the next 5 to 15 years, with 2017 setting the stage for that growth. Gartner forecasts a market of 250 million connected cars on the road by 2020. Much of that growth will come from new data services and offerings, rather than the traditional "bent metal and rubber" of the car itself. Consulting firm McKinsey estimates that connected car data, and the new business models that emerge from it, could be worth $1.5 trillion a year by 2030.
The automobile of 2025 will look quite different than it does today. Cars will essentially become computers on wheels that generate vast amounts of valuable data — data that is only useful if the infrastructure is in place to process it, analyze it, and learn from it. This is why AI will drive the future of the connected car.
What does the car of the future look like?
There are a number of trends shaping the future of cars. One is a massive injection of computing technology, which will fundamentally change vehicle electronics. Beyond today's stereos and infotainment head units, cars will be able to run sophisticated applications. Just as the iPhone demonstrated how a computer can make voice calls, so will the cars of the future demonstrate how computers can move us on a daily basis. Cars will also undergo a revolution in sensors: sensor costs are falling even as sensor capabilities take great steps forward. As a result, cars will eventually contain dozens of short-range sensors that collect oceans of data about their environment.
In addition, connectivity to the cloud will be a core part of the cars of 2025. The machines will no longer be isolated modules that stay the same for the 20-year life cycle of a vehicle. Instead, they will be able to get new downloads from the cloud. All the sensor data will be sent to the cloud or transmitted peer-to-peer using V2V (vehicle to vehicle) or V2I (vehicle to infrastructure), which will make even short-range data available. This data will be collected to form street-level and even city-level views of traffic. Just as with PCs and phones, the cloud will serve as a central repository of information, applications, and processing.
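To make the idea of a "street-level view" concrete, here is a minimal sketch of how an aggregator might fuse V2V-style reports into a per-segment traffic picture. All names (`VehicleReport`, the segment identifiers, `street_level_view`) are hypothetical illustrations, not part of any real V2V or V2I standard.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical V2V-style report: each vehicle broadcasts the road
# segment it occupies and its current speed. An aggregator (cloud or
# roadside infrastructure) fuses reports into a street-level view.
@dataclass
class VehicleReport:
    vehicle_id: str
    segment: str      # illustrative road-segment identifier
    speed_kmh: float

def street_level_view(reports):
    """Average reported speed per road segment: a toy traffic map."""
    speeds = defaultdict(list)
    for r in reports:
        speeds[r.segment].append(r.speed_kmh)
    return {seg: sum(v) / len(v) for seg, v in speeds.items()}

reports = [
    VehicleReport("a", "main-st-100", 12.0),
    VehicleReport("b", "main-st-100", 8.0),
    VehicleReport("c", "5th-ave-200", 55.0),
]
view = street_level_view(reports)
# {"main-st-100": 10.0, "5th-ave-200": 55.0}
```

A real system would add timestamps, map-matching, and privacy safeguards, but the aggregation pattern is the same: many short-range observations roll up into a shared city-level picture.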
However, for these trends to bear fruit, we will first need a revolution in software. All of the technology outlined above will generate tremendous amounts of data. Machine learning and artificial intelligence (AI) will be essential to processing it all. Today, we’ve already seen machine learning and AI make great strides in the capacity of computers to make decisions and understand images. This is just the beginning.
According to a report from IHS Technology, the number of AI systems in vehicles will jump from 7 million in 2015 to 122 million by 2025. AI will become standard and, in doing so, will transform the way humans interact with their cars and vice versa. One way is through infotainment and smarter interaction. AI will power features like voice and gesture recognition, driver monitoring, virtual assistance, and natural language understanding (NLU). Drivers will be able to speak to their cars and have them respond to, and even anticipate, needs.
AI will also be essential in making advanced driver assistance systems (ADAS) a mainstream reality. ADAS and autonomous vehicles require camera-based machine vision systems, radar-based detection units, driver condition evaluation, and sensor fusion engine control units (ECUs) to work. According to IHS, "deep learning" is the key to fully autonomous vehicles. It's what allows them to detect and recognize objects, predict actions, adapt to new road conditions, and more.
How do we get there?
The road to fully autonomous cars will be long, and we are just getting started. While 2016 showed what is possible, we are still years away from reaching the fully autonomous Level 4 car — as defined by the National Highway Traffic Safety Administration (NHTSA) — for the mass market.
In 2017, the industry will hit important milestones as we build critical infrastructure for data collection and create detailed real-time maps for ADAS. Today there are two choices for accomplishing this goal, and we will see a third major alternative emerge this year.
One choice is to deploy highly instrumented cars that take images and record positions of static objects. This so-called “millimeter precision” is needed for precise lane information and directions. This is an extremely expensive option in terms of dollars and time, and it also requires a commitment to ongoing updates so that data does not become stale.
The second choice is to deploy semi-autonomous vehicles to collect data. This requires a new generation of cars with advanced sensors, but few cars will have these sensors in place in 2017.
The third option, coming this year, is to use new technology to collect data from the other, non-autonomous cars already on the road. For instance, detecting abrupt steering changes from a number of cars at the same location could indicate an obstacle. Likewise, detecting wheel slip or active windshield wipers can provide location-specific notification of micro-weather. The power of machine learning is that all of this data can inform the ADAS systems of next-generation cars, as well as provide better models for the cars of the future.
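The crowd-sourcing idea above can be sketched in a few lines: flag a location only when enough distinct cars report the same anomalous event there, so a single car's swerve does not trigger an alert. The record layout, event names, and threshold are illustrative assumptions, not a description of any vendor's actual pipeline.

```python
from collections import defaultdict

# Hypothetical telemetry records: (car_id, location_cell, event_type).
# Event types mirror the signals mentioned in the text:
# abrupt steering, wheel slip, wipers on.
REPORTS = [
    ("car-1", (47.61, -122.33), "abrupt_steering"),
    ("car-2", (47.61, -122.33), "abrupt_steering"),
    ("car-3", (47.61, -122.33), "abrupt_steering"),
    ("car-4", (47.62, -122.30), "wipers_on"),
    ("car-5", (47.62, -122.30), "wipers_on"),
]

def flag_locations(reports, min_cars=3):
    """Flag (location, event) pairs reported by enough distinct cars."""
    seen = defaultdict(set)  # (location, event) -> set of car ids
    for car, loc, event in reports:
        seen[(loc, event)].add(car)
    return {key for key, cars in seen.items() if len(cars) >= min_cars}

flags = flag_locations(REPORTS)
# Only ((47.61, -122.33), "abrupt_steering") is flagged:
# three distinct cars reported it; the wiper reports fall below the threshold.
```

Counting distinct vehicles rather than raw reports is the key design choice: it makes the signal robust to one noisy sensor or one erratic driver.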
The technologies to emerge in 2017 may not realize a fully autonomous vehicle, but they are critically important for laying the groundwork of the future.
John Ludwig is the president of the AI Group at Xevo, a driving automation company.