The specter of connected car hacking sparks discussion even though there has never been a confirmed case of a malicious attack. The good news is that the most notable car hacks have been research-motivated and conducted by some of the best security experts in the world. Their work offers carmakers insights into reducing the vulnerabilities of their cars. I’ve read every research paper and watched every video presentation related to the five most noteworthy studies on car hacking. Here are the highlights of each study, including the major findings, their suggested actions, and my analysis.
This article is part of the Connected Car Landscape series. You can download a high-resolution version of the landscape from VB Profiles.
First, it is important to understand the basic motives for a hack. Hackers have three main motivations: activism, profit, and challenge.
Activist-motivated attacks, also known as hacktivism, promote a political agenda: usually free speech, human rights, or information technology ethics. Anonymous, whose participants are known for their Guy Fawkes masks, is one of the more famous hacktivist groups in recent years. Attack forms include defacing websites, denial of service attacks (DoS), URL redirecting, and document archiving and distribution (e.g., WikiLeaks). The goal of hacktivism is not to hurt people, and messing with a moving car can do that. So the car is not an ideal vehicle (ahem) for hacktivism.
A high-profile example of a profit-motivated cyberattack is the theft of credit card information from retail platforms. Other, lesser-known profit-based attacks include botnets and phishing. In the case of botnets, the goal is not to take control of a user's actions but to leverage the processing power of the user's computer. Usually, the end user does not even know that the computer has been turned into a so-called zombie, except for the occasional slowdown in performance. This does not necessarily mean that the processing power of cars will become the target of a bitcoin mining network. Mining requires constant, high-bandwidth connectivity and persistent electricity, which current connected cars don't usually offer. In the case of phishing (that is, tricking people into giving up sensitive information such as passwords or credit card numbers), the connected car lacks the interaction that would lead the user to compromise information. Admittedly, a connected car does have a distracted user who may approve something just to get to an app service.
I have spent the past few years talking with a variety of people about the security of connected cars. The financial and activism incentives for car hacks are not obvious. People often respond with emotionally fueled fears about safety and security, but they have a hard time coming up with a scenario that doesn’t sound like it was ripped from a movie script (a disaster if the bus speed drops below 50mph? a distraction while stealing bearer bonds in the Nakatomi vault?). Many scenarios don’t even require a connected car to sabotage or steal a car. Even a man-in-the-middle key fob attack does not require the car to be connected in order to unlock the doors.
There has never been a reported incident of a profit- or privacy-motivated attack on a car, but this is where the more likely black hat hacks could happen. As Apple, Google, and Amazon apps make their way onto automotive infotainment platforms, the car platform becomes a starting point from which to steal credit card numbers and identities. Some black hat hackers who find data leaks may collect private data for future use in other attacks. Considering that the car, like the smartphone, has cameras, microphones, and the location information of your daily habits, this could set the stage for a widespread privacy breach.
Most dramatic and scary car security breaches fall under the challenge category, which includes people who are curious about how a technology works, those who want to do something dramatic for notoriety, and those conducting research. Most of the researchers discussed below were awarded grant money to find security vulnerabilities in cars. Over a year or so, these experts were able to take control of a car, provided they also had prior physical access to install additional hardware.
Top 5 hacks
Here are my picks for the five most compelling connected car hacks of the past six years.
1. A comprehensive attack of mechanics’ tools, CD players, Bluetooth, and cellular radio
This 2010 study conducted by University of California at San Diego and University of Washington computer scientists demonstrated a wide variety of telematics vulnerabilities. While there were several previous studies that addressed hypothetical issues, this is one of the first that provided experimental results of specific attacks.
Read the full paper here: Experimental Security Analysis of a Modern Automobile
Major research findings
- Once the team was able to physically access the car via the media player, diagnostics port, Bluetooth, or cellular, they were able to completely compromise the car.
- The research team could access the systems by simply calling the car, WarGames style.
- Since the telematics system is Unix-based, they were able to get root access and connect the car to an IRC channel.
- Industry and government (SAE, USCAR, US DOT) are responding to these findings.
Researchers’ suggested actions
- Use stack cookies to help detect an attack.
- Do not allow inbound calls. Instead, immediately call back a trusted number.
- Arbitrary ECUs should not be able to issue diagnostic and reflashing commands.
- Commands should only be accepted with some validation, and physical access to the car should be required before dangerous commands are executed.
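The last two suggestions can be sketched as a simple authorization gate: dangerous diagnostic and reflashing commands are refused unless a physical-presence condition is confirmed. The command names and the physical-presence signal below are illustrative assumptions, not the researchers' implementation.

```python
# Hypothetical sketch: gate dangerous diagnostic commands behind a
# physical-presence check. Command names and the physical-presence
# signal are illustrative assumptions.

DANGEROUS_COMMANDS = {"reflash_ecu", "disable_safety_interlock", "clear_firmware"}

def authorize(command: str, physical_access_confirmed: bool) -> bool:
    """Allow routine commands; require confirmed physical access
    (e.g., key present and vehicle stationary) for dangerous ones."""
    if command not in DANGEROUS_COMMANDS:
        return True
    return physical_access_confirmed

# A remote caller cannot reflash an ECU without physical confirmation:
assert authorize("read_dtc_codes", physical_access_confirmed=False)
assert not authorize("reflash_ecu", physical_access_confirmed=False)
assert authorize("reflash_ecu", physical_access_confirmed=True)
```

The point of the sketch is the policy shape, not the mechanism: arbitrary ECUs never get to issue dangerous commands merely by being on the bus.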
This study revealed some of the more astounding varieties of breaches into the car and the lack of authentication required to access the car systems. The study concludes that detection of anomalies in the systems is a more practical approach to security management than prevention and total lockdown. I agree. It is unrealistic to expect impervious code. Computer security is about mitigating risk.
2. Tire pressure monitor systems
In 2010, researchers at the University of South Carolina and Rutgers University successfully compromised tire-pressure monitoring systems (TPMS), which consist of sensors inside a car’s tires that monitor pressure and a wireless antenna. Using low-end and openly available equipment costing about $1,500, the team was able to track a car’s movements and give false tire pressure readings to the dashboard.
Major research findings
- Reverse engineering in order to spoof and eavesdrop, specifically to track the car location, is possible.
- There was no encryption in the TPMS.
- If hackers flooded the tire pressure ECU with packets, they disabled the ECU and prevented the alert from displaying on the dashboard. Even when this happened, however, the car was still driveable.
- They were able to spoof the alert light for no more than 6 seconds.
Researchers’ suggested actions
- Check for conflicting input information. For example, the system reported a low pressure event through the tire pressure ECU, but the PSI reported was normal.
- Use encryption.
This study was one of the first to prove that a remote attack is possible without physical access to the car. At the same time, the researchers noted that this vulnerability is complex to access and manipulate. First, activating location tracking requires the vehicle to pass two checkpoints along the road. Second, the wireless tire sensors communicate infrequently — about once every 60 to 90 seconds. This makes manipulating the system difficult, especially if a vehicle is moving. At highway speeds, the research team could not maintain a warning light spoof beyond 6 seconds. While remote control of an ECU is possible, it is highly limited and does not affect the driveability of the car, which may assuage the general public’s fears.
When I consider the practicality of a malicious attack, I’m skeptical that spoofing alerts is the most compelling method. When your tire pressure gauge alerts you and you do not feel or hear the road in a way that indicates a flat, do you pull over immediately or do you drive to a safe place where you can assess and fix the problem? If you’re like me, you make a mental note to just look at the tires when you get home.
The UCSD/UW study in the first example showed that once the car is compromised, the entire system is compromised. The main actionable item here is that carmakers should use encryption everywhere, since even something as seemingly benign as a tire pressure gauge is a location-based unique identifier that consumers cannot deactivate and that therefore does not have an opt-out option.
3. The DARPA-funded hack of a Toyota Prius and Ford Escape
In 2012, security intelligence experts Dr. Charlie Miller and Chris Valasek received a grant from DARPA to find vulnerabilities in cars. After a year of research, they were able to hack a 2010 Ford Escape and a 2010 Toyota Prius: they took control of the horn, cut the power steering, and spoofed the GPS and dashboard displays.
Read the full paper here: Adventures in Automotive Networks and Control Units
Major research findings
- Spoofing is possible.
- It is possible to disable functions of the car by flooding it with arbitrary CAN packets.
(Suggested actions and analysis included in #4 below.)
4. 2014 follow-up research on remote attacks
In September 2014, Miller and Valasek published another paper, “A Survey of Remote Automotive Attack Surfaces,” in which they present system diagrams of 21 different cars and expose the biggest vulnerabilities. They analyzed all of the computer-based systems, including passive anti-theft systems (PATS), Bluetooth, and lane keep assist systems. They assert that attack surfaces and vulnerabilities, while present, are small for most of these systems.
Read the full paper here: A Survey of Remote Automotive Attack Surfaces
Major research findings
- They believe that Bluetooth is one of the biggest and most viable attack points of a car because of its ubiquity.
- In-car apps and web browser technology present a significant threat, mostly because they offer a familiar attack target that is already understood by those who want to exploit it.
- Their 20 most hackable cars — rated by attack surface, network architecture, and cyber-physical components — span multiple automakers, although there are noticeable recurrences of Land Rover, Toyota (specifically Prius), BMW, and FCA (Jeep, Dodge, Chrysler).
Researchers’ suggested actions
- Since remote attacks happen in multiple stages, they recommend that defense be multi-staged.
- Secure the remote endpoints.
- Make it harder for the attacker to inject CAN messages immediately.
- For attack detection, monitor the rate of ECU messages for a noticeable increase. Miller and Valasek created a device that plugs into the OBD-II port to detect abnormal network traffic patterns and disable all CAN messages, if such patterns are detected.
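The rate-monitoring idea can be sketched as a sliding-window counter per CAN arbitration ID: learn a baseline rate, then flag an ID whose rate jumps well above it. This is a hedged illustration of the approach, not Miller and Valasek's device; the window size and threshold multiplier are assumptions.

```python
# Sketch of rate-based CAN anomaly detection: count frames per ID in a
# sliding time window and flag IDs far above their learned baseline.
# Window size and multiplier are illustrative assumptions.
from collections import defaultdict, deque

class RateMonitor:
    def __init__(self, window_s: float = 1.0, multiplier: float = 3.0):
        self.window_s = window_s
        self.multiplier = multiplier
        self.frames = defaultdict(deque)  # can_id -> recent frame timestamps
        self.baseline = {}                # can_id -> expected frames per window

    def learn(self, can_id: int, frames_per_window: float) -> None:
        self.baseline[can_id] = frames_per_window

    def observe(self, can_id: int, t: float) -> bool:
        """Record a frame at time t; return True if this ID looks flooded."""
        q = self.frames[can_id]
        q.append(t)
        while q and q[0] < t - self.window_s:
            q.popleft()  # drop frames outside the window
        expected = self.baseline.get(can_id)
        return expected is not None and len(q) > expected * self.multiplier

mon = RateMonitor()
mon.learn(0x244, frames_per_window=10)  # normally ~10 frames/sec (assumption)
flood = [mon.observe(0x244, t=i * 0.005) for i in range(100)]  # ~200 frames/sec burst
assert flood[-1]  # the flood is flagged
```

A defensive device like the one described could run such a monitor on the OBD-II port and suppress traffic when an ID trips the threshold.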
For the most effective attack points, the researchers required physical access to the car. In the first study, they had to rip open the dashboard and interior in order to take control. In the second study, the biggest and most likely attack point that they cited was via the Bluetooth infotainment system, but they could not find a way to covertly pair a device without user interaction from inside the car. Most likely, this breach would require some Veronica Mars-style social engineering instead of technical prowess. Both studies illustrate that the systems vary from carmaker to carmaker and even among models and years of the same carmaker. This means that you can’t hack once and deploy that hack everywhere. One of the more important takeaways from the second study is that attacks are detectable — so set up detection systems.
5. The $27 car hack from DEF CON 2013
At DEF CON 21 in August 2013, Alberto Garcia Illera and Javier Vasquez Vidal gave a presentation on how they hacked a car using a device that they built for $27.
Major research findings
- The codes are different for each car.
- By flooding the ECU with data, they could disable the ECU.
- If they could get physical access to install a device on the OBD-II port, they could control the car remotely. Since there were neither specifics nor a demonstration, you may consider this point theoretical.
Read their presentation here: Dude, WTF in My Car?
This hack is for curious do-it-yourself engineers who like the challenge. You can spend many hours reverse-engineering the codes or use an ELM327 adapter with the Torque app for about the same money. If you have a larger budget, you can buy the codes from carmakers. However, the codes are not necessarily accurate, and they change often, year to year and model to model. For the most part, the breaches and discoveries from this study apply to most after-market devices that plug into the OBD-II port. If you're going to plug an after-market dongle into your car's OBD-II port, make sure that the unit has Bluetooth security features and no default PIN code.
What carmakers and suppliers can do
- Air gap. These experiments proved that once one of a car's systems is compromised, hackers can control other systems within the car. Separating the networks mitigates this vulnerability.
- Perform over-the-air (OTA) updates. Push alerts for updates and make automatic updates an option.
  - Tesla fixed their fire issue this way, avoiding the costly conventional recall process.
  - In contrast, while GM performs some OTAs via its OnStar system, it went on record at the 2016 Consumer Electronics Show saying that it would never use OTAs for safety-critical features like brakes and steering. This means that in the event of a safety-critical update, GM will issue an expensive and cumbersome recall of millions of vehicles.
- Use encryption.
- Bounties. Challenge hackers to break your security with a bug bounty. A collaborative policy helps get constructive input and fresh eyes on the systems in a positive, proactive environment.
- Contact info. Have a process for receiving information about found security exploits.
  - Make it easy for a researcher to contact the company privately about the exploit.
  - Have a policy to fix exploits within a specific time period.
  - Update the researcher with progress, especially if it will take longer than the normal time period.
  - Report the exploits publicly and give the researcher credit for finding them.
Enterprise security that specializes in automotive solutions is a nascent category of the connected car sector. A popular experiment across the above studies was to show that flooding an ECU with data packets disables it. Such an attack is detectable by watching for abnormal traffic and data messaging activity on the in-vehicle networks, including the CAN bus. Argus, TowerSec (acquired by Harman), and Karamba offer this anomaly detection and reporting as an automotive cybersecurity solution, and Symantec also has an automotive offering as part of its IoT (Internet of Things) portfolio. Each solution differs by its integration point in the car manufacturing process, from factory level to after-market OBD-II plug-ins. Zero-day vulnerabilities are flaws unknown to the software publisher or carmaker until they are disclosed or exploited, leaving "zero days" to prepare a fix. Vendors that can respond that quickly are rightly called "zero-day heroes."
With regard to connected car security, I don’t want to incite fear or encourage dismissiveness. I want to help people understand that with enough time, resources, and expertise, car hacking is possible at various points in the telematics system. Yet telematics systems vary from carmaker to carmaker and even among models and years of the same car. This makes it more difficult to hack once and deploy that hack everywhere.
I used to sit among the customer support group at IronPort, a company that specialized in email and web security appliances. Our phones were constantly flooded by incidents of malware, phishing, and spam attacks. So I know what it looks like to have a constant threat from the web. Seeing that we have yet to have a reported malicious attack, the car may not be the most compelling target. Still, we need to take precautions, use encryption, and have cybersecurity policies in place; securing our personal data and physical safety depends on it. Over the past three years, I have seen carmakers and suppliers take a more proactive approach by having an internal cybersecurity team. This is a fascinating time, as we witness legacy automobile companies transform into Internet of Things mobility companies.
Liz Slocum Jensen is the founder and CEO of Road Rules. You can track her 190+ company landscape here.