The dream of autonomous vehicles is that they can avoid human error and save lives, but a new European Union Agency for Cybersecurity (ENISA) report has found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles. Attacks considered in the report include blinding sensors with beams of light, overwhelming object detection systems, malicious activity in back-end systems, and adversarial machine learning attacks introduced through training data or physical-world inputs.
“The attack might be used to make the AI ‘blind’ for pedestrians by manipulating for instance the image recognition component in order to misclassify pedestrians. This could lead to havoc on the streets, as autonomous cars may hit pedestrians on the road or crosswalks,” the report reads. “The absence of sufficient security knowledge and expertise among developers and system designers on AI cybersecurity is a major barrier that hampers the integration of security in the automotive sector.”
The range of AI systems and sensors needed to power autonomous vehicles increases the attack surface, according to the report. To address vulnerabilities, its authors say policymakers and businesses will need to develop a security culture across the automotive supply chain, including third-party providers. The report urges car manufacturers to mitigate security risks by treating the creation of machine learning systems as part of the automotive supply chain itself.
The report focuses in particular on adversarial machine learning, which carries the risk of attacks crafted to be undetectable to humans. It also finds that the use of machine learning in cars will require a continuous review of systems to ensure they haven’t been altered in a malicious way.
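One simple building block of the kind of continuous review the report calls for is verifying that a deployed model file still matches a known-good digest. The sketch below is illustrative only, assuming a file-based model artifact; the function names and the idea of a stored reference digest are this example’s assumptions, not something specified by ENISA.

```python
# Sketch of a basic integrity check: hash a deployed model file and
# compare it against a known-good SHA-256 digest recorded at release time.
# (Illustrative; a real pipeline would also protect the reference digest.)
import hashlib


def sha256_of(path: str) -> str:
    """Stream the file in chunks so large model files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def model_unmodified(path: str, expected_digest: str) -> bool:
    """True if the on-disk artifact still matches the release digest."""
    return sha256_of(path) == expected_digest
```

A check like this only detects tampering with a stored artifact; it does nothing against attacks that never touch the file, such as adversarial inputs at inference time, which is why the report treats integrity monitoring as one layer among several.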
“AI cybersecurity cannot just be an afterthought where security controls are implemented as add-ons and defense strategies are of reactive nature,” the paper reads. “This is especially true for AI systems that are usually designed by computer scientists and further implemented and integrated by engineers. AI systems should be designed, implemented, and deployed by teams where the automotive domain expert, the ML expert, and the cybersecurity expert collaborate.”
Scenarios presented in the report include the possibility of attacks on motion planning and decision-making algorithms and spoofing, like the kind that can fool an autonomous vehicle into “recognizing” cars, people, or walls that don’t exist.
In the past few years, a number of studies have shown that physical perturbations can fool autonomous vehicle systems with little effort. In 2017, researchers used spray paint or stickers on a stop sign to fool an autonomous vehicle into misidentifying the sign as a speed limit sign. In 2019, Tencent security researchers used stickers to make Tesla’s Autopilot swerve into the wrong lane. And researchers demonstrated last year that they could lead an autonomous vehicle system to quickly accelerate from 35 mph to 85 mph by strategically placing a few pieces of tape on the road.
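The attacks above exploit the same basic weakness: many small, coordinated nudges to an input can flip a classifier’s decision even when each individual change is modest. A minimal sketch of that mechanism, using a toy linear classifier in place of a real perception model (the weights, labels, and perturbation budget here are invented for illustration, not drawn from any of the studies cited):

```python
# Toy illustration of a gradient-sign adversarial perturbation (the
# mechanism behind many adversarial examples). The "classifier" is a
# hypothetical linear model, not an actual vehicle perception system.
import numpy as np

# Fixed toy weights standing in for a trained perception model.
w = np.array([0.9, -1.1, 0.4, -0.3, 1.2, 0.8, -0.7, 0.5])


def predict(x: np.ndarray) -> str:
    # Positive score -> "stop sign", negative -> "speed limit".
    return "stop sign" if x @ w > 0 else "speed limit"


# A clean input the model classifies correctly (score = 1.7).
x = np.ones(8)

# Gradient-sign step: for a linear model the gradient of the score
# w.r.t. the input is just w, so each feature is nudged by a small
# bounded amount against it. The nudges aggregate across features.
eps = 0.3
x_adv = x - eps * np.sign(w)   # score drops to -0.07, flipping the label
```

Each feature moves by only 0.3, yet the prediction flips, because the per-feature nudges all push the score in the same direction; with thousands of pixels instead of eight features, the same effect is why physically small stickers or strips of tape can be enough.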
The report was coauthored by the Joint Research Centre, a science and tech advisor to the European Commission. Weeks ago, ENISA released a separate report detailing cybersecurity challenges created by artificial intelligence.