EU report investigates AI cybersecurity risks in autonomous vehicles


The Joint Research Centre (JRC) and the European Union Agency for Cybersecurity (ENISA) have investigated the cybersecurity risks connected to artificial intelligence (AI) in autonomous vehicles and provided recommendations for mitigating them.

“It is important that European regulations ensure that the benefits of autonomous driving will not be counterbalanced by safety risks,” explained JRC director-general Stephen Quest. “To support decision-making at EU level, our report aims to increase the understanding of the AI techniques used for autonomous driving as well as the cybersecurity risks connected to them, so that measures can be taken to ensure AI security in autonomous driving.”

“When an insecure autonomous vehicle crosses the border of an EU member state, so do its vulnerabilities. Security should not come as an afterthought, but should instead be a prerequisite for the trustworthy and reliable deployment of vehicles on Europe’s roads,” added EU Agency for Cybersecurity executive director Juhan Lepassaar.

The AI systems of an autonomous vehicle work non-stop to recognize traffic signs and road markings, detect vehicles, estimate their speed, and plan the path ahead. Beyond unintentional threats such as sudden malfunctions, these systems are vulnerable to deliberate attacks that aim specifically to interfere with the AI system and disrupt safety-critical functions. Painting markings on the road to misguide the navigation, or placing stickers on a stop sign to prevent its recognition, are examples of such attacks. These alterations can lead the AI system to misclassify objects, and subsequently cause the autonomous vehicle to behave in a way that could be dangerous.
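The attacks described above are instances of adversarial perturbation: a small, deliberately chosen change to the input that flips a model's prediction. The sketch below illustrates the principle with a toy linear classifier; the model, numbers, and class names are assumptions for the example, not details from the JRC/ENISA report, and a real sign-recognition system would be a deep network rather than a linear score.

```python
import numpy as np

# Toy "stop sign vs. other" classifier: score = w . x + b,
# predict class 1 ("stop sign") when the score is positive.
# All weights and inputs here are illustrative.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.9, 0.2, 0.4])   # a clean input the model classifies as 1
assert predict(x) == 1

# FGSM-style step: for a linear model, the gradient of the score with
# respect to the input is simply w, so subtracting eps * sign(w) lowers
# the score as fast as possible for a given per-feature budget eps.
eps = 0.5
x_adv = x - eps * np.sign(w)

# Each feature moves by at most eps, yet the prediction flips to 0.
print(predict(x_adv))  # 0
```

The same idea scales to images: a perturbation imperceptible to a human driver (or a physical sticker approximating it) can push a sign across the model's decision boundary.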

To improve AI security in autonomous vehicles, the report makes several recommendations, one of which is that security assessments of AI components be performed regularly throughout their lifecycle. This systematic validation of AI models and data is essential to ensure that the vehicle always behaves correctly when faced with unexpected situations or malicious attacks.

Another recommendation is to establish continuous risk assessment processes, supported by threat intelligence, to identify potential AI risks and emerging threats related to the uptake of AI in autonomous driving.

Finally, the report recommends that proper AI security policies and a security culture should inform the entire automotive supply chain. The industry should embrace a security-by-design approach to the development and deployment of AI functionalities, making cybersecurity a central element of digital design from the beginning.


About Author


Lawrence has been covering engineering subjects – with a focus on motorsport technology – since 2007 and has edited and contributed to a variety of international titles. Currently, he is responsible for content across UKI Media & Events' portfolio of websites while also writing for the company's print titles.
