Tencent Keen Security Lab: Experimental Security Research of Tesla Autopilot

Introduction

With the rise of Artificial Intelligence, technologies related to Advanced Driver Assistance Systems (ADAS) are developing rapidly in the vehicle industry. Meanwhile, the security and safety of ADAS have also received extensive attention.

As a world-leading security research team, Tencent Keen Security Lab has been conducting continuous research in this area. At the Black Hat USA 2018 security conference, Keen Lab presented the first ever demonstration of remotely compromising the Autopilot[1] system on a Tesla Model S (the attack chain was fixed by Tesla immediately after we reported it)[2].

In subsequent security research on ADAS technologies, Keen Lab has focused on areas such as the security of the AI models in the visual perception system and the architectural security of the Autopilot system. Through in-depth experimental research on Tesla Autopilot, we obtained the following three findings.

Research Findings

Auto-wipers Vision Recognition Flaw

Tesla Autopilot can identify wet weather through image recognition and turn on the wipers when necessary. Our research shows that an adversarial example carefully crafted in the physical world can interfere with the system, causing it to return an incorrect result and turn on the wipers in dry conditions.
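The general technique behind such attacks is the adversarial example: a small, deliberately chosen perturbation of the input that flips a model's decision. The sketch below illustrates the idea with the fast gradient sign method on a toy linear-logistic "rain" classifier; all weights and inputs are made up for illustration and have nothing to do with Tesla's actual vision network.

```python
import numpy as np

# Toy FGSM sketch: a hypothetical linear-logistic classifier that outputs
# P(rain) for a flattened image patch. The weights `w` and the patch `x`
# are invented for illustration only.

rng = np.random.default_rng(0)
w = rng.normal(size=64)            # hypothetical classifier weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rain_prob(img):
    """P(rain) under the toy model."""
    return sigmoid(w @ img)

# A clean "dry" patch: constructed so the rain score is well below 0.
x = -0.1 * np.sign(w)

# FGSM: for this linear model, the gradient of the score w.r.t. the input
# is exactly w, so each pixel is nudged by +eps in the direction sign(w)
# that most increases P(rain).
eps = 0.2
x_adv = x + eps * np.sign(w)

print(rain_prob(x))      # low: the clean patch looks dry
print(rain_prob(x_adv))  # high: a tiny per-pixel change flips the verdict
```

The key point is that each pixel moves by only `eps`, yet the per-pixel nudges all align with the model's gradient, so their effect on the score accumulates and the classification flips.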

Figure 1. Neural Network behind the Tesla Autopilot Auto-wipers

Lane Recognition Flaw

Tesla Autopilot recognizes lanes and assists steering by identifying road traffic markings. Our research proved that by placing interference stickers on the road, we could make the Autopilot system capture this information, reach an abnormal lane judgement, and steer the vehicle into the oncoming lane.
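The failure mode can be sketched in miniature: if a simplified lane detector fits a line to the marking points it perceives, a few spurious "sticker" points angled toward the opposite lane can pull the fitted lane off course. This is purely an illustrative model of the attack, not Tesla's actual lane-detection pipeline.

```python
import numpy as np

# Toy lane estimator: fit a straight line x = a*y + b to detected
# lane-marking points, where y is distance ahead of the car and x is
# lateral offset. All coordinates are invented for illustration.

# Genuine markings: a straight lane line at a constant 1.8 m offset.
y = np.arange(0.0, 30.0, 3.0)
x = np.full_like(y, 1.8)

# Fake "sticker" markings on the road, drifting toward the oncoming lane.
y_fake = np.array([32.0, 36.0, 40.0])
x_fake = np.array([1.2, 0.5, -0.3])

def fit_lane(xs, ys):
    """Least-squares line x = a*y + b; slope a is the lane's lateral drift."""
    a, b = np.polyfit(ys, xs, 1)
    return a, b

slope_clean, _ = fit_lane(x, y)
slope_attacked, _ = fit_lane(np.concatenate([x, x_fake]),
                             np.concatenate([y, y_fake]))

print(slope_clean)     # ~0: the lane runs straight ahead
print(slope_attacked)  # negative: the fitted lane now veers left
```

Because least-squares fitting weighs every detected point, even three well-placed fake markings are enough to bend the estimated lane, and a controller following that estimate would steer accordingly.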

This article has been indexed from Keen Security Lab Blog
