Researchers have discovered a novel attack that tricks the AI in self-driving cars into disregarding roadside traffic signs. The technique exploits the rolling shutter of the car’s camera, using a light-emitting diode (LED) to deceive the vehicle’s AI. The attack targets camera-based computer vision, which autonomous vehicles depend on to perceive their surroundings accurately.
The attack capitalizes on the fact that CMOS cameras, which are commonly used in cars, can have their color perception altered by the rapidly changing light of fast-flashing LEDs. The effect is loosely analogous to how human color perception shifts under rapidly flashing lights. Unlike charge-coupled devices (CCDs), which capture the entire frame at once, CMOS sensors use an electronic rolling shutter that captures the image line by line. Because the lines of a CMOS image are exposed at slightly different moments, a light source that changes faster than the frame rate can imprint different colors on different lines, distorting the captured picture.
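The stripe mechanism can be illustrated with a small simulation. The sketch below is not the researchers' code; the row count, per-line readout time, and LED flicker frequency are illustrative assumptions. It models a rolling shutter exposing one row at a time while an LED alternates between red and green: each row samples the light at a different instant, so a single frame ends up banded with many colored stripes.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the study).
ROWS = 480            # number of image rows, exposed one after another
LINE_TIME_S = 30e-6   # readout time per row (rolling shutter)
LED_FREQ_HZ = 2_000   # LED flicker rate, far above human flicker fusion

def led_color(t: float) -> np.ndarray:
    """RGB output of the LED at time t: red for one half-cycle, green for the other."""
    phase = (t * LED_FREQ_HZ) % 1.0
    return np.array([1.0, 0.0, 0.0]) if phase < 0.5 else np.array([0.0, 1.0, 0.0])

def capture_frame(start_t: float = 0.0) -> np.ndarray:
    """Simulate a rolling-shutter capture of a uniform white surface lit by the
    flickering LED. Each row is exposed at a different time, so the frame
    comes out banded with alternating colored stripes."""
    frame = np.zeros((ROWS, 3))
    for row in range(ROWS):
        frame[row] = led_color(start_t + row * LINE_TIME_S)
    return frame

frame = capture_frame()
# Count the color transitions between adjacent rows, i.e. stripe boundaries.
stripe_boundaries = int(np.count_nonzero(np.any(np.diff(frame, axis=0) != 0, axis=1)))
```

With these parameters the full readout takes about 14 ms while the LED switches color every 0.25 ms, so dozens of stripe boundaries appear in a single frame even though a human observer would see steady light.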
Despite this drawback, CMOS sensors are widely adopted across camera types, including those in vehicles, because they offer a good balance between image quality and cost. Notably, Tesla and other vehicle manufacturers use CMOS cameras in their vehicles.
In a recent study, researchers highlighted this vulnerability as a practical risk for self-driving cars. By manipulating the input light source, an attacker can paint differently colored stripes onto the captured image, misleading the computer vision system’s interpretation of the scene. The researchers conducted experiments using LEDs to create a flickering light environment, and also observed how object detection was disrupted when a laser was directed at the camera lens.
Unlike previous studies that focused on single-frame tests, this study aimed to simulate a continuous, stable attack in a controlled environment. To achieve this, the researchers flashed an LED in close proximity to a traffic sign, projecting controlled, fluctuating light onto it. Although the flicker was too fast for the human eye to perceive, it introduced colored stripes in the camera image, causing the traffic sign to be misinterpreted.
For the attack to effectively misguide the self-driving system into an incorrect decision, the same misclassification must persist across multiple consecutive frames. If the attack is unstable, the system may detect the anomaly and activate fail-safe measures, such as switching to manual driving mode.
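The consistency requirement can be sketched as a simple temporal filter. This is a hypothetical plausibility check, not any vendor's actual implementation: a per-frame sign label must repeat for N consecutive frames before the system acts on it, and disagreement within the window triggers a fail-safe.

```python
from collections import deque

class SignTracker:
    """Hypothetical multi-frame consistency check for traffic-sign detections."""

    def __init__(self, required_streak: int = 5):
        self.required_streak = required_streak
        self.history: deque = deque(maxlen=required_streak)

    def update(self, label: str) -> str:
        """Feed one per-frame classification; return the resulting action."""
        self.history.append(label)
        if len(self.history) < self.required_streak:
            return "wait"            # not enough evidence yet
        if len(set(self.history)) == 1:
            return f"act:{label}"    # label was stable across N frames
        return "fail_safe"           # unstable detections: hand back control

tracker = SignTracker(required_streak=3)
results = [tracker.update(l)
           for l in ["stop", "stop", "stop", "speed_30", "stop"]]
# results: ['wait', 'wait', 'act:stop', 'fail_safe', 'fail_safe']
```

Under this model, a flickering-LED attack only succeeds if it forces the same wrong label in every frame of the window; a single inconsistent frame drops the system into the fail-safe branch, which is exactly why the researchers emphasized attack stability.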