As technology becomes more integrated into our daily lives, the number of digital vulnerabilities we face sharply increases. Cyberattacks are notoriously common, often targeting small businesses because of their vulnerability and value, and they’re only going to become more common as the sheer number of hackable devices grows.
However, some tech experts claim that our advancing technology is going to make us safer. For example, proponents of self-driving cars are quick to note that more than 90 percent of auto accidents are caused by human error; if cars were entirely self-regulating and immune to human error, they could save tens of thousands of lives every year.
But there’s another potential problem with self-driving vehicles that isn’t present with a human-driven car: if they’re vulnerable in the same way that small businesses, IoT devices, and unsecured networks are, they could be compromised, and the driver, passengers, and nearby bystanders could all suffer the consequences.
So just how vulnerable are self-driving vehicles?
Current Reports
So far, there haven’t been any reports of hackers gaining control of self-driving vehicles, but that’s not necessarily comforting. In the early days of email and e-commerce, there were few cyberattacks too, and that was precisely when those platforms were at their most vulnerable. Only after a round of malicious attacks did companies like Microsoft take action.
Of course, self-driving cars are being developed in a different landscape than email and the first e-commerce websites; developers understand the inherent vulnerabilities the technology could introduce, and self-driving car experts often list safety as their biggest concern.
Types of Attacks
Part of the problem with self-driving cars is the sheer number of potential ways someone could gain access to them, or otherwise interfere with their performance:
- Driver authentication. Most self-driving vehicles will be equipped with authentication technology that prevents anyone other than the owner and administratively approved parties from operating the vehicle. However, these authentication methods could be worked around. Facial recognition, for example, can be fooled by something as simple as a pair of glasses, often without any technical expertise at all.
- On-road controls. In a more technically savvy attack, a malicious party could gain access to the vehicle’s on-road controls. By issuing commands directly to the navigation system, a hacker could conceivably take remote control of the vehicle. This is the most direct and most obvious line of attack, and the one most likely to attract new security features from developers (a minimal sketch of one such safeguard follows this list).
- Performance changes. It’s also possible for hackers to issue commands or make changes that alter how the vehicle performs in a live environment. For example, one Chinese security firm demonstrated the potential to “jam” various sensors on a Tesla Model S, making certain objects practically invisible to its navigation system and thereby making a collision likely. Of course, someone could also make a collision likely by cutting your brakes or letting the air out of your tires.
- Manipulation of existing rules. There will also be ways of abusing the built-in rules that dictate how self-driving cars operate. For example, most self-driving cars will detect and avoid crossing solid lines on the road, but one clever artist figured out he could “trap” a self-driving car inside a circle simply by surrounding it with solid drawn lines. This is likely just the first of many such vulnerabilities.
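The on-road control scenario above hinges on the vehicle acting on commands it should never trust. As a purely illustrative sketch (this reflects no manufacturer’s actual software; the message format, the `sign_command`/`verify_command` helpers, and the key handling are all assumptions), a navigation stack could require every remote command to carry an authentication tag, so that injected commands from someone without the shared key are simply dropped:

```python
import hmac
import hashlib
import json

# Hypothetical shared secret provisioned to the vehicle; in a real system this
# would live in a hardware security module, not in application code.
VEHICLE_KEY = b"example-only-secret"

def sign_command(command: dict, key: bytes = VEHICLE_KEY) -> dict:
    """Attach an HMAC-SHA256 tag to a command before sending it to the vehicle."""
    payload = json.dumps(command, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": command, "tag": tag}

def verify_command(message: dict, key: bytes = VEHICLE_KEY) -> bool:
    """Recompute the tag and reject any command that does not match;
    an attacker without the key cannot forge a valid tag."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

# A legitimate command verifies; a command tampered with in transit does not.
msg = sign_command({"action": "set_speed", "value_kph": 50})
assert verify_command(msg)

msg["payload"]["value_kph"] = 120  # injected change en route
assert not verify_command(msg)
```

Message authentication of this kind addresses only command injection; it does nothing against sensor jamming or painted-line tricks, which is part of why the attack surface listed above is so broad.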
Is Anything Unhackable?
There are several ways to hack or manipulate a self-driving vehicle, but then, it’s possible to hack or manipulate any technological system. You can ward off the majority of attacks with simple steps like choosing a strong password or using one-time pad encryption for your transmissions (sketched below), but with enough expertise, creativity, and/or brute force, it’s possible to break into anything. The question is not whether self-driving cars can be hacked, but whether the hacks are easy enough or inviting enough to become commonplace.
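To make the one-time pad point concrete, here is a minimal sketch (illustrative only, not a production scheme): the key must be truly random, as long as the message, and never reused; under those conditions the ciphertext reveals nothing about the plaintext.

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """One-time pad: XOR the message with a truly random key of equal length.
    The key must never be reused for another message."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XORing with the same key recovers the original message."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ciphertext, key = otp_encrypt(b"unlock front doors")
assert otp_decrypt(ciphertext, key) == b"unlock front doors"
```

In practice, distributing pads as long as the traffic itself is rarely workable, which is why deployed systems rely on conventional ciphers instead; the broader point stands that basic security hygiene stops most opportunistic attacks.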
Motivation
Cyberattacks are usually driven by profit or spite, but would those motivations inspire people to attack self-driving vehicles? The profit angle is plausible with something like ransomware, but spite may be the bigger concern. Truckers and other professional drivers are already beginning to lose their jobs to automation; having lost their livelihood, a displaced driver might feel motivated to forcefully prove that self-driving vehicles are dangerous.
An Ambiguous Future
Of course, fully self-driving cars aren’t yet market-ready, and the top competitors in the field are still scrambling to outdo one another. Models are constantly changing and the underlying tech is constantly improving, so it’s hard to form a definitive conclusion about how vulnerable self-driving cars really are. Nothing is hack-proof, but hopefully, with enough built-in security features and enough consumer attention, the odds of a self-driving car cyberattack will remain low.