Tesla’s alleged rogue employee is exactly what Congress is worried about with self-driving cars

Tesla’s lawsuit against former employee Martin Tripp alleges that he hacked computer systems to steal intellectual property, not to harm drivers of the company’s cars.
But the idea that a malicious insider could successfully tamper with software used in the vehicles’ battery-testing process adds fodder to the worst-case scenarios lawmakers have raised about self-driving cars.
In May, the House Financial Services Committee discussed how autonomous vehicles could affect the insurance industry. It was the third Congressional hearing on the safety of autonomous cars in the past year. A bill to create a “Driving System Cybersecurity Advisory Council” within the Department of Transportation was introduced in July 2017 to establish standards and controls over the testing and deployment of self-driving cars. It is one of four bills currently circulating in Congress to address the lack of federal standards regulating the security of the systems that build and operate self-driving cars.
“There are a number of people out there that are somewhat resistant to entrusting their lives with autonomous vehicles,” said Rep. Sean Duffy (R-Wis.) at the most recent hearing.
The incidents described in CEO Elon Musk’s email to employees and the company’s lawsuit against the former employee are jarring because they show how much access insiders have to critical systems of these vehicles, and how difficult it might be to determine whether they are altering code on machines that test the cars.
Cybersecurity professionals have demonstrated over the years how to hack into the infotainment systems of several vehicle brands. These demonstrations have shown that, while it is fairly easy to break into the systems that control dashboard computers, getting deeper into the systems that actually run a vehicle and control its steering, acceleration and braking is much harder. Those computers typically aren’t connected to the internet or otherwise remotely accessible, so an attacker generally needs physical access to the device, which is itself difficult to obtain.
It’s even less likely outside attackers could get access to computers used in vehicle testing.
But insiders have far greater access. Employees may have not only physical access to the critical systems that run manufacturing or program car components, but also the inside knowledge needed to write code that can do meaningful damage to a vehicle.
Tesla has positioned itself as a pioneer of transparent security practices and has invited hackers to probe its systems for weaknesses from the outside. It runs one of the industry’s most active and robust “bug bounty” programs, an organized protocol for allowing outside hackers to test corporate systems. Hackers who find and report security issues can earn from $100 to $10,000 per vulnerability, according to the program’s third-party provider, Bugcrowd.
Incidents like those described in Tesla’s lawsuit show that it is hard, if not impossible, to weed out insider threats, even at very sophisticated technology companies. The conditions that turn a regular employee into a malicious hacker can arise during employment: being passed over for a promotion, an internal dispute, or discovering something about the company or its work that the employee finds objectionable.
Source: Tech CNBC