For several years now, full self-driving vehicles have been just around the corner. That’s the promise, anyway, from Tesla’s CEO, Elon Musk, who has tweeted as much on multiple occasions. On an earnings call in January, Musk told investors and analysts that he was “highly confident the car will be able to drive itself with reliability in excess of human this year.”
Of course, a few months ago, Musk admitted that building an autonomous vehicle is harder than he thought. Still, even with that admission, Musk insisted that the technology is coming sooner rather than later.
It would almost be humorous, except that people actually believe him, and that belief has led to more than one fatal collision. Now, the National Highway Traffic Safety Administration (NHTSA) is investigating 11 incidents in which Teslas with Autopilot enabled collided with stationary emergency vehicles. At least one of those resulted in a fatality.
You know, the kind with bright flashing lights meant to alert drivers to their presence? Apparently, at least according to a report in The Verge, the Autopilot software may be programmed to ignore objects that aren't moving. That makes sense when you're talking about signs or buildings, but vehicles with flashing lights stopped along the side of the road are exactly the kind of thing the software should notice.
There’s no question that building a self-driving vehicle is sort of the holy grail of the automotive industry. Tesla isn’t the only company working on it.
On the other hand, Tesla is the only company with a CEO who insists on marketing driver-assist technologies as "full self-driving," even when the car can't drive itself. I know that many enthusiasts will argue over definitions, finding some way to rationalize that Musk is technically right. Except, words matter. Words create expectations, and expectations are everything when it comes to credibility.
I don't know what else a rational person would expect from a feature billed as "full self-driving," but it seems reasonable to assume it means the car is capable of driving itself without interaction from a human. It definitely seems like it should mean that your vehicle isn't going to crash into a state trooper's patrol car.
By the way, I'm not suggesting it won't ever work. I have no doubt there are very smart people at Tesla working on this very problem. I'm just suggesting that it's obviously a very hard problem, and no one expects Tesla to solve it any time soon. That's why it's so confusing that Musk continues to make promises the company can't keep. Perhaps it's meant to create hype and anticipation, but really it's an unforced error that does nothing but erode trust and credibility.
Even Tesla’s head of autopilot software, CJ Moore, has made it clear that Musk’s claim about self-driving capabilities “does not match engineering reality.” In addition, in a memo to California’s Department of Motor Vehicles, Tesla’s general counsel said that “neither Autopilot nor FSD Capability is an autonomous system, and currently no comprising feature, whether singularly or collectively, is autonomous or makes our vehicles autonomous.”
That's a lot different from the company's claim on its website, which allows buyers to purchase "full self-driving capability." It describes that capability as being able to navigate on Autopilot, change lanes automatically, park, and identify and respond to traffic lights and stop signs. It also says it includes a "Full Self-Driving Computer."
The problem is that what Musk is promising doesn’t align with the expectations of Tesla’s customers. It doesn’t even align with what the company is telling regulators because what Musk is promising isn’t reality. At some point, if you can’t deliver, stop making promises you can’t keep.
That's not just a marketing problem. Even though people might simply chalk up the comments to Musk's Twitter personality, or his showmanship, words have real consequences. At a minimum, this many broken promises damage the credibility of Musk and Tesla, something that seems rather important when you're trying to convince people to trust you with technology that literally affects their safety and peace of mind.
In the worst-case scenario, it leads people to place too much trust in something that isn't what it seems. As the NHTSA's decision to investigate shows, the consequences of that are a lot worse than lost credibility.