Former head of Uber's autonomous driving division personally involved in a Tesla FSD accident: "It crashed suddenly while I was driving, and my child was in the back seat"


IT之家 March 18 news: Mozilla Chief Technology Officer Raffi Krikorian, formerly head of Uber's autonomous vehicle division, published a long essay in The Atlantic on March 17 describing a serious crash that occurred while he was driving a Tesla Model X in FSD mode, and analyzing the core problems with Tesla FSD from a professional perspective.

Krikorian previously led Uber’s autonomous driving team, and was responsible for training safety drivers on how to intervene in a timely and correct manner when the system fails. During the two years he led the department, Uber’s early pilot projects maintained a record of zero injuries.

The accident happened on an ordinary Sunday trip. Krikorian was driving his son to a Boy Scout event, traveling along a residential street in the Bay Area that he had driven hundreds of times before. The Tesla was in FSD mode; the system drove smoothly for a while until the unexpected occurred.

When the Model X entered a curve, FSD seemed to suddenly lose its sense of direction. The steering wheel shook violently without warning, and the vehicle began to slow. Krikorian immediately grabbed the wheel, but it was too late to change the outcome. The vehicle crashed into a concrete wall and was totaled. Krikorian suffered a concussion and a stiff neck, and his headache lasted for days. Fortunately, the child in the back seat was unharmed.

Despite his top-tier professional credentials, FSD still "tricked" him. Krikorian wrote that at first he used FSD only on highways, where lane markings were clear and traffic patterns predictable. As he grew familiar with it, he began using it on ordinary roads as well, found the results good, and gradually made it a habit.

Before the accident, his hands had remained on the steering wheel at all times, staying alert as Tesla requires. He pointed out that FSD had in effect "trained" him to trust it. After the crash, the name on the insurance report was his, not Tesla's. Under the current legal framework, this is a common feature of all FSD accidents: the Tesla FSD system is classified as Level 2, so the driver bears full responsibility at all times.

Krikorian also raised a sharp question about how Tesla handles data. The vehicle continuously records the driver's hand position, reaction time, and gaze trajectory. After an accident, Tesla often uses this data to shift responsibility onto the driver, yet drivers who request their own data typically receive only fragments. In a landmark wrongful-death case in Florida, the plaintiffs were forced to hire a third-party hacker to recover key evidence from the crashed vehicle's chip, while Tesla claimed the data could not be found.

Krikorian also analyzed a fundamental flaw in "supervised" autonomous driving. His core argument: Tesla requires humans to supervise a system that is effectively designed to make that supervision feel unnecessary. As he put it, an unreliable machine keeps people alert, and a perfect machine needs no supervision; but an almost-perfect machine creates a trap, earning so much of the driver's trust that they stop supervising. This is the common thread in nearly all Level 2 driver-assistance accidents today: until a crash actually happens to them, people insist the assistance features are perfectly reliable, with some even falling asleep at the wheel.

Psychologists call this phenomenon "vigilance decrement": supervising an almost-perfect system for long stretches becomes tiresome, and that weariness leads to mind-wandering and lapses in attention. After an assistance system disengages, drivers often need 5 to 8 seconds to refocus; in an emergency, there is simply not enough time to react and take over.

Krikorian cited a study by the Insurance Institute for Highway Safety (IIHS): after just one month of using adaptive cruise control, drivers became more than six times as likely to look at their phones. Although Tesla warns FSD users not to become complacent, smooth performance in 99% of scenarios is precisely what breeds that complacency.

He also cited two widely known accidents to illustrate how this "unreliability" manifests. In the 2018 Mountain View crash, in which Apple engineer Walter Huang was driving a Tesla, there were six seconds of warning before the car veered into the concrete median barrier, yet he never touched the steering wheel. In the Uber crash the same year in Tempe, Arizona, sensors detected a pedestrian 5.6 seconds before impact, but the safety driver looked up less than one second before the collision.

In his own accident, Krikorian did take action. But he had to overcome trust inertia built up over months and switch from a passenger mindset back to a driver's within the final second, which is all but impossible. The vehicle logs confirm that he turned the steering wheel, yet he could not complete the correction in the one second before impact.

By Krikorian's account, Tesla first got the driver to like FSD, then eroded the driver's vigilance with months of smooth performance, fostering psychological reliance on an assistance feature that is not actually 100% reliable. Finally, when things go wrong, Tesla invokes its terms of service to shift responsibility to the driver: Tesla takes the credit when FSD works, and the driver bears full responsibility when it fails.

Krikorian also pointed to a counterexample from BYD (002594). In July 2025, BYD announced it would cover accidents caused by its automated parking feature, with no insurance claim required and no effect on the driver's record. Although the number of such cases is limited, it shows that sharing responsibility between automakers and drivers is a viable option, not an impossibility.
