Posted by doener 3 days ago
Have you been in a Waymo? SAE Level 4 is here, and it’s safer than humans [1].
Vastly VASTLY prefer Waymo. It's very good at its mission and is, at minimum, infinitely better than being in an Uber rideshare. I'd rather wait 20 minutes for a Waymo than 5 for any Uber or 0 to use my own car.
Ironically, Waymo got me much more interested in using my city's public transportation offering which is much better than I previously thought.
That said, Tesla FSD v14 is the best autonomous option for a supervised system that you can actually use.
I haven't seen a good criticism of their methodology. If you have one, I'd be curious to hear it.
On a more direct measure, Waymos have had starkly lower rates of fatalities and serious incidents than human drivers on average and, I think, even than humans near their best.
I don't ever want to be inside an AI-driven vehicle that might decide to sacrifice me to minimize other damage.
You mean deaths to multiple other people, do you not? Let's just call a spade a spade here and point out the genuine ethical dilemma.
What's the ratio between "bodies of your own kids" and "other human bodies you have no other connection with" that a "proper" AI controlling a car YOU purchased should be willing to trade, in terms of injury or death?
I think most people would argue that it's greater than 1* (unless you are a pure rationalist, in which case, I tip my hat to you), but what "SHOULD" it be?
*meaning, in the case of a ratio of 2 for example, you would require 2 nonfamiliar deaths to justify losing one of your own kids
I mean deaths the AI predicts for other people, yes.
And I'm not saying I would never choose to kill myself over killing a schoolbus full of children, but I'll be damned if a computer will make that choice for me.
You can't get into a trolley situation without driving unsafely for the conditions first, so companies focus on preventing that earlier issue.
Isn’t this entirely hypothetical? In reality, are any systems doing this calculus? Or are they mimicking humans: avoiding obstacles and shedding speed in a series of rapid-fire decisions?
There's plenty we could talk about: e.g. the failure scenarios of shallow reasoning systems, the serious limitations on the resolution and capability of the actual Tesla cameras used for navigation, the failure modes of LIDAR, etc.
Instead we got "what if the car calculates the trolley problem against you?"
And observationally, it's proof that a staggering number of people don't know their road rules, since every variant of it involves concocting some scenario where slamming on the brakes comes far too late, yet you somehow know perfectly well there's not a preschool behind the nearest brick wall or something.
I remember running some basic numbers on this in an argument, and you basically wind up at: assuming an AI is fast enough to detect a situation at all, it's fast enough that it can almost always stop the car with the brakes; otherwise, no level of aggressive manoeuvring would have avoided the collision either.
Which is of course what the road rules say: you slam on the brakes. Every other option is worse, and gets even worse when an AI can brake sooner and harder, if it's smart enough to even consider other options.
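A rough back-of-envelope in Python illustrates why braking usually wins. All the numbers here are my own assumptions (friction-limited deceleration of ~8 m/s², a 2.5 m lateral shift to clear an obstacle, 50 km/h urban speed), not anyone's actual vehicle data:

```python
import math

def stop_distance(v, a_brake=8.0):
    """Distance (m) to brake to a halt from speed v (m/s) at constant deceleration."""
    return v**2 / (2 * a_brake)

def swerve_forward_distance(v, lateral_shift, a_lat=8.0):
    """Forward distance (m) covered while shifting sideways by lateral_shift metres.

    Assumes constant lateral acceleration and no loss of forward
    speed during the manoeuvre (generous to the swerve option).
    """
    t = math.sqrt(2 * lateral_shift / a_lat)  # time needed for the lateral shift
    return v * t

v = 50 / 3.6  # 50 km/h in m/s
print(f"braking:  stops in {stop_distance(v):.1f} m")
print(f"swerving: travels {swerve_forward_distance(v, 2.5):.1f} m forward, still at full speed")
```

Under these assumptions, a full stop and a one-obstacle-width swerve need roughly the same forward distance, but the swerve arrives there still carrying full speed, which is why "just brake" is the default rule.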
Yeah, there are a shocking number of accidents which basically amount to "they tried to swerve and it went badly".
You can concoct a few scenarios where other drivers are violating the road rules so much as to basically be trying to murder you -- the simplest example is "you are stopped at a light and a giant truck is barreling towards you too fast to stop".
If you are a normal driver, you probably learn about this when you wake up in the hospital, but an autonomous vehicle could be watching how fast vehicles are approaching from behind you. There's going to be a wide range of scenarios where it will be clear the truck is not going to stop but there's still time to do something (for instance, a truck going 65mph takes around 5 seconds to stop, so if it's halfway through its stopping distance, you've got around 2.5 seconds to maneuver out of the way).
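The 5-second figure roughly checks out with basic kinematics (Python sketch below; the 6 m/s² braking deceleration for a loaded truck is my assumption). One wrinkle in favor of the comment: under constant deceleration the first half of the stopping distance is covered quickly, so the time remaining is actually more than half the total:

```python
import math

MPH_TO_MS = 0.44704
v0 = 65 * MPH_TO_MS          # ~29.1 m/s
a = 6.0                      # assumed braking deceleration for a truck, m/s^2

t_stop = v0 / a              # total time to brake to a halt
d_stop = v0**2 / (2 * a)     # total stopping distance

# Speed halfway through the stopping distance: v^2 = v0^2 - 2*a*(d_stop/2)
v_half = v0 / math.sqrt(2)
t_elapsed = (v0 - v_half) / a        # time spent covering the first half
t_remaining = t_stop - t_elapsed     # time left to get out of the way

print(f"stops in {t_stop:.1f} s over {d_stop:.0f} m")
print(f"from half the stopping distance: {t_remaining:.1f} s remaining")
```

With these assumptions you get roughly 4.8 s to stop over ~70 m, and about 3.4 s left from the halfway point, so if anything the "around 2.5 seconds" guess is conservative.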
That does leave you all sorts of room to come up with realistic trolley problems.
But they all require a human (or malicious) driver on one side. The more rule-following AVs on the road, the fewer the opportunities for such trolley problems.
And I'd still argue that debating these ex ante is, while philosophically fascinating, not a practical discussion. I'm not seeing a case where one would code anything further than collision avoidance and e.g. pre-activating restraints.
The typical human preference WRT the trolley problem ("don't take an action which leads to deaths, even if it would save more lives") is also a reasonable -- maybe the only reasonable -- answer to these hypotheticals.
Ie, move against the light to avoid getting rear ended, but not if you're going to run over a pedestrian or cause an accident with another vehicle trying to do so. (Even if getting rear ended would push you into the pedestrian or other car.)
What is the lowest likelihood of your own death you'd find acceptable in this situation?
I would suggest that all but the most narcissistic would have some limit to how many pedestrians they would be willing to run over to save their own lives. The demand that the AI have no such limit—“that the AI will prioritize my life and safety over literally any other concern”—is grotesque.
"Prioritizing my life over every other concern" looks like plowing over pedestrians to get me to the hospital. I dont think you can legally sell a product that promises that.
This is a fair concern. I’m unconvinced it’s even remotely a real market or political pressure.
On the market side, Waymo is constrained by some combination of production and auxiliaries. (Tesla, by technology.) On the political side, the salient debate is around jobs, in large part because Waymo has put to bed many of the practical safety questions from a best-in-class perspective.
I'm not really thinking about when self driving is State of the Art Research. I'm talking about when it becomes table stakes.
Honestly the real truth is I just do not trust tech companies to make decisions that are remotely in my best interest anymore.
I can't even trust tech companies to build software that respects a "do not send me marketing emails" checkbox, why would I ever trust a car driven by software built by the same sort of asshole?
Idk, we solve it then. Motor vehicles kill 40,000 Americans a year [1]. I’m willing to cautiously align with Google and maybe even Tesla if they can take a bite out of those numbers.
As for me, I actually like driving and I'm good at it. I'm not afraid of operating my own vehicle like so many people seem to be.
You just said that you do not care how many people you kill - regardless of whether they are pedestrians, whether they are driving cars or whether they are on the bus. That is what people react to.
Replacing bad other drivers with good autonomous systems is likely a great trade off for you, even if you are in an autonomous vehicle that is eager to sacrifice you if there is an unavoidable incident.
It's not like they sold us leaded gasoline or "healthy tobacco" for decades.
Not all companies do illegal things.
IMO it’s also a distraction to blame it on “capitalism” or some “larger trend” rather than just pointing directly at the company and people responsible.
“The system is broken” line hasn’t worked for years now. Maybe if we stop blaming the system and start blaming the people?
The Koch brothers stopped breaking the law because it was too expensive. Instead they started lobbying to get the laws changed. This is where the idea that the system is rotten comes from.
> Look, there is no way corporations would lie for their own interest. Especially when they spent tens of billions to develop something.
> It's not like they sold us leaded gasoline or "healthy tobacco" for decades.
The point isn't to demonize all corporations, it's to say specifically that a pathology of some megacorporations is broadscale lying to the public about the safety of their products for personal gain.
If there were a significant problem, my liability-only insurance premiums would be higher for the Tesla compared to a non-Tesla. But they are not.
You’re correct inasmuch as we have no evidence there is “a significant problem.” But if Tesla is hiding evidence, as this article suggests, that might just be because lawsuits are still gaining steam.
Why? They only pay out if you’re at fault. And if there aren’t final judgements in a deep pipeline of cases, premiums wouldn’t have a reason to adjust yet.
I am also assuming that a collision involving a Tesla has at-fault determinations that are more accurate than for other brands, given the 6 or 7 cameras that are recording and should make determining fault easier.
Basically, if a Tesla were more dangerous to drive than a Toyota, because it was a Tesla, then insurance companies would be paying out more to insure Teslas, and hence would be charging higher liability-only premiums.
edit to respond to Forgeties79:
> The issue is they are potentially lying. It’s why we are even having this discussion. The numbers could be fraudulent
When your vehicle gets into a collision, no one contacts the auto manufacturer about who was at fault. Suppose two cars collide. The police write a report, collect evidence, maybe the drivers submit their video recordings to the insurer.
But no one is calling Tesla and asking them to determine who was at fault. And if they did, Tesla would say "we never agreed to be liable; the driver should have been paying attention." There is no escaping the fact that if it were costing insurers more to cover liability for a Tesla, they would be asking for higher premiums.
Whether or not Tesla is lying to the government or whoever is irrelevant for the goal of determining if Teslas cause more damage than other vehicle brands.
> if the Tesla was more dangerous to drive than a Toyota, because it was a Tesla, then insurance companies would be paying out more for insuring Teslas
You may be over-indexing on how much work liability insurers do. I have an umbrella policy. It absolutely doesn’t take into account the fact that I ski and fly a plane, for example. At the end of the day, their liability is capped and it’s usually easier to weed out risk by claims history than by running models on small premiums.
And my entire point is I trust the incentives of the insurer to accurately price risk and determine at fault more than a publication that needs clicks.
> And given Tesla is potentially mucking with the data, the exculpatory value of having all those cameras is diminished.
Does the data from Tesla even come into play for an insurer? They need to pay the damaged parties regardless of whether or not Tesla and its software are at fault. For premium pricing purposes, what Tesla does is irrelevant until after Tesla is found liable.
In the meantime, a collision with a Tesla is the same as any other auto brand’s. I don’t think Ford/Toyota/anyone else’s software comes into play. No auto brand picks up the liability for the driver (except Mercedes in some circumstances, I think), so no automaker is in the picture for payment in the event of an individual collision.
Fair enough. I agree with you in the long run. I just don't think we've seen the litigation that will define liability play out yet.
> Does the data from Tesla even come into play for an insurer?
Directly? No. At least, not unless AI actuaries make the work worth their while.
For juries calculating damages? Plaintiffs weighing whether to bring a case? Sure. That, in turn, plays into liability. And that is something insurers care about.
> In the meantime, a collision with a Tesla is the same as any other auto brand’s
In the meantime, yes. If collisions with Teslas predictably result in larger damages than with other brands, you'd expect to see more litigation when a Tesla is involved/suspected at fault, and with that, higher costs.
> No auto brand picks up the liability for the driver
Tesla has been assigned liability already [1].
[1] https://law.marquette.edu/facultyblog/2025/08/jury-awards-24...
And unfortunately, musk has earned people’s default skeptical stance towards him.
I'll get downvoted but just giving you the facts. I'm glad the Autopilot name has been retired. Such a bad name, but maybe a good name because autopilot in planes can't see and avoid obstacles either.
[1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...
Would you go to a driver's funeral and tell their family that um, ackshully it's sparkling autopilot?
What do you think you're adding to the conversation? You're trying to distract from the fact that real, actual people have been actually killed by this.
If "Autopilot" was misleading, "Full Self-Driving" is too?
For some reason you could turn this on when you're not driving on the highway. It doesn't do anything for traffic lights, stop signs, obstacles, etc. because it's just cruise control. It's also included with every vehicle (unlike FSD).
Tesla (or probably mostly Elon) was not selling "adaptive cruise control". It was selling "Autopilot" for $8k (now a subscription, AFAIK), with a pinky promise that "soon" or "next year" or "in two weeks" (jk) you'd essentially set a destination, go to sleep, and wake up at the destination[1].
It's the same as saying "LLM != AI" and arguing that "ChatGPT is not AI - it's a glorified statistics model that is good at creating human-sounding text". Yeah - you and I understand this - but the average guy most likely does not and will get burned by it, because a dozen tech-bros are burning billions of dollars trying to convince everyone that it's a panacea for every problem you can think of.
[1] It's a slight exaggeration, and I won't spend time digging for quotes, but my main point is that's what Tesla is selling to the average guy, not to nerds who can distinguish what's possible, what's working, and what level of driving assist there is.
This terrible statistic can’t just be explained by aggressive driving owners or some other factor like that. Dodge has plenty of aggressive drivers buying their 700HP V8 rear wheel drive vehicles but they have better fatal accident rates than Tesla.
I’m convinced that Tesla makes unsafe cars and covers it up wherever they can.
The crash test safety awards their vehicles have won are clearly not representative of reality.
The self-driving system Tesla offers is only “ahead” of the competition because the competition is unwilling to sell an unsafe system.
Tesla’s fatality statistics don’t share my bias and speak for themselves.
At some point “it’s the driver’s fault” turns into “bad vehicle design” when enough scale is applied. Tesla is not some kind of low-volume niche automaker.
If I write software where the “Save” button is small and grey and the “discard changes” button is bold and blue, it’s not the user’s fault when many of my users lose their data. I was responsible for bad design.
It was Tesla’s choice to remove physical controls, make their screen as distracting as possible, make their cars accelerate faster than they need to despite competing in mainstream vehicle segments, deliver FSD with misleading promises in beta state, remove manual door releases from the vehicle trapping people inside, etc.
Basically the same as Kia. Why are Kias so bad?
Kia sells way smaller and cheaper cars with fewer safety features. Tesla had front-page news at some point saying they were the safest cars ever produced.
Tesla is giving people driving their cars a false sense of security.
> The study's authors make clear that the results do not indicate Tesla vehicles are inherently unsafe or have design flaws. In fact, Tesla vehicles are loaded with safety technology; the Insurance Institute for Highway Safety (IIHS) named the 2024 Model Y as a Top Safety Pick+ award winner, for example. Many of the other cars that ranked highly on the list have also been given high ratings for safety by the likes of IIHS and the National Highway Transportation Safety Administration, as well.
This would affect both driver selection and performance during impact
Slap a ridiculously powerful drivetrain on it and a premium price tag and you have a Tesla
Sorry, I don't understand this. I'm just asking a question. Do you reply to every question with that?
Why are we looking at this from a perspective of making an attempt to exonerate Tesla first? Why not trust the data until a better explanation arises?
Basically you’re just guessing that it’s the drivers’ fault, but that’s not what the data says specifically.
I am not guessing anything but suggesting that you’re making up narratives that make you feel good. I prefer truth and honesty over feeling good.
Even the links given to Snopes and a Tesla employee/executive in this discussion didn’t debunk it.
The Tesla employee just said “nuh uh the denominator is bigger” as if we should believe a non-independent source.
I think it’s hilarious the lengths people will go to give Tesla the benefit of the doubt. Nobody would be questioning a study like this if Ford was the discussion topic. We would all just be saying “oh yeah, that checks out, Ford slaps together garbage vehicles, I’m not surprised.”
I have no feeling about whether it’s true or not. I'm just tired of folks like yourself who have no issue perpetuating bad reporting or misinformation. It is like those comments on news articles, sad.
I don’t know what’s so hard to believe about the study. Tesla’s numbers are pretty similar to other low-performing brands.
https://en.wikipedia.org/wiki/ISeeCars.com#Partnerships
https://x.com/larsmoravy/status/1860100416819855492
Looking for more. tl;dr is that NHTSA publishes accident counts but not mileage. ISeeCars has access to legacy-auto mileage from dealership data but guessed at mileage for Teslas in the period in question. Their methodology was not released, and their estimate was a fraction of the total mileage that Tesla recorded over that period.
I do agree that Tesla could do a much better job with data transparency. But the claims of the ISC report are pretty difficult to reconcile with the crash test ratings they've gotten from many regulators across the world.
This doesn’t show that ISeeCars is biased against Tesla specifically. Certainly their methodology could be flawed, but we don’t really have anyone else debunking them directly, and we have no evidence that it’s a hit piece.
IIHS crash test scores being good are the only counterpoint and those are synthetic in nature: crash test in a controlled environment. They can’t test for things like whether the occupants could get out or whether autopilot hallucinated and got into an accident that would have otherwise not happened.
I know you probably don't know off the top of your head, I'm hoping someone can chime in.
The main take-away for me from that page is that very few manufacturers seem to design for actual safety (only Volvo had good results), and Tesla was angry that a new test had been introduced which feels indicative of a bad safety culture.
Tesla sells too many vehicles for it to be a “self selecting driver population” thing anymore. They sell almost as many Model Ys as Honda CRVs.
I have a hard time believing that driver profile has anything to do with it, and I especially dislike the temptation to explain away the data by making unsubstantiated excuses for the company.
Dodge has better statistics than Tesla and they almost exclusively sell muscle cars.
“That’s it? If you’re gonna die, die with us?”
I don’t like Elon but I also don’t think fiction and misleading stats serve anyone.
It's not an apples-to-oranges comparison.
The lengths people will go to defend Tesla continue to astound me. Can’t we just say that they suck without making excuses for them?
Like this: https://www.euroncap.com/assessments/tesla/model%2B3/1110/
Something has to be flawed or there has to be some bias.
Synthetic controlled environment crash tests can’t test for a lot of fatal and documented problems with Teslas like electric door latches trapping occupants, distracted drivers due to most vehicle functions being screen-based, owners who think the vehicles are more autonomous than they really are due to misleading marketing, and other issues that crash tests just don’t catch.
I.e., They might do well once they’re in a crash but they might get in more crashes per mile leading to more fatalities per mile and a crash test can’t show that.
I can even accept that they get into more crashes per mile because they aren’t road-tripped as frequently as gasoline cars and are used in more high-fatality settings (e.g., stroads have worse fatality rates than highways). Obviously that in particular wouldn’t be Tesla’s fault.
Care-free drivers could be the result of their marketing for sure but ultimately it's on them and this is precisely the bias I'm talking about.
Look, I do drive a Model 3. I like some parts and hate the others. Calling this "Autopilot" should have been illegal. But it does sound quite weird to me to insinuate they aren't safe as cars when European agencies found no safety issues with them.
lol. Those same European agencies you claim find the Tesla to be super safe are the ones mandating physical controls in future vehicles.
https://www.autoblog.com/news/europe-and-china-now-require-p...
Study on the distracting touch screens:
https://www.washington.edu/news/2025/12/16/video-drivers-str...
Eyes off road during level 2 autonomous driving:
https://www.sciencedirect.com/science/article/abs/pii/S00014...
Teslas can do well in synthetic crash tests and still get into more fatal accidents per mile for other reasons. The two concepts aren’t mutually exclusive.
I’d be fine with admitting that it’s because EV owners put fewer highways miles on them. But Tesla’s dismissive and combative reputation on safety along with removing simple controls like turn signal stalks isn’t doing them any favors.
I own a Mazda from their physical controls era (sadly somewhat coming to a close with the 2027 CX-5). The idea that the median driver is an uber driver juggling two phones is not realistic. My Mazda does not allow me to operate CarPlay with the touch screen while I’m driving. This alone stops me from messing with it. I can operate common functions by feel with physical buttons in the vehicle without looking at the screen.
I can operate the climate controls, mirror adjustments, turn signals, gear shifter, windshield wipers, air vents, drive modes, basically everything without looking completely by touch and muscle memory.
As a model 3 owner you have to admit your Tesla is basically an impossible rental car for the average person who has never driven one. They’ll have to take a tutorial on how to shift with a touch screen, how to adjust mirrors, where are the turn signals, how to turn on the windshield wipers, etc.
Now imagine if every car brand was Tesla and they all had completely different touch screen software designs and everything was on the screen. I think Avis and Hertz might go out of business or have to start their own car company to compensate.
Tesla makes unsubstantiated, exaggerated claims about capabilities of their system and directly encourages unsafe behavior. How many other manufacturers encourage test subjects to drive full speed ahead into a concrete divider "to see what happens"?