People these days who let the car drive, thinking they can spring into action, are, I think, underestimating just how cold their cache lines are getting and the major page fault they're going to take when they try to take over.
And I've seen comments from people saying they were letting the car drive itself into a bad situation they could see developing, but didn't jump in to take over right away, in anticipation (effectively betting on the car over their own skill while still realizing they'd have to jump in if the car got it wrong, which is just so incredibly confused).
If you don't pay constant attention, you will never notice when it slips in a bug or security issue.
Too bad that project failed.
Airline pilots aren't supposed to take a nap, and there are occasionally articles about the various things that have gone wrong because the pilots weren't paying attention.
I’m in left lane on highway. Tesla ahead of me but quite a ways away.
I realize as I'm driving that the Tesla is moving quite slow for left-lane driving. And before you say it: yes, there are lots of people speeding in highway left lanes too.
So - I passed on the right rather than tailgate. Look over and see a guy leaning back in his seat. No hands on wheel. Could’ve been asleep. And driving 10-15 mph slower than you’d expect in that lane.
To your point about using FSD the way you do: makes total sense to me. Which implies you would also cruise at the right speed depending on the lane you're in, unlike my example.
I wonder what's taught to new drivers about this sort of situation. My intuitive feeling (driving for almost 30 years) is that you drive with the flow of traffic when traffic is present. I don't see too many left-lane drivers glued to speed limits, but it's obvious when someone is fast or slow.
Is that code for "the Tesla was following the law by driving within the speed limit and I don't find that acceptable" or what?
> I passed on the right rather than tailgate.
... right, since those are the only two options. Tailgating is just one of the potential valid options to choose from after all.
> And driving 10-15 mph slower than you’d expect in that lane.
So not "slower than the speed limit", but rather "slower than you'd expect". Sigh.
Most highways I drive on exhibit a predictable pattern. Slower folks in right lane. Faster folks in left lane. Maybe those slower folks are at the speed limit, or above, or below. Left lane folks somewhat faster.
Should everyone obey the speed limit? Sure! Hard to argue that point.
My observation was a Tesla driving at - let's call it "right lane speed" in the left lane. Maybe slower. Slow enough that you'd soon see a predictable back-up behind the car - some tailgating, brake usage, etc. The stuff that in my view leads to more accidents, swerving, and phantom traffic that occurs when people pile on each other, use brakes excessively, and end up slowing to a crawl.
FWIW: The "is speeding acceptable" question is somewhat resolved by police. I rarely see people pulled over for speeding within the flow of traffic, vs. somewhat swerving in/out or just driving much faster than everyone.
Don't remember the last time I saw an officer pick a car out of a normally flowing left lane to issue just that one driver a ticket.
I'm asking because I feel I must be missing something, inasmuch as to have my hands on the wheel while not controlling the car is an experience with which I'm familiar from skids and crashes, and thinking about it as an aspect of normal operation makes the hair stand up on the back of my neck. (Especially with no obviously described "deadman switch" or vigilance control!)
It's just nice having a 'second set of eyes' in a sense. It's also very useful when driving in unfamiliar cities, where much of my attention would be spent on navigation and trying to recognize markings/signs/light positions that are atypical. FSD handles the minutiae of basic vehicle operation so I can focus on higher-level decisions. Generally, at inner-city speeds, safety and time-to-act are less of an issue and it just becomes a matter of splitting attention between pedestrians, obstacles, navigation, etc. FSD is very helpful in these situations.
I appreciate your thoughtful and detailed response. I'll need to think about it for a while, too. It had not occurred to me to consider the possibility that someone else's FSD might protect me from the general incompetence and unreliability of amateur motor vehicle operators.
(Jumping a light in the dark? Not thinking or learning to navigate by verbal instructions from your satnav or phone, instead of compromising the primary sense you must constantly use to drive without risking manslaughter? I'm sorry, but if this is the standard, I really can't describe it other than it is...to say nothing of your considering safety less important, as you say, in the "inner city" that is my home.)
I don't know what this means.
> Not thinking or learning to navigate by verbal instructions from your satnav or phone, instead of compromising the primary sense you must constantly use to drive without risking manslaughter?
Navigating involves reading street signs, block numbers, and traffic markings. These are all visual elements that can distract from safety monitoring. How many minor accidents result from drivers trying to figure out where they are, or where they need to go?
> I'm sorry, but if this is the standard, I really can't describe it other than it is...to say nothing of your considering safety less important, as you say, in the "inner city" that is my home.
My claim isn't that safety is less important in city driving, it's that driving is far safer due to lower speeds. There's more time to react and lower risk of catastrophic results when driving at 35mph. The challenge for a driver isn't sudden loss of control as you may experience at 65+mph. The city driving challenge is trying to track markings, signage, pedestrians, and parked cars while also navigating and managing the vehicle's basic operation. FSD can track all of that without distraction and leave the driver responsible for more human reasoning tasks.
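A rough back-of-the-envelope illustration of that speed point, using standard kinematics rather than anything Tesla-specific (the friction coefficient here is an assumed value for dry asphalt):

    # Braking distance ~ v^2 / (2 * mu * g); mu = 0.7 is an assumed
    # dry-asphalt friction coefficient, g = 9.81 m/s^2.
    MU, G = 0.7, 9.81
    MPH_TO_MS = 0.44704  # miles per hour -> meters per second

    def braking_distance_m(speed_mph):
        v = speed_mph * MPH_TO_MS
        return v * v / (2 * MU * G)

    print(round(braking_distance_m(35), 1))  # ~17.8 m
    print(round(braking_distance_m(65), 1))  # ~61.5 m, about 3.4x farther

Since braking distance grows with the square of speed, stopping from 35mph takes roughly a third of the distance needed from 65mph, which is the "more time to react, lower risk" claim in numbers.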
You failed, in this case by hastening to cross the intersection as soon as the light came green, to account for the possibility of another driver's error. If you weren't taught to do that, as I was, then the mistake is not entirely your own. It was still a mistake, which you have already acknowledged would have led you into an accident had your vehicle not rescued you.
> There's more time to react and lower risk of catastrophic results when driving at 35mph.
Not for me. You're the one wearing power armor, remember.
Even if we accept your interpretation of the situation as true, you're making the case for FSD. You can think of FSD (or other self-driving solutions) as raising the floor for bad drivers. If I'm a driver with some otherwise dangerous habits (nobody is perfect) then FSD is filling the gaps in my skill.
> Not for me. You're the one wearing power armor, remember.
But this is a joint interaction between the pedestrian and the vehicle. I can't make the pedestrian more aware. I can't give the pedestrian super-human reaction time. I can, however, give those traits to the vehicle. That's a major selling point for autonomous vehicles.
As a pedestrian, I don't need superhuman reaction time, because unlike some I move at speeds a human mind can comprehend. Nor, I promise you, need I be "more aware" - what a frankly foolish thing to say, when there is nothing on Earth even remotely as dangerous to me, day to day, as you and those like you! I assure you, I am about as aware as it is possible to be. I have to be! Look at you.
But this again is a splendid illustration of the problem, for which I again must give my gratitude: the old-school motorcycle lifestyler dingbats were right all along, it turns out, to call cars "cages." You carry yours around in your head all the time, I see.
I was watching the Tesla display on my way back home from LaGuardia airport last week (passenger, not driver).
No accidents or close calls, but it was obvious that I might be focused on 1 or 2 things in that very busy and chaotic environment whereas the car (FSD or otherwise) sees more than 2 things and possibly avoids something on my behalf.
...so you hired a professional to do that job for you, instead of risking the wellbeing of everyone nearby. This was the correct decision!
When I'm driving I know what I'm doing, what I'm planning to do and can scan the road and controls with that context.
Making me guess what the car is going to do at any given time adds complexity to the process: am I changing lanes now? Oh, I guess I am, because the autonomy thinks we should, etc.
I agree that there are situations where what I do as a trained driver is different from augmented cruise.
A good example (or perhaps I'm wrong) is this: I'm in a lane, and a car pulls in front of me, between me and the car further ahead. Now I don't have enough space between me and that new entrant. But instead of using the brakes (unless it's egregious), I bleed off speed until I've made the space I want. Augmented cruise doesn't do that - it hits the brakes.
So, from behind, I think it looks like I'm using my brakes a lot more on augmented cruise than when I'm driving myself. And excessive brake use distracts the driver behind me.
I've got plenty of experience, and (I feel as though) I know most of its failure points. I had to drive my 30-minute commute myself last week, and it was decidedly unfun. I have seen the future and I don't want to go back.
> the self-driving feature had “aborted vehicle control less than one second prior to the first impact”
It seems right to me that the self-driving feature aborts vehicle control as soon as it is in a situation it can’t resolve. If there’s evidence that Tesla is actively using this to “prove” that FSD is not behind a crash, I’m happy to change my mind. For me, probably 5s prior is a reasonable limit.
Also, Tesla routinely claims that "FSD was not active at the time of the crash" in such cases, and they own and control the data, so it's the driver's word against theirs. They most recently used this claim for the person who almost flew off an overpass in Houston because FSD deactivated itself 4 seconds before impact[1]. They used it unironically as an excuse why FSD is not at fault, despite the fact that FSD created the situation in the first place.
[1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...
> because FSD deactivated itself 4 seconds before impact
This isn't accurate. The driver deactivated FSD 4 seconds before impact. Don't get me wrong, the video looks pretty much like FSD wouldn't have been able to do anything better than the driver did, but she didn't give it a chance.
In the BEST CASE, this is a confluence of coincidences: engineering knows about this and leaves it "low prio, won't fix" because it's advantageous for metrics.
In the worst case, this is intentional.
In any case, the "right thing to do" is NOT turn off the cameras just before a collision, and yet it happens.
This is also Safety Critical Engineering 101. Like.... this would be one of the first scenarios covered in the safety analysis. Someone approved this behavior, either intentionally, or through an intentional omission.
Source for autopilot being disabled “seconds before a crash” also disabling cameras? (Sorry if I missed it above.)
How is a car supposed to preempt a situation that is too challenging for it to navigate? Isn't it the driver who should see a situation that looks dicey for FSD and take control?
It seems to me FSD for Tesla is not ready to go into Prod as it is now.
How does a driver judge what is and is not "dicey" from the FSD's perspective?
If you don't have confidence in FSD, then you wouldn't use it in the first place. If you do have confidence, then why (or how often) would you ever take over?
Is there some kind of 'confidence gauge' that the FSD displays in how well it thinks it can handle the situation? If there is/was, perhaps the driver could see it dropping and prime himself to take over.
By anticipating further ahead. If it finds itself in a situation that it can't get out of, that means it should have made more defensive choices earlier, or relinquished control earlier. And if it has neither the reasoning capacity nor the spatial-awareness data to do that, it is not fit for general usage and should be pulled.
I agree you're right that that's what you'd expect to happen.
The former is to be expected. The latter seems likely to potentially make an already dangerous situation worse by suddenly throwing the controls to an inattentive driver at a critical moment. It seems like it would be much safer for the autopilot to continue doing its best while sounding a loud alarm to make it clear that something dangerous is happening.
This is essentially what FSD does, today. When the system determines the driver needs to take over, it will sound an alert and display a take-over message without relinquishing control.
That's still not a good look.
And it does mean that FSD isn't to be as trusted as it is because if the car is putting itself in unresolvable situations, that's still a problem with FSD even if it isn't in direct control at the moment of impact.
For normal incidents, 2 seconds is taken as the response time to be added before corrective action (avoidance, braking) takes effect. I'd expand this for FSD, because FSD implies a lower level of engagement, so you need more time to re-engage with the car.
> If FSD (Supervised) was active at any point within five seconds leading up to a collision event, Tesla considers the collision to have occurred with FSD (Supervised) engaged for purposes of calculating collision rates for the Vehicle Safety Report. This approach accounts for the time required for drivers to recognize potential hazards and take manual control of the vehicle. This calculation ensures that our reported collision rates for FSD (Supervised) capture not only collisions that occur while the system is actively controlling the vehicle, but also scenarios where a driver may disengage the system or where the system aborts on its own shortly before impact.[0]
In theory, that five-second window (sketched in code below) should more than cover the common perception-response times of ~1 to 1.5 seconds used as a rule of thumb for most car accidents. But I'm quite curious what research has been done on the disengagement process as driver-assistance systems return control to the driver, and its impact on driver response times and overall alertness.
If drivers trust the car to handle braking and steering for them, are we really going to see perception-response times that low, or have we changed the behavior being measured? Instead of timing a direct response to a stimulus, we're now including the time required to re-engage their attention (even if they're nominally "paying attention"), transition to full control of the vehicle, and then react to the stimulus they're now barreling down on.
For that matter, this approach makes the implicit assumption that pressing the brake pedal or turning the steering wheel is a sign of now-active control and awareness. Is it? Or could it just be a sort of instinctual reaction? I've been in the passenger seat when a driver slammed on the brakes, only to find myself moving my right foot as if to hit an imaginary brake pedal, even knowing I obviously wasn't the one driving. Hell, I remember my mom doing that back when I was learning to drive, during normal braking.
0. https://www.tesla.com/fsd/safety#:~:text=within five seconds
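To make the quoted attribution rule concrete, here's a minimal sketch in Python; the function name and the interval representation are my own invention, not Tesla's:

    # Hypothetical sketch of the rule quoted above: a collision counts as
    # "FSD engaged" if FSD was active at any point within the five seconds
    # before impact. Names and data layout are illustrative only.
    ATTRIBUTION_WINDOW_S = 5.0

    def collision_counts_as_fsd(impact_t, fsd_intervals):
        """fsd_intervals: list of (start, end) times when FSD was active."""
        window_start = impact_t - ATTRIBUTION_WINDOW_S
        # Any overlap between an active interval and the window counts.
        return any(start < impact_t and end > window_start
                   for start, end in fsd_intervals)

    # FSD disengaged 4 s before impact -> still attributed to FSD:
    print(collision_counts_as_fsd(100.0, [(0.0, 96.0)]))  # True
    # Disengaged 6 s before impact -> not attributed:
    print(collision_counts_as_fsd(100.0, [(0.0, 94.0)]))  # False

By that rule, both the Houston case mentioned upthread (disengagement ~4 seconds before impact) and an abort "less than one second prior to the first impact" would be counted against FSD in the safety report.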
AEB should still work to pump the brakes AFAIK, but auto-steer and cruise control will be disabled, even though the computer and electronics are still perfectly operational, to make the car safer for the passengers and first responders after the event.
EDIT: IIRC the threshold for disengagement is 1s.
> It's been well known for a while now, and it's not to avoid recording being active, it's to avoid having a possibly damaged computer keep working in a likely compromised situation. What happens if the car crashes and flips, AP/FSD has no training on that, and the wheels keep spinning at full speed while first responders try to secure the car?
That sounds like an ass-covering justification. There may be a good reason for triggering some kind of interlock to prevent the problems you outlined, but if their implementation 1) also stopped recording seconds before a crash or 2) they publicly claimed it wasn't responsible since it turned itself off, then Tesla is behaving unethically and dishonestly.
For 1), it's the first time I've heard it from a technical point of view - Tesla's dashcam records continuously for the last 10 minutes, and should save the data on the internal computer in case of a crash and send it back to Tesla if feasible, AFAIR (I'm an owner). IIRC it's not the first case where Tesla claimed the data wasn't available or was corrupted, and then it was actually recovered some time later after pressure from authorities. So I think technically the data is there, but I also believe Tesla is behaving unethically and dishonestly to cover up or delay retrieval.
For 2), I often hear it as FUD, as in: AP/FSD was off, the user just disengaged it by accident, wasn't accustomed to it, or just didn't know how it worked. AFAIR most of the accidents had their data released, and it showed some of the following: the user touched the steering wheel and disengaged autosteer/FSD (whether knowingly or by accident), the user was pressing the accelerator pedal by accident, the user was pressing the accelerator instead of the brake, etc.
Individual tragic anecdotal incidents aside, the vagueness of the article really dilutes the merit of the claims.
- The documentary is from RTS, the main publicly owned media organization in Switzerland. They are not the typical European publicly owned outlet: they are generally pretty well funded (contrary to most), tend to produce good, high-quality content, and tend to be independent and rather neutral (leaning slightly to the left politically).
- The video is in French because, in Switzerland, the public media are divided into three groups associated with the regional languages: RTS for French, SRF for German, and RSI for Italian. That's why you get a German translation.
- They are generally pretty cooperative and open-minded. If one of you wants to submit English subtitles, just contact them; they might accept it (I do not promise anything).
I started out writing a list of European countries with high quality public broadcasters, but the comment started looking silly since the list quickly grew very long.
Also, they don't tout a single party line.
I can say the same about the foreign bureaus of state-owned media thingies like Deutsche Welle and Radio France Internationale, both of which actively rooted for the Romanian presidential candidate seen as closer to German and French interests (I'm talking about the last couple of rounds of Romanian presidential elections).
The quality of European publicly owned media is highly country-specific and varies quite a lot:
- Some of them are critically underfunded, and it shows (a tendency toward cheap sensationalism, superficial investigation, or recycled content).
- Some of them are politically slanted (left or right) or controlled through direct/indirect government involvement.
But all considered, I would say that the average one is still an order of magnitude better, in terms of content quality and independence, than the average private media outlet.
The headline says "How Tesla hid accidents to test its Autopilot", but the actual article has no explanation of (1) how Tesla hid anything or, for that matter, (2) who Tesla hid this information from.
It mashes together a Tesla data leak from 2022 and an unconnected lawsuit from 2026 without ever explaining how the two are connected.
Tesla has a pattern of making deceptive promises and deceptive disclosures but this article doesn't make that case at all.
This is something I find frequently as well, more so with Musk-related things than Tesla ones. Lord knows there are plenty of things to be critical of.
If investigative journalism wants to regain the respect it once had, fewer allegations, backed by concrete claims, serve both the public and faith in the media better than large quantities of vague claims.
I admit if you want to sway public opinion, the latter is more effective, but is also a mechanism that doesn't require alignment with the truth. When that approach is normalised, it opens the door for anyone to shove popular opinion around.
FSD has built this generation's newest children of the magenta line.
Or LLM users.