Posted by mhb 4 days ago
And here's one of Elon's mentions (he also has talked about it quite a bit in various spots).
https://xcancel.com/elonmusk/status/1959831831668228450?s=20
Edit: My personal view is that LiDAR and other sensors are extremely useful, but I worked on aircraft, not cars.
- cost (no longer a problem)
- too much code needed and it bloats the data pipelines. Does anyone have any actual evidence of this being the case? Like yes, code would be needed, but why is that innately a bad thing? Bloated data pipelines feels like another hand-wave when I think if you do it right it’s fine. As proven by Waymo.
Really curious if any Tesla engineers feel like this is still the best way forward, or if it’s just a matter of having to listen to the big guy, Musk.
I’ve always felt that relying on vision only would be a detriment, because even humans with good vision get into circumstances where they get hurt because of temporary vision hindrances. Think heavy snow, heavy rain, heavy fog, or even just cresting a hill at a certain time of day when the sun flashes you.
I would argue that yes, we do use vision but we get that "lidar depth" from our stereo vision. And that used to be why I thought cameras weren't enough.
But then look at all the work with gaussian splatting (where you can take multiple 2d samples and build a 3d world out of it). So you could probably get 80% there with just that.
The ethos of many Musk companies (you'll hear this from many engineers that work there) is simplify, simplify, simplify. If something isn't needed, take it out. Question everything that might be needed.
To me, LIDAR is just one of those things in that general pattern of "if it isn't absolutely needed, take it out" – and the fact that FSD works so well without it proves that it isn't required. It's probably a nice to have, but maybe not required.
You're listening to the road and car sounds around you. You're feeling vibration from the road. You're feeling feedback through the steering wheel. You're using a combination of monocular and binocular depth perception; plus, your eyes are not fixed-focal-length cameras. You're moving your head to change the perspective you see the road at. Your inner ear is telling you about your acceleration and orientation.
* someone parking carefully, misjudges depth perception, bumps an object
* person driving at night, their eyes failed to perceive a poorly lit feature of the road/markings/obstacles
* person driving and suddenly blinded by bright object (the sun, bright lights at night)
* person pulling out in traffic who misinterprets their depth perception and therefore misjudges the speed of approaching traffic
* people can only focus their eyes at one distance at a time, and it takes time to refocus at a different distance. It is neither unsafe nor unexpected for humans to check their instruments while driving, but it can take the human eye hundreds of milliseconds to refocus under normal circumstances. If you look down, focus, look back up, and refocus as quickly as you can at highway speed, you will have travelled quite a long distance.
These types of failures happen not as a result of poor decision making, but of poor perception.
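The refocus point above is easy to quantify. A back-of-the-envelope sketch, where the 500 ms refocus time is an assumed, illustrative value within the "hundreds of milliseconds" range:

```python
# How far does a car travel while the driver's eyes refocus?
# The 500 ms refocus time is an assumed, illustrative value.

def distance_traveled(speed_kmh: float, refocus_ms: float) -> float:
    """Distance covered (meters) during a refocus delay."""
    speed_ms = speed_kmh * 1000 / 3600   # convert km/h to m/s
    return speed_ms * refocus_ms / 1000  # convert ms to s

# At 110 km/h, a 500 ms refocus covers roughly 15 m, several car lengths.
print(round(distance_traveled(110, 500), 1))  # 15.3
```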
However, there is also a lot of interaction between our perceptual system and cognition. Just for depth perception, we're doing a lot of temporal analysis. We track moving objects and infer distance from assumptions about scale and object permanence. We don't just repeatedly make depth maps from 2D imagery.
The brute-force approach is something like training visual language models (VLMs). E.g. you could train on lots of movies and be able to predict "what happens next" in the imaging world.
But, compared to LLMs, there is a bigger gap between the model and the application domain with VLMs. It may seem like LLMs are being applied to lots of domains, but most are just tiny variations on the same task of "writing what comes next", which is exactly what they were trained on. Unfortunately, driving is not "painting what comes next" in the same way as all these LLM writing hacks. There is still a big gap between that predictive layer, planning, and executing. Our giant corpus of movies does not really provide the ready-made training data to go after those bigger problems.
We often greatly underestimate / undervalue the role of our ears relative to vision. As my film director friend says, 80% of the impact in a movie is in the sound.
https://waymo.com/blog/2024/08/meet-the-6th-generation-waymo...
This company claims their LIDAR works conservatively at 250 m, and up to 750 m depending on reflectivity.
https://www.cepton.com/driving-lidar/reading-lidar-specs-par...
It's not only failing, it's causing false positives.
Sufficient to build something close to human performance. But self driving cars will be held to a much higher standard by society. A standard only achievable by having sensors like LiDAR.
Whether that's worth completely throwing away LiDAR is a different question, but your argument is just obviously false.
They also have several cameras all around providing constant 360° vision.
Now you might say "use a depth model to estimate metric depth", but spend 5 minutes thinking about why a magic math box that pretends to recover real depth from a single 2D image is a very, very sketchy proposition when it has to be correct for emergency braking rather than for some TikTok bokeh filter, and you will see that doesn't get you far either.
The reports that Tesla submits on Austin Robotaxis include several of them hitting fixed objects. This is the same behavior that has been reported on for prior versions of their software of Teslas not seeing objects, including for the incident for which they had a $250M verdict against them reaffirmed this past week. That this is occurring in an extensively mapped environment and with a safety driver on board leads me to the opposite conclusion that you have reached.
But I think costs were just part of the reason why Elon decided against Lidar. Apparently, they interfere with each other once the market saturates and you have many such cars on the same streets at the same time. Haven't heard yet how the Lidar proponents are planning to address that.
They don’t focus on safety or effectiveness except to say that vision should be ‘sufficient’. Which is damning with faint praise imho.
If that link was meant to argue that the removal of sensors makes perfect sense, I have to point out that anyone who reads it would likely have their negative viewpoint hardened. It was done to reduce cost (back when the sensors cost thousands of dollars) and out of a ridiculous desire by Musk for minimalism. It’s the same desire that removed the indicator stalk, I might add.
I assume Musk, et al are acting in best faith in trying to find the right compromises.
The reasoning was simply that LIDAR was (and incorrectly predicted to always be) significantly more expensive than cameras, and hypothetically that should be fine because, well, humans drive with only two eyes.
Musk miscalculated on 1) cost reduction in LIDAR and 2) how incredible the human brain is compared to computers.
Having similar sensors certainly doesn't guarantee your accidents look the same, so I don't think your logic is even internally sound.
One of Udacity's first courses was on self-driving, taught by Sebastian Thrun who later cofounded Waymo. He went through some Bayesian math that takes a collection of lidar points, where each point contributes to a probabilistic assessment of what's really going on. It's fine if different points seem to contradict each other, because you're looking for the most likely scenario that could produce that combined sensor data. Transformers can do the same sort of thing, and even with different sensor types it's still the same sort of problem.
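A hedged sketch of that Bayesian idea, with a made-up Gaussian noise model and a toy hypothesis set: each noisy lidar return updates a posterior over candidate obstacle distances, so contradictory points don't break anything; they're simply outvoted by the scenario most consistent with the combined data.

```python
# Toy Bayesian fusion of noisy lidar range returns. The Gaussian
# likelihood model and all numbers are illustrative assumptions.
import math

def gaussian(x, mu, sigma=1.0):
    """Unnormalized Gaussian likelihood of measurement x given truth mu."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def posterior(hypotheses, prior, measurements, sigma=1.0):
    """Sequentially update and normalize a posterior over hypotheses."""
    post = list(prior)
    for z in measurements:
        post = [p * gaussian(z, h, sigma) for p, h in zip(post, hypotheses)]
        total = sum(post)
        post = [p / total for p in post]
    return post

hyps = [10.0, 20.0, 30.0]                 # candidate obstacle distances (m)
prior = [1 / 3, 1 / 3, 1 / 3]             # uninformative prior
returns = [19.2, 20.7, 20.1, 21.0, 9.8]   # noisy points, one outlier near 10 m
print(max(zip(posterior(hyps, prior, returns), hyps))[1])  # 20.0
```

Even with the outlier at 9.8 m, the posterior concentrates on the 20 m hypothesis, which is the "most likely scenario" behavior the course described.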
The response to the challenge shouldn't be whittling down your sensor-suite to a single type, but to get good at sensor fusion.
We have lots of evidence of similar strategies being used in other domains, this seems like an especially life-critical domain that ought to have high rigor and standards applied.
It is pretty incredible but people will (rightly so?) hold automated drivers to an ultra high standard. If automated driving systems cause accidents at anywhere near the human rate, it'll be outlawed pretty quickly.
This is evidently false. Robotaxi crash rates exceed human drivers', but there's not an effective regulatory agency to outlaw them!
https://futurism.com/advanced-transport/tesla-robotaxis-cras...
Why is it clearly false? It might be false, but clearly? I would definitely like to see evidence either way.
> I think it's far more likely that humans don't report most minor collisions to insurance, and that both Robotaxis and Waymo are safer than human drivers on average.
That sounds like you are trying to find reasons to get the conclusion you want.
If you go to the NHTSA's page regarding their Standing General Order[2] and download the CSV of all ADS incidents[3], you can filter where the reporting entity is Waymo and find 520 rows. If you filter where the vehicle was stopped or parked, you'll find 318 crashes. If you scan through the narrative column, you'll see things like a Waymo yielding to pedestrians in a crosswalk and getting rear-ended, or waiting for a red light to change and getting rear-ended, or yielding to a pickup truck that then shifted into reverse and backed into the Waymo. In other words: the majority of Waymo collisions are due to human drivers.
So either Waymos are ridiculously unlucky, or when these sorts of things happen between two human driven cars, it's rarely reported to insurance. In my experience, if there's only minor damage, both parties exchange contact info and don't involve the authorities. Maybe one compensates the other for damage, or maybe neither party cares enough about a minor dent or scrape to deal with it. I've done this when someone rear-ended me, and I know my parents have done it when they've had collisions.
If human driven vehicles really did average 229k miles between any collision of any kind, we'd see many more pristine older vehicles. But if you pay attention to other cars on the road or in parking lots, you'll see far more dents and scratches than would be expected from that statistic.
1. See page 13 of https://www.nhtsa.gov/sites/nhtsa.gov/files/2025-04/third-am...
2. https://www.nhtsa.gov/laws-regulations/standing-general-orde...
3. https://static.nhtsa.gov/odi/ffdd/sgo-2021-01/SGO-2021-01_In...
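The filtering described above can be sketched with pandas. The column names here ("Reporting Entity", "SV Pre-Crash Movement") are assumptions for illustration; check the actual SGO CSV header, and note the synthetic rows below merely stand in for the real download:

```python
# Sketch of filtering NHTSA SGO ADS incident data. Column names and
# rows are assumed/synthetic; verify against the real CSV before use.
import io
import pandas as pd

csv = io.StringIO("""Reporting Entity,SV Pre-Crash Movement,Narrative
Waymo LLC,Stopped,Rear-ended while yielding at a crosswalk
Waymo LLC,Proceeding Straight,Sideswiped by a merging vehicle
Tesla,Stopped,Struck while parked
""")
df = pd.read_csv(csv)

# Filter to Waymo reports, then to crashes where the vehicle was stopped.
waymo = df[df["Reporting Entity"].str.contains("Waymo")]
stopped = waymo[waymo["SV Pre-Crash Movement"] == "Stopped"]
print(len(waymo), len(stopped))  # 2 1
```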
Tesla notes:
> These assumptions may contain limitations with respect to reporting criteria, unreported incident estimations (e.g., NHTSA estimates that 60% of property damage-only crashes and 32% of injury crashes are not reported to police)
Given that Musk has a history of driving costs down, it's unlikely he overestimated the long-term cost floor. He just thought we were close to self-driving in 2014.
Another factor is Andrej Karpathy, who was the primary architect for the vision-only approach. Musk wanted fewer parts, and Karpathy believed he could deliver that. Karpathy is still an advocate of vision-only.
And, less excusably, ignorant of how incredible human eyes are compared to small-sensor cameras. In particular, their high dynamic range in low light with fast motion. Every photographer knows this.
https://www.researchgate.net/publication/378671275/figure/fi...
He wanted (needed?) to get on the self-driving hype train to pump up the stock price. He knew that at the time there was zero chance they could sell it at the price point lidar required, or even other effective sensors (like radar), so he sold it anyway at a price point people would buy at, even though it was never plausibly going to work at the level that was being promised.
There is a word for that. But I’m sure there are many lawyers that will say it was ‘mere fluffery’ or the like. And I’m sure he’ll get away with it, because more than enough people are complicit in the mess.
Miscalculation assumes there was a mistake somewhere, but as near as I can tell, it is playing out as any reasonable person expected it to, given what was known at the time.
This is a difficult problem to solve and perhaps a pragmatic approach was/is to make your life as simple as possible to help get to a fully working solution, even if more expensive, then you can improve cost and optimise.
If the data were positive for Tesla, Tesla would publish it
They do not, so one can infer it is not flattering
(Before you post the "Miles driven with FSD" chart, you should know upfront (as Tesla must) that chart doesn't normalize by age of vehicle or driving conditions and is therefore meaningless/presumably designed to deceive)
Also, regulators gather statistics, and if cars with a given feature do better, they will mandate it.
Tesla ""autopilot"" fatalities: 65
Waymo fatalities: 0
If we are just talking about smart cruise control, most cars are using cameras and radar, not lidar yet. But Tesla is special since it doesn’t even use radar for its smart cruise control implementation, so that could make it less safe than other new cars with smart cruise control, but Autopilot was never competing with Waymo.
By some measures Waymo is actually at -1 fatalities. There has been one confirmed birth of a child in a Waymo. https://apnews.com/article/baby-born-waymo-san-francisco-6bd...
They might have flipped a switch after that, causing this.
It may just be faster to make lidar cheap. And lidar can do things humans can't.
It's not fair to say that vision-based models will "make the same mistakes people do", as >99% of the mistakes people make would be avoidable if these issues were addressed. And a computer can easily address all of them.
Neither do cameras, or eyeballs.
I've been in zero-road-speed whiteout conditions several times. The only move to make is to the side of the road without getting stuck, and turning on your flashers.
Low-light cameras would not have worked. Sonar would not have worked. Infrared would not have worked.
If we could make sensors that lets an autonomous vehicle drive reliably in any snow/rain where a human could drive (although carefully) then we're good. But we are a long way from that. Especially since a lot of sensor tech like cameras tend to fail in 2 ways, both through their performance being worse in adverse condition but also simply failing to function at all if they are covered in ice/snow/water.
It's significant that a truly hard problem like autonomous driving doesn't respond to a "brute force" management style. Rockets aren't in this category because the required knowledge and theory is fairly complete, whereas real autonomous driving is completely novel.
Hmm. Is it ragebaiting to respond to a tired and wrong statement by saying that it's tired and wrong and that the situation is merely the product of piss poor management decisions? People get understandably frustrated seeing the same wrong talking point that people with domain knowledge in computer vision and robotics have repeatedly explained is wrong in extremely fundamental ways.
> I don't own a Tesla.
n.b. The shoe/foot comment was not about you. It was about Musk. It wouldn't make any idiomatic sense for the expression to be about you given what you said and what you were responding to. If they'd said "pot, meet kettle", then it would have been about you. In that context, saying that you don't own a Tesla feels like a weird thing for you to insert in your comment. It potentially comes across as suspiciously defensive.
Tesla is spending upwards of $6B/year to Waymo’s $1.5B. Only one of these companies makes an autonomous robotaxi that’s actually autonomous.
Of course you do, you're driving at much higher speeds and so is the surrounding traffic. You can't just guess what you might be looking at, you have to make clear decisions promptly. Lidar is excellent in that case.
Computer vision does not work exactly like human vision, closely equating the two has tended to work out poorly in extreme circumstances.
High performance fully automated driving that relies solely on vision is a losing bet.
It's frustrating to still see it repeated over a decade later. It was always bullshit. It was always a lie.
Then again, it's good that we have self-driving companies with lidar and without — we will find out which approach wins.
Also, military sensor use shows the best answer is to have as many different types of sensors as possible and then do sensor fusion. So machine vision, lidar, radar, etc.
That way you pick up things that are missed by one or more sensor types, catches problems and errors from any of them, and end up with the most accurate ‘view’ of the world - even better than a normal human would.
It’s what Waymo is doing, and they also unsurprisingly, have the best self driving right now.
1) it's not cheap to produce lidars at a stable, predictable quality by the millions;
2) car driving training data sets for lidars are much scarcer (and will always be much scarcer due to cameras' higher prevalence) and at a much lower quality;
3) combined camera+lidar data sets are even scarcer.
2+3. BYD collects extensive training data from customers, much like Tesla does. They will have no trouble with training.
Human eyes do not have distance information, either, but derive it well enough from spatial parallax (by ‘comparing’ inputs from two eyes) or temporal parallax (by ‘comparing’ inputs from one eye at different points in time) to drive cars.
One can also argue that detecting absolute distance isn’t necessary to drive a car. Time-to-contact may be more useful. Even only detecting “change in bearing” can be sufficient to avoid collision (https://eoceanic.com/sailing/tips/27/179/how_to_tell_if_you_...)
Having said that, LiDAR works better than vision in mild fog, and if it’s possible to add a decent absolute distance sensor for little extra cost, why wouldn’t you?
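The time-to-contact idea can be sketched numerically. For an object on a collision course, tau ≈ theta / (d theta / dt), where theta is the object's angular size in the image; no absolute distance is required. All numbers below are made up for illustration:

```python
# Time-to-contact from the expansion rate of an object's angular size.
# Geometry and numbers are illustrative assumptions.

def time_to_contact(theta_prev: float, theta_now: float, dt: float) -> float:
    """Estimate seconds to contact from two angular-size samples dt apart."""
    rate = (theta_now - theta_prev) / dt  # finite-difference expansion rate
    return theta_now / rate

# A 1.8 m wide car subtends ~width/distance radians (small-angle approx).
# It is now 30 m away, closing at 10 m/s; 0.1 s ago it was 31 m away.
t = time_to_contact(1.8 / 31, 1.8 / 30, 0.1)
print(round(t, 2))  # 3.1 (true value 3.0; finite differencing overshoots slightly)
```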
A single human eye does resolve depth. Not as well as binocular vision, but you don't lose all depth perception if you lose an eye.
“Just buy FSD” isn’t a reasonable answer to a problem literally no other automaker suffers from.
It's also recently gotten much worse at lane departure sensing, often confused by snow or slightly faded road markers. Not pleasant to have the alarms go off while calmly and safely driving.
https://electrek.co/2026/02/17/tesla-robotaxi-adds-5-more-cr...
This conversational disconnect is as old as the hills:
1. Person 1 asks "what's wrong" (if it ain't broke don't fix it)
2. Person 2 wants to make something better
My meta-goal here on HN (and many places where people converse) is for people to step back and recognize the conversational context and not fall into the predictable patterns that prevent us from making sense of the world as best as we can.
I have no proof, of course, and it might be coincidence, or just a difference of mindset between US and European citizens. It has happened a few times already and to me it looks sus.
But if they actually read and don't just Ctrl+F <company name>, then of course not writing the company name but hinting at it in an obvious way isn't any more helpful either.
There is also flagging abuse, which effectively kills the comment/post.
Note that humans do not rely strictly on our eyes as cameras to measure distances. There is a huge amount of inference about the world based on our internal world models that goes into vision. For example, if you put us in a false-perspective or otherwise highly artificial environment, our visual acuity goes down significantly; conversely, people with a single eye (so no parallax-based measurement ability) still have quite decent depth perception compared to what you'd naively expect. Not to mention, our eyes are kept very clean, and maintain their alignment to a very high degree of precision.
Several companies, most notably Tesla, have done this well enough to drive in all manner of traffic. I'm not going to comment about if lidar is strictly needed or not to achieve better-than-human safety, that's yet to be proven one way or another by anyone. The point is that cameras + local inference can do a pretty good job at distance estimation
You can solve this by adding an emitter next to the camera that does something useful, be it just beaconing lights or noise patterns or phase-synced laser pulses. And those "active cameras" are what everyone calls LIDARs.
"Necessary"? Seems like a straw man, don't you think? I strive to argue against the strongest reasonable claim someone is making.
Lots of reasonable people suggest LIDAR is helpful to fill in gaps when vision is compromised, degraded, or less capable.
People running businesses, of course, will make economic trade-offs. That's fine. But don't confuse, say, Elon's economic tradeoff with the full explanation of reality which must include an awareness that different sensors have different strengths in different contexts.
So, when one thinks about what sensor mix is best for a given application, one would be wise to ask (and answer) such questions as:
- What is the quality bar?
- What sensors are available?
- How well do various combinations of sensors work across the range of conditions that matter for the quality bar?
- WRT "quality bar": who gets to decide what matters? The company making the cars? The people who drive them? Regulators that care about public safety? The answer: it is a complex combination.
It is time to dismiss any claim (or implication) that "technology good, regulation bad". That might be the dumbest excuse for a philosophy I've ever heard. It is the modern-day analogue of "Brawndo's got what plants crave." Smart people won't make this argument outright, but unfortunately, their claims sometimes reduce to this level of absurdity. Neither innovation nor regulation is inherently good or bad. There are deeper principles in play.
Yes, some individuals would use their self-proclaimed freedom to e.g. drive without seatbelts at 100 mph at night with headlights off. An extreme example, but it is the logical extension of pure individualism run amok. Regulators and anyone who cares about public safety will draw a line somewhere and say "No. Individual stupidity has a limit." Even those same people would eventually come to their senses after they kill someone, but by then it is too late.
There are probably even earlier statements from him against lidar...
Why are the commenters not pissed at the dozens of other car companies who have done absolutely nothing in this space? Answer: because it's not nearly as fun to be pissed at Kia or Mercedes or whoever. Clearly they are just enjoying the shared anger, regardless of whether it is justified.
Surely you already know this, so why pretend otherwise?
2. Other car companies are properly valued, Tesla is overinflated.
3. Other cars, even basic Hondas, have the same level of self driving as Teslas.
4. Other car companies don't lie to their customers about their capabilities or what they're buying.
This is not true at all. Don't confuse lane assist with self driving. And yes I'm aware people are upset by the "Autopilot" product name they chose for lane assist.
I think the frustration stems from the obvious falsehoods in the advertising, and the doubling-down on the tech, despite the well-documented weaknesses of the implementation.
My father lost vision in one eye and 50% in the other something like 20 years ago. He struggles with parking but is otherwise doing OK without lidar. It turns out motion-based vision is more accurate beyond 10-20 meters than stereoscopic vision.
The appeal to human biology and argument against fusion between disparate sensors kinda falls flat when you’re building a world model by fusing feeds from cameras all around the car. Humans don’t have 8 eyes in a 360 array around their head. What they do have is two eyes (super cameras) on ~180 degree swiveling and ~180 degree tilting gimbal. With mics attached that help sense other vehicles via road noise. And equilibrioception, vibration detection, and more all in the same system, all fused. If someone were actually building this system to drive the car, the argument based on “how did you drive here today?” gets a lot stronger. One time I had some water blocking my ear and I drove myself to the hospital to get it fixed. That was a shockingly scary drive — your hearing is doing a lot of sensing while driving that you don’t value until it’s gone.
Individual cameras don't have distance information, but you can easily calibrate a system of cameras to give you distance information. Your eyes do this already, albeit not quantitatively. The quantitative part comes from math our brains aren't setup to do in real time.
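A minimal sketch of that calibrated-stereo idea, under assumed camera parameters: once the focal length (in pixels) and the baseline between the two cameras are known, depth follows directly from pixel disparity via Z = f * B / d.

```python
# Depth from stereo disparity with a calibrated rig. The focal length,
# baseline, and disparity values below are made-up illustrative numbers.

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters for a feature matched in both camera images."""
    return f_px * baseline_m / disparity_px

# f = 1000 px, 30 cm baseline: a 15 px disparity puts the object at 20 m.
print(round(depth_from_disparity(1000, 0.3, 15), 1))  # 20.0
```

Note the practical catch the comment hints at: accuracy degrades quadratically with distance (a fixed +-0.5 px matching error matters far more at small disparities), which is one reason real systems also lean on temporal cues.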
If this lowers LiDAR costs, and Tesla has spent all this time refining the camera technology, they can now have both.
Use both.
https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson...
Because we want self driving cars to be safer than human driven cars.
If humans had built in lidar we would use it when driving.
“We should achieve self driving cars via replicating the human brain” strikes me as an incredibly inefficient and difficult way to solve the problem.
We have a tool that can tell with great accuracy how far away an object is. The suggestion that we should ignore it and rely on cameras that have to guess it because “that’s how humans work” is absurd, frankly.
Science would like to point out that rats also can learn to drive
https://theconversation.com/im-a-neuroscientist-who-taught-r...
Whether or not it'll actually work remains to be seen, but it's a perfectly reasonable strategy. One counterargument would be that the bitter lesson can be applied to LIDAR too; you don't have to use that data for feature engineering just because it seems well suited for it.
Waymo benefits from Google's unparalleled geospatial data. Waymo also has a support architecture that doesn't depend on real time remote operation, which can't be implemented reliably in almost all cases. You can't be following your supposedly unsupervised cars with a supervisor in a chase car. You can't even be driving remotely. Your driver software has to be able to drive independently in all cases, even those where it needs to ask a human how to proceed.
The difference between level two and level three driver assist and level four autonomy is like the difference between suborbital flight and putting a payload in orbit. What looks like a next logical step actually takes 10X or more effort, scale, and testing.
People on here used to buy servers themselves (very few of us still do), most now rent via cloud.
Why should transportation be different?
Good question, and for many it will not be, and rentals are acceptable.
But also for many, renting a car has a huge ICK factor. It is one thing while traveling to rent from an agency who has (purportedly) thoroughly cleaned and inspected the car before you get it. It would be quite another to rent cars like scooters, where the previous user likely smoked, left wrappers and food debris, and who knows what else, even damage. Plus, most people who own cars keep a fair amount of stuff in the car for their specific convenience, and have their own settings, etc.
The fact that the likes of Zipcar, Turo, and the lot have not entirely taken over urban transport but instead remain niche players shows the extent of this preference.
For suburban and rural markets, it just gets more extreme. How quickly could a rental service deliver a car; could it reliably do it in less than 5-10 minutes for people to run an errand? If not, unless they are insanely cheap, people will likely want to own their own. Perhaps it'll be more of a hybrid, households owning one car and renting the spare for specific trips?
A lot of folks are relearning lessons on this front in Cloud right now.
Google doesn't do retail other than Chromecast and Pixel phones, and that is already annoying to them as it is because it involves something Google is notoriously bad at - actual customer support.
Starting up a car brand is orders of magnitude worse.
For one, people actually need to trust your brand to survive for at least five to ten years - cars are an investment, and a car that I can't trust to get safety-relevant spare parts (brake rotors, brake pads, axle bearings) all of a sudden is essentially an oversized paperweight. For a company such as Google, this alone (remember Killed By Google) is a huge obstacle to overcome.
Then, you need production. Sure, you can go to Magna or other contract manufacturers, or have an established large brand build vehicles for you, or you say you have to go the Tesla route and build everything from scratch. Either way has associated pros and cons.
And then, you need a nationwide network of spare parts, dealerships, repair shops and technicians that can fix the issues that people will get alone because the wide masses abuse cars in ways you might not even dare think about while testing, or because other people run into your cars and so your cars need repairs.
Even being a derivative of an established car brand can be a royal PITA. Let's take Mercedes Benz as an example with the 2003-2009 Mercedes-Benz SLR McLaren. On paper, it's a Mercedes vehicle, with a lot of the parts actually originating from stock Mercedes cars - but most dealerships will refuse to work on it. Either because they lack the support to even properly jack the car up, or because they lack the specialized tools for the AMG engine, or because they cannot even order the parts as Mercedes gates repairs for that thing to special shops. Or, again Mercedes, with Maybach luxury cars. The situation isn't as bad as with the McLaren, but their cars are challenging in another way - the S 650 Pullman weighs around 3 metric tons empty and is 6.50 meters long. Good luck finding a jack even capable of lifting that beast, most Mercedes sports-car shops don't carry jacks that are normally used to lift Mercedes Vito transporters!
Even Tesla, and they've been at it for the better part of two decades, still struggles with that. Their shitty spare parts logistics actually drive up not just insurance prices for their own customers, but for everyone - hit a Tesla with your Dodge and be at fault, and now your insurance has to pay out for months of a rental car because Tesla can't be arsed to provide the body shop the Tesla ends up at with spare parts in any reasonable time.
Established car brands however have all of that ironed out for many, many decades now. American, Asian, European, doesn't matter. And the spare parts don't even have to be made for cars: ask your local Volkswagen dealer to order a few pieces of "199 398 500 A" and one piece of "199 398 500 B" and you'll probably have a lead time of less than a day, at least in Germany - for the uninitiated: that part number belongs to the famous sausage, the second one to the accompanying curry ketchup, with more sausages being sold each year than actual cars.
And established car brands also bring something to the table: their own experiences with integrating smart technology. Yes, particularly German carmakers are notoriously bad in that regard, but for example Mercedes Benz was the first car brand in the world to get a certified Level 3 system on the road [1] and are now working on a Level 4 certification [2]. That kind of experience in navigating bureaucracy, integration and testing cannot be paid for in money.
tl;dr: I see no way in which Waymo goes to general availability regarding selling cars. They will run their own autonomous car fleets in select markets where they can fully control everything, but seeing Waymo tech generally available will be as part of established car brands.
[1] https://group.mercedes-benz.com/technologie/autonomes-fahren...
[2] https://group.mercedes-benz.com/technologie/autonomes-fahren...
Those bits should be easy, unless the OEM was tragically stupid. Where you'll get into trouble is when you need replacement computer bits; those are often tricky for mainstream brands, but if your niche brand ECUs all fail around the same time (wouldn't be the first time for a Google product), and the OEM isn't around to make new ones or make it right, off to the junkyard with all of them. If it's just normal failure rates, you can probably scavenge from totaled vehicles at junkyards even after new parts become unobtainium.
OEM style lighting will also probably get hard to find. Ideally a niche maker would lean towards standard parts there, but that's not the fashion of the times.
Well... just look at Tesla. A lot of their parts don't come from the classic supplier-OEM delivery chain model, but Tesla makes as much as they can on their own. It saves them a bunch of money, both when it comes to the profit margin of the supplier, and being at the whims of their supplier, but it is nasty for the customers when there simply is no parts OEM that one could go to when the vehicle manufacturer goes out of business or refuses to support the car any further.
> Where you'll get into trouble is when you need replacement computer bits
Oh hell yes. New EU law is particularly to blame here. OBD diagnosis was always nasty enough, since you virtually always need to buy expensive diagnosis software and hardware (e.g. Mercedes XENTRY, VW ODIS, BMW ICOM)... but the newest requirements enforce live digital signatures and anti-tamper checks. Nasty as hell. And the buses themselves... it's no longer just one CAN bus doing everything, not since the Kia Boys; it's multiple buses of different speeds, some using encryption on the wire, all making diagnosis, troubleshooting and repairs much more difficult than they used to be.
And that is before getting into the replacement parts issue itself that you wrote up.
Tesla did it, and is more valuable than most other car brands added together. They had a novel product: a good EV that was fun to drive. Is that a unique situation? Could a truly autonomous car launch do it?
Your arguments make sense in themselves, but maybe underestimate the revolutionary value that a level 4 car would provide.
Half of Tesla's value is hopium; the rest is pure trust that the current government will continue propping Elon up (even if he personally ran afoul of the Dear Leader). A lot of the promises Elon made, particularly around FSD, have had to be walked back, and I don't see them ever coming to fruition, at least not for the cars that don't have LIDAR hardware.
How much of Waymo's training data is based on LIDAR mapping versus satellite/aerial/street view imagery? Before Waymo deploys in a new city, it deploys a huge fleet of cars that spend months of driving completely supervised, presumably to construct a detailed LIDAR map of the city. The fact that this needs to happen suggests Google's geospatial data moat is not as wide as it seems.
If LIDAR becomes cheap, you could imagine other car manufacturers adding it to cars, initially and ostensibly to help with L2 driver aids, but with the ulterior motive of building a continuously updated map of the roads. If LIDAR were cheap enough to be added to every new Toyota or Ford as an afterthought, it would generate a hell of a lot more continuous mapping data than Waymo will ever have.
Not entirely true. Going by their recent "road trips" last year, the trend is that they deploy fewer than 10 cars in a city for a few weeks (3-4 weeks, from what I recall) for mapping and validation. Then they come back after a few months to set up infrastructure for ride hailing (depot, charging, maintenance, etc.) and start service.
But suborbital flight and payload in orbit is much less of a difference than you might think.
The delta-V is not that significantly different. The scale is almost the same: add a little more power and a second stage, and your payload is hurtling around the Earth instead of falling back down like a ballistic missile, which is what their suborbital predecessors were.
Suborbital "trips" straight up, beyond the atmosphere, are very cheap.
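A rough back-of-the-envelope calculation (using standard Earth constants, which are assumptions of mine and not from the thread) shows where the two claims above diverge: a straight-up hop to the Kármán line really is cheap in velocity terms, while a circular orbit demands several times the speed and far more energy. A long-range ballistic missile trajectory sits between the two, closer to the orbital end.

```python
import math

# Back-of-envelope: suborbital vertical hop vs. circular low Earth orbit.
# Standard values for Earth; drag and gravity losses ignored.
MU = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # mean Earth radius, m

# Circular orbital velocity at 200 km altitude: v = sqrt(mu / r)
r = R_EARTH + 200e3
v_orbital = math.sqrt(MU / r)

# Launch velocity to just reach 100 km (Karman line) straight up,
# from conservation of energy
h = 100e3
v_suborbital = math.sqrt(2 * MU * (1 / R_EARTH - 1 / (R_EARTH + h)))

print(f"orbital:    {v_orbital / 1000:.1f} km/s")
print(f"suborbital: {v_suborbital / 1000:.1f} km/s")
# Kinetic energy scales with v^2, so orbit needs far more energy
print(f"energy ratio: {(v_orbital / v_suborbital) ** 2:.0f}x")
```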
That's true, and they have a huge headstart, but I wonder if all these cubesat companies can bring the price down on data enough that others will be able to compete.
Maybe their navigation system will be better than the competition due to real-time traffic data from Google Maps users, but I don't think it'll be so much better as to be an unbeatable advantage.
And from there it's easy to think: couldn't the car also detect white lines and stay within them? It doesn't have to be perfect; it can be cruise control++. If it errs a little, I can save it. But otherwise, this is a function I'd love to use if it was available, for a sub $1000 price point.
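The core of that "detect white lines" idea is genuinely simple. Here's a toy sketch (entirely my own construction, with a synthetic image row, not any real ADAS code): threshold one grayscale scanline from a forward camera, find the bright lane markings, and compute a steering error toward the lane centre. Real lane keeping uses Hough transforms, temporal filtering, or learned models, but the intuition is this:

```python
import numpy as np

# One synthetic grayscale scanline: dark asphalt with two bright lane markings
scanline = np.full(100, 40, dtype=np.uint8)  # asphalt brightness
scanline[20:23] = 230                        # left lane marking
scanline[80:83] = 230                        # right lane marking

bright = np.where(scanline > 180)[0]         # candidate marking pixels
left, right = bright.min(), bright.max()
lane_centre = (left + right) / 2
camera_centre = len(scanline) / 2

# Positive error means the lane centre is to our right: steer right
steering_error = lane_centre - camera_centre
print(f"lane centre at px {lane_centre}, error {steering_error:+.1f} px")
```

Feed that error into a simple proportional controller and you have the "cruise control++" behaviour: it nudges the wheel, and if it errs, the driver saves it.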
Like the difference between "what can we do with an LLM on my maxxed-out laptop with an RTX 5090" vs. "what can we do with a Mac mini." The self-driving car version of that.
https://www.fccidlookup.com/report/tesla-new-millimeter-wave...
Strategy: The move brings Tesla's sensor approach closer to competitors like Ford, GM, and Rivian, who utilize multi-modal systems (cameras plus radar) for their driver-assistance features.
Potential: This 'HD radar' could provide critical redundancy and data needed for achieving higher levels of driving automation and improving system performance in all conditions.
I personally find it convincing that the problem with self-driving is mostly that the models aren't intelligent enough, and that adding LiDAR wouldn't be enough to achieve the reliability required. But I don't know, I don't really work in that field so maybe engineers who have more experience with self driving might say otherwise.
Consider an exhaust condensation cloud coming from a vehicle's tail pipe -- it could be opaque to a camera/computer-vision system. Can you model your way out of that? Or is it also useful to do sensor fusion of vision data with radar data (cloud is transparent) and others like lidar, etc. A multi-modal sensor feed is going to simplify the model, which in the end translates into compute load.
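One standard textbook way to do that fusion is inverse-variance weighting (this sketch is my own illustration, not any vendor's actual stack): each sensor reports a range estimate plus a confidence, and when the camera's estimate degrades behind an opaque exhaust cloud, its variance balloons and the radar automatically dominates.

```python
# Confidence-weighted fusion of range estimates (inverse-variance weighting)
def fuse(measurements):
    """Fuse (value, variance) pairs; returns (fused value, fused variance)."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return value, 1.0 / total

# Clear conditions: camera and radar agree, camera is quite confident
clear = fuse([(30.0, 0.5), (30.4, 1.0)])    # (range m, variance) per sensor

# Exhaust cloud: camera returns a wild guess with huge variance
cloudy = fuse([(12.0, 400.0), (30.4, 1.0)])

print(f"clear:  {clear[0]:.1f} m")   # near both sensors
print(f"cloudy: {cloudy[0]:.1f} m")  # radar dominates
```

The model upstream never has to learn "ignore the camera when there's a cloud"; the uncertainty estimate carries that information, which is the sense in which multi-modal sensing simplifies the model.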
Even if it’s an intelligence problem, it’s possible that machine intelligence will not get to the point where it can resolve it anytime soon, whereas more sensors might circumvent the issue completely. It’s like with Musk’s big claim (that humans use only cameras to drive); the question is not whether a good enough brain can drive vision-only, but whether Tesla can build that brain.
I am skeptical that Tesla has this solved, but I'm interested in seeing how it goes as they move to expand their robotaxi service.
Sensors or intelligence, at the end of the day it’s an engineering problem which doesn’t require pure solutions. Sometimes sensors break and cameras get covered in mud.
The problem is maintaining an acceptable level of quality at the lowest possible price, and at some point you spend more money on clever algorithms and researchers than on a lidar.
It turns out it’s the sensors that are easily damaged by high powered lidar lasers.
https://spectrum.ieee.org/amp/keeping-lidars-from-zapping-ca...
It's not safe just because it's infrared. And the claim that it's safe because of the exposure time is highly questionable; would you accept that argument for any other laser?
I also wonder if the smaller sensor size on phones contributes, since the energy is being focused onto a smaller spot.
Either way, for that to happen he was filming the LIDAR while active, for a decent amount of time, from right next to the car. I assume under normal conditions it wouldn't be running constantly while the vehicle is stationary?
Laptops aren't generally being used in the same areas as cars though, so you wouldn't expect to see as many cases involving Windows Hello compatible laptops/cameras.
There was someone who had his eyes damaged by sitting next to a heater.
> Moving to a longer wavelength that does not penetrate the human eye allows new lidars to fire more powerful pulses and stretch their range beyond 200 meters, far enough for stopping faster cars. Now a claim of lidar damage to the charge-coupled-device (CCD) sensor on a photographer's electronic camera has raised concern that new eye-safe long-wavelength lidars might endanger electronic eyes.
> Producers of laser light shows are well aware that laser beams can damage electronic eyes. “Camera sensors are, in general, more susceptible to damage than the human eye,” warns the International Laser Display Association
"Doesn't penetrate the human eye" seems a bit hand-wavy, but I take it to mean "pulses of this length at this wavelength are tuned so the power isn't enough to damage the eye". Camera lenses may not have the same level of IR filtering or light-gathering area, and even if they do, nothing implies the image sensor has the exact same tolerances as the inside of the eye. From the same article:
> Sensor vulnerability to infrared damage would depend on the design of the infrared filters
A heater usually damages the eyes through drying out/heating up the outside layer with constant high intensity, not by causing damage to the retina (post filtering). https://hps.org/publicinformation/ate/q12691/
> Furthermore, since the eye blocks the IRR, the eye begins to overheat leading to eye damage and possible blindness. Because of this, you should not look at the heater for an extended period of time.
Enough intensity at any wavelength will damage any camera or eye, of course, but the scenario here seems to be built around that question for the eye. Similarly, I've heard of Waymos causing 6 mph accidents, but no reports of eye damage from any car LiDAR. Despite that, in the above YouTube clip Marques Brownlee clearly shows his camera being damaged as it's moved around.
So they don't care if that breaks my phone camera? Wtf?
I would imagine, even with safe dosages, there would be some form of cumulative effect in terms of retinal phototoxicity.
More so if we consider the scenario that this becomes a standard COTS feature in cars and we are walking around a city centre with a fleet of hundreds of thousands of these laser sources.
The grandparent comment is about camera lenses with little to no near infrared cutoff filter. Some older iPhones were like that and that was the original breaking story.
Absorbing the laser isn't necessarily any good. Very hypothetically it could lead to cataracts.
Shame that perverts had to ruin that for us; it was kinda neat to point a TV remote at the camera and see the bulb light up.
Thanks! What a headache
Even mid-range sensors used in ADAS systems only cost $600–750. The long-range stuff needed for trucking or robotaxis is $1,500–6,000.
* I have no way to estimate installation costs, but smartphones show that manufacturing at this scale doesn't need to increase total cost 10x more than the B.o.M.
There are SLAM cameras that only select "interesting" points, which are privacy preserving. They are also very low power.
They're just fancy cameras with synced flashes, not Star Trek-style matter-to-information converting transporters. Sometimes they rotate, sometimes not. Often monochrome, but that's where Bayer color filters come in. There's nothing fundamentally privacy preserving about LIDARs.
Right, but how likely is it that there will be LIDAR and no cameras (especially given the low cost of the latter)?
Pros and cons. :/
It'll never happen, but we need a bill of rights for privacy. The laypeople aren't well-versed or pained enough to ask for this, and big interest donors oppose it.
Maybe the EU and states like California will pioneer something here, though?
Edit: in general, I'm far more excited by cheap lidar tech than I am afraid of the downsides. We just need to be vigilant.
But also kinda weird. There seems to be a lot of fines for hospitals for example.
Some Portuguese hospital was fined €400,000 for ‘Insufficient technical and organisational measures to ensure information security’
Top 5 fines:
1 - Meta - Ireland - €1.2 billion
2 - Amazon Europe - Luxembourg - €746 million
3 - WhatsApp - Ireland - €225 million
4 - British Airways - UK - £183 million
5 - Google - France - €60 million
I wish every law barely got enforced this way.
I don't know about you, but on that income I would certainly not brush off such a fine as a "cost of doing business". Would it cause me financial trouble, or would it force me to sacrifice other expenses? Absolutely not. But would I feel frustrated at having to pay it, feel stupid for my mistake, and do my best to avoid it in the future? Absolutely yes.
There isn't a trend of increasing fines, nor has any fine even reached the cap, let alone been applied multiple times for recurring violations. Even less likely now, given the current US administration's foreign policy towards the EU.
While GDPR as a law is fine, with the exception of enforcement limitations, enforcement so far has been a complete joke.
And note that there is evidence for cities of tens of thousands of inhabitants from 3000 BCE, while Rome reached 1,000,000 residents by 1 CE. Again, without becoming some Hobbesian nightmare.
Humans have always done mass surveillance on each other. You don't need technology for that.
Scale matters.
While the lack of anonymity in small towns certainly puts a damper on one's ability to deviate too far from social norms, the list of things that could get you subjected to government violence without creating a victimized party was infinitely shorter. Things that get state or state-deputized enforcers on your case today were, 150+ years ago, matters of "yeah, that's distasteful, he'll have to settle that with god", or they simply came back to bite you when something happened, because society did not have the surplus to justify paying nearly as many people to go around looking for deviance that could be leveraged to extract money. Those people had far more practical day-to-day freedom to run and better their lives than we do now, even if they had substantially less wealth to leverage to that effect.
> Modern Homeowners Associations prove that localized oversight is often the most intrusive form of management
And they almost exclusively deal in things that historical societies didn't even bother to regulate.
You're beyond delusional if you think running afoul of an HOA is worse than running afoul of the local, state or federal government. Yeah, they can screech and send you scary letters with scary numbers, but they don't get the buddy treatment from courts that "real" governments do (to the great injustice of their victims), and their procedural avenues for screwing their victims on multiple axes are way more limited.
Seriously, go get in a pissing match with a municipality over just where the line for "requires permit" is and get back to me. Unless you want to do something that is more than petty cosmetic stuff and unambiguously in violation of the rules, an HOA is a paper tiger for the most part (not to say they don't suck).
People don't hesitate to be aggressive even when they're not anonymous and there's a threat of accountability - see, all crime, or people just acting shitty toward others.
Mass surveillance does not cause everyone to magically get along.
Anyway I'm curious why - despite having less anonymity than at any point in history, at least from the perspective of law enforcement - we still see high crime rates, from fraud to murders?
Cepton Technologies offers Nova [0], Nova-Ultra [1] sensors both at a sub-$100 price point [2]. These feature a 120°(H) x 90°(V) FOV at 50m, with 2.7M points per second sampling.
Velodyne introduced the Velabit in 2021 for $100, boasting a 100 m range and a 60-degree horizontal by 10-degree vertical FoV [3].
The article claims that:
> What distinguishes current claims is the explicit focus on sub-$200 pricing tied to production volume rather than future prototypes or limited pilot runs.
which is simply not true. Cepton (currently offering) and Velodyne (acquired by Ouster in 2023) have done this for years.
[0]: https://www.cepton.com/products/nova
[1]: https://www.cepton.com/products/nova-ultra
[2]: https://www.cepton.com/announcements/ceptons-nova-lidar-named-as-ces-2022-innovation-awards-honoree
[3]: https://lidarmag.com/2020/01/07/velodyne-lidar-introduces-velabit/

Basically they're saying "we can catch up to China by 2028/2029" ||so please subsidize us||
Where? How? I'm only seeing the Nova on ebay for between $4000 and $5000.
Of course, ambitious pricing like this is all about economies of scale - sensors that are used in production vehicles are ordered by the million, and that lowers the costs massively. When the huge orders didn't materialise, the economies of scale and low prices didn't materialise either.
[1] https://web.archive.org/web/20161013165833/http://content.us...
https://www.forbes.com/sites/bradtempleton/2025/03/17/youtub...
The EU requires every new car to have Autonomous Emergency Braking. If LiDAR becomes cheaper than radar, this is a potential market of millions.
Radar is just cheaper than the number of cameras and the compute you'd otherwise need; it's also not really a strict requirement.
Look at how the current cars fuck up, it's mostly navigation, context understanding, and tight manoeuvres. Lidar gives you very little in these areas
We find that the cases where lidar really helps are in gathering training data, parking, and if focused enough some long distance precision.
None of these have been instrumental in a final product; personally I suspect that many of the cars including lidar use it for data collection and edge cases more than as part of the driving perception model.
By edge cases I mean scenarios like the lights going out in an underground garage; low vision due to colourful smoke or dust, or things like optical illusions or occlusion that a human would just need to remember.
Lidar can help, but not really enough to be worth it.
Waymo uses LiDAR in the real-time control loop. It combines LiDAR, camera, and radar data in real time to build a constantly updated 3D representation of the environment.
I fundamentally don't trust any level 4 system that doesn't use LIDAR
You don't need the mm precision of lidar very often; we find that it offers nothing at speed over radar; and in tight manoeuvres the cameras we need for human park assist and ultrasonics do well enough.
It is not more accurate, just more precise, and that doesn't really matter. (Radar gives you relative speed directly, which is more important than a very precise point at highway speeds.)
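The "radar gives you relative speed directly" point is worth spelling out: the Doppler shift of the radar return is proportional to closing speed, so no differencing of successive position fixes is needed. A one-line sketch, assuming a typical 77 GHz automotive radar band (my assumption, not from the thread):

```python
# Relative radial speed from the Doppler shift: v = f_d * c / (2 * f0)
C = 3.0e8   # speed of light, m/s
F0 = 77e9   # radar carrier frequency, Hz (typical automotive band)

def closing_speed(doppler_shift_hz):
    """Closing speed implied by a measured Doppler shift, in m/s."""
    return doppler_shift_hz * C / (2 * F0)

# A 10 kHz shift corresponds to roughly 70 km/h closing speed
v = closing_speed(10e3)
print(f"{v:.1f} m/s = {v * 3.6:.0f} km/h")
```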
Let me guess, you heard this from Elon?
Waymo has driven tens of millions of autonomous miles with a serious injury/fatality rate dramatically lower than human drivers. The actual data shows the technology works. Tesla FSD still requires active driver supervision and is not legally or technically a robotaxi system. Comparing them as if they're at parity is wrong.
LIDAR gives direct metric depth with no inference required. Camera-only systems must infer depth from 2D images using neural networks, which introduces failure modes LIDAR doesn't have. Radar is very valuable when LIDAR and cameras give ambiguous data.
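The stereo case makes the failure mode concrete: depth is Z = f·B/d, which blows up as disparity d shrinks, so a fixed sub-pixel matching error turns into a huge range error at distance. Direct ranging has no such amplification. A sketch with made-up but plausible camera parameters (focal length and baseline are illustrative assumptions):

```python
# Stereo depth from disparity: Z = f * B / d
f_px = 1000.0  # focal length in pixels (assumed)
B = 0.3        # camera baseline in metres (assumed)

def depth(disparity_px):
    return f_px * B / disparity_px

for d in (30.0, 3.0):                  # near object vs. far object
    z = depth(d)
    z_err = abs(depth(d - 0.25) - z)   # effect of a 0.25 px matching error
    print(f"disparity {d:4.1f} px -> depth {z:6.1f} m, "
          f"0.25 px mismatch costs {z_err:.2f} m")
```

The same quarter-pixel mismatch costs centimetres up close but many metres at range, which is one reason inferred depth degrades exactly where braking distance matters most.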
On what metrics has Tesla overtaken Waymo? Deployed robotaxi revenue miles? No. Disengagement rates? No published comparable data. Safety per mile in driverless operation? No.
"people have driven coast to coast without a single intervention on FSD including parking spot to parking spot"
I find this claim very dubious. Prove it. Teslas never drive empty for a very good reason.
Writing this and linking to fake Wikipedia is actually hilarious.
They do not. They have a very small number of them open to a select number of people, not the general public, and they are limited to even smaller areas. You need to understand that Musk is NOT an engineer; he is more of a con man desperate to inflate Tesla's stock price. If he says self-driving cars don't need LIDAR, then they must actually need it.
https://futurism.com/future-society/polymarket-fortune-betti...
Polymarket user David Bensoussan has made $36,000 by betting against Musk's wildly optimistic self driving predictions.
linking to grokipedia feels like intentional rage-baiting.
What's wrong with Grokipedia? It's a bit less woke/far-left, more balanced.
https://www.forbes.com/sites/alanohnsman/2025/08/20/elon-mus...
https://futurism.com/leaked-elon-musk-self-driving
For nearly a decade Elon Musk has claimed Teslas can truly drive themselves. They can’t. Now California regulators, a Miami jury and a new class action suit are calling him on it.
https://en.wikipedia.org/wiki/List_of_predictions_for_autono...
Of course MicroVision is only claiming their LIDAR to be suitable for advanced driver assistance, but ADAS encompasses a wide array of capabilities: basically everything between cruise control and robotaxis. So there's no definition of how much LIDAR you need to do the job, just however much you feel like. Tesla feels like none at all.