Full self-driving is either a hard problem, requiring advanced sensors and algorithms, or an easy problem, solvable with cameras and neural nets.
If it's hard, Waymo has an insurmountable lead. If it's easy, everyone can easily replicate what Tesla is doing.
I'm not seeing any strategic moat on Tesla's side.
jillesvangurp · 17h ago
I agree with you on the moat side. Tesla is very late to this market. But, I would also argue that Waymo's moat isn't that wide either. Waymo is probably behind their Chinese counterparts that are already operating at a larger scale. More rides. Larger cities. And more of them.
Waymo isn't the only player in the market, and unlike Tesla and their Chinese competitors, they aren't really structured as a manufacturing operation. They are modifying consumer cars made by others, probably at great cost. That raises the question of how they are going to scale. Waymo is so far not a scalable operation. They can manage thousands of cars. But can they handle millions? Or make a profit?
Chinese manufacturers, like Tesla, have a ready answer to that by virtue of their proven manufacturing skills. And it seems they so far don't need Waymo's software to get their cars to drive themselves. Tesla is late. But they can build cars at scale. So, I'm not ready to claim they are too late yet.
I think Waymo might have a bright future ahead licensing their platform to manufacturers that are, so far, not showing many signs of having their own version of this.
As for whether this is technically possible or not, I think it's hard to argue that it's not from the back of a self-driving taxi. I've not had that pleasure yet (there are none near where I live) but I've seen enough to not be that foolish.
Zigurd · 16h ago
Remember 10 foot tall Ivan? Lots of people are thinking about the Chinese similarly. Waymo is probably not behind the Chinese. Huawei has the potential to make good AV software, but they're not caught up to Waymo yet.
AV demos are easy. Scaling AVs is hard if you don't have mature technology. You need way too many remote supervisors to be able to scale, even in China where it's relatively cheap to hire humans to fill the gaps in your AI.
I'm not arguing for complacency. The opposite really. But without realistic assessments, it's easy to get the impression that something is a lost cause or that protecting technologies doesn't matter.
Alive-in-2025 · 13h ago
There's also no moat because Zoox and Waymo will compete and absorb/reduce any profit that Tesla finds if they ever deliver a working system. Waymo has been scaling and has 250k paid rides a week.
It's just not that expensive or hard to add sensors to an existing car and use a new platform. BYD and other Chinese automakers have similar FSD with lidar for $30k or less. It's certainly true that Tesla can crank out a lot more cars as their factory utilization goes down by the day. The millions of immigrants with low-cost vehicles driving for Uber are also going to be hard to outcompete.
jonstewart · 16h ago
I wonder how long a Chinese FSD car would last on the BWI parkway coping with Maryland drivers before it pulled over for a good cry.
potato3732842 · 14h ago
It will last until it gets hit for doing something completely legal but stupid, shortsighted, and devoid of situational awareness or a functioning mental model of how drivers actually act.
gcr · 16h ago
Who would you trust more: an FSD that’s learned to survive Xi’an rush hour and Wenzhou’s interpretive traffic ballet, or one that’s learned to panic brake in Palo Alto?
America doesn’t have a monopoly on challenging driving situations, y’know. :-)
op00to · 14h ago
Is it still the case in China that it's cheaper to kill a pedestrian you hit with your car than it is to pay out damages? This might be propaganda!
If so, I wonder if or how that affects the algorithms for pedestrian avoidance. :)
steveBK123 · 16h ago
I had a Tesla 2018-2022, and switched to a competing EV brand at that point. The fact that Tesla has removed sensors since then while competitors have added says a lot.
My car doesn't even have LIDAR but it has 5 radar sensors (corners & forward) so it can do all sorts of neat stuff. For example, it has actual working adjacent lane blindspot detection, which Tesla never dialed in. It also can warn you about oncoming cars at intersections or when backing out of parking spots. It even flashes lights on the doors if you try to open a door when a car is coming in the adjacent lane.
None of that is full self driving but it leaves me wondering what Tesla can ever accomplish with cameras-only.
rimunroe · 16h ago
> For example, it has actual working adjacent lane blindspot detection
Is this not standard? I don’t think my Hyundai Ioniq 5’s blind spot detection has ever failed to warn me about a car in my blind spot
steveBK123 · 16h ago
Tesla literally didn't have it in 2018, and slowly added a poor implementation of it over time. Recall the car hardware had no light on the mirror or in the window sill to alert you, so their early version required looking at the screen.
Current Teslas at least have some sort of light in the sill, but again it's using cameras only: no radar, sonar, whatever.
gcr · 16h ago
That’s really surprising. Whenever I rent a car, I’ve relied on that feature as standard issue for modern vehicles. It’s like selling a Tesla without an air conditioner.
philistine · 13h ago
Teslas were ahead of the curve on most things when they first came out. Tesla has refused to adopt the model-year system, so their changes have been haphazard, often regressive, and ultimately very slow.
They are still ahead on some technologies (the Cybertruck's one and only interesting technology being its drive-by-wire system), but the industry has caught up and overtaken Tesla with the relentless progress of the model year.
steveBK123 · 11h ago
That and during the COVID supply chain crisis, Musk clearly shifted to "what can we remove from the car to make it cheaper" mode. This does not drive tech innovation but it does provide profitability. A lot of the decisions re: removal of sensors, stalks, etc flow from that.
And as others have pointed out - not only is Musk insistent on camera-only, but they are not even good state of the art cameras.
For years they were using cameras so bad that even though it provided a "security system recorder" for your car, you couldn't make out license plates most of the time. Kind of useful to be able to read the license plate if you want to actually have anything to give to police when a car hit & runs you.
Only recently did they update the cameras to... still not even 4K.
Sure reading license plates is not the primary purpose of these cameras, but then again .. I'm supposed to trust my life to a car that "sees" with 2010 era iPhone cameras?
Veserv · 9h ago
It is, in fact, illegal for a human to drive in California with vision comparable to the Tesla HW3 cameras.
California's minimum vision requirement to operate a vehicle is 20/40 vision [1], which corresponds to an arc resolution of 30 pixels per degree of field of view [2].
HW3 cameras have a horizontal resolution of 1280 pixels [3]. There are three front-facing cameras with fields of view (120, 50, 35 degrees) and different focal lengths with optimal distances (60, 150, 250 meters), placed next to each other, preventing the use of binocular vision; their front-facing cameras are effectively “one-eye” driving.
The cameras have an arc resolution of ~11 pixels per degree up to 60m, ~25 pixels per degree up to 150m, and ~36 pixels per degree up to 250m. That corresponds to ~20/120 below 60m, ~20/50 at 150m, and ~20/30 at 250m.
In comparison, you are considered legally blind if operating with 20/200 vision which their below 60m camera just barely surpasses. Up to 150m, their cameras fail to meet minimum vision requirements to operate a vehicle in California.
Even on HW4, the 120 degree camera for below 60m, which is the majority of complex high-acuity maneuvers, only has a horizontal resolution of 2896 pixels which is only ~24 pixels per degree corresponding to ~20/50 vision which is below the minimum vision requirements in California.
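The acuity arithmetic above can be sketched in a few lines (camera figures are the ones in this comment; the 60-pixels-per-degree equivalent for 20/20 vision is the standard approximation):

```python
# Convert a camera's horizontal resolution and field of view into an
# approximate Snellen equivalent. 20/20 vision resolves roughly 60
# pixels per degree, so the Snellen denominator is about 20 * 60 / ppd.
def pixels_per_degree(h_res: int, fov_deg: float) -> float:
    return h_res / fov_deg

def snellen_denominator(ppd: float) -> float:
    return 20 * 60 / ppd

cameras = [
    ("HW3 wide (120 deg)",  1280, 120),
    ("HW3 main (50 deg)",   1280, 50),
    ("HW3 narrow (35 deg)", 1280, 35),
    ("HW4 wide (120 deg)",  2896, 120),
]
for name, h_res, fov in cameras:
    ppd = pixels_per_degree(h_res, fov)
    print(f"{name}: {ppd:.0f} px/deg, roughly 20/{snellen_denominator(ppd):.0f}")
```

The wide camera, the one doing the close-range work, comes out worst on both hardware generations.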
It's crazy cuz my 2022 BMW has a 4K front-facing camera (2x the brand-new Tesla HW4 resolution), on top of an array of other cameras plus 5 radar modules... and they don't pretend they can FSD with that.
My car just has really really good highway ADAS. Which is all that most people need.
rimunroe · 15h ago
Ah. I misunderstood. I thought that by saying it could do all kinds of neat stuff you meant compared to other manufacturers.
boshalfoshal · 12h ago
You are thinking about "hard" and "easy" in the wrong frame of mind. What Tesla does is not "easy" either. Their moat is manufacturing and the R&D they've spent on codesigning their HW and SW stack, and their insane supply chain.
Ford does not suddenly have several million cars with 8-9 cameras to tap into for training data, nor does it have the infrastructure/talent to train models with the data it may get. I think you are underselling the Tesla moat.
It's the same reason why there are only 3-4 "frontier" AI labs, and the rest are just playing catchup, despite a lot of LLM improvements being pretty well researched and open in papers.
rcxdude · 17h ago
The notional moat is that it's easy given enough training data, which Tesla has a significant hoard of. I'm skeptical of this (they've been in that situation for a long time now), but that's the argument.
Zigurd · 16h ago
Tesla has imagery from lo-res cameras. If it turns out that LIDAR and/or radar is needed for AVs, the value of that data is questionable.
onlyrealcuzzo · 16h ago
How hard the problem is may depend on how successful you are at lobbying politicians to force your solution down the public's throat...
tomp · 17h ago
You simply have no idea how AI works.
The key ingredient is data - the bitter lesson. It's not about better algorithms, but simply about algorithms that can process more data efficiently (e.g. transformers).
Tesla is one of the few companies that have a data flywheel - a fleet of (non-self-driving) cars collecting real-world data worldwide all the time at massive scale!
Now that is an insurmountable lead. (Along with good engineering, which, believe it or not, is still a competitive advantage - see e.g. German car companies unable to launch a single useful on-board computer, let alone a software-defined self-driving car.)
Google is one of the few companies that could compete, even without Waymo, because of YouTube.
Spooky23 · 17h ago
That’s the Muskophile babble. Cool story, where is it?
Reality is the Tesla product in this space hasn’t advanced in a decade. They hit a wall. They have promised that the next big thing is just over the next hill, but like the Roadster, hasn’t quite arrived yet.
testing22321 · 13h ago
Did you see last week's video of FSD in Melbourne, even doing a hook turn?
It’s getting better. Slowly, but surely.
ModernMech · 11h ago
Teslas can be programmed to do a lot of things, that's not a question.
The problem with Teslas is that, given the deficient sensor suite Musk has insisted on, they can't be programmed to not do things, like not run through a picture of a road like Wile E. Coyote.
testing22321 · 9h ago
That video was not FSD, but an old version of “driver assist”.
Given how quickly things are changing, I don’t think it’s useful to say iPhones are crap and always will be because you just got a 6 running super old iOS.
Things are not changing rapidly, this has been a failure mode on Teslas for almost a decade; it's been a problem at least since 2016 when Joshua Brown was decapitated by his Tesla running on AP. People keep saying "no, that was the last version. The new software/hardware version fixes it", but the looney tunes test demonstrates Tesla's sensor stack continues to be fundamentally insufficient.
> My chin split open, and I had to get 7 stitches. After the impact, I was hanging upside down watching blood drip down to the glass sun roof, not knowing where I was bleeding from. I unbuckled my seatbelt and sat on the fabric interior in the middle of the two front seats, and saw that my phone’s crash detection went off and told me the first responders were on their way. My whole body was in shock from the incident.
> Wally said he was on Tesla FSD v13.2.8 on Hardware 4, Tesla’s latest FSD technology. He requested that Tesla send him the data from his car to better understand what happened.
This is not progress, these cars should not be on the road, they are a menace to society. I would never put my life or my family's lives in the hands of Tesla hardware and software. But let me guess, it'll be fixed in HW5, sure. We all just have to be the beta testers.
infecto · 16h ago
I think the person you are responding to has too much hype, but you are equally too negative. Without a doubt Musk overpromises and underdelivers constantly; I would never trust him or his words. But Tesla's FSD has been improving nicely, especially over the last 3 years. To say it's stalled for a decade is wrong.
Spooky23 · 9h ago
Sure, they’ve made incremental improvements and have a better Level 2 system, but it was promised (and sold as) Level 4 in 2016, and the company claims to be “approaching 4,” and now claims to be shipping robotaxis in production in a few months.
Meanwhile, Google actually delivered the robot taxi in SFO, and it’s amazing.
The sensor strategy really says it all. Why skimp on a minor part of the BOM, which enables a use case with 50-100x value?
The whole Tesla schtick is such a bizarre Lucy and the Football scenario, over and over again.
philistine · 13h ago
The overall point is that automated driving limited to cameras can still improve, but the threshold is probably very close to the quality, in terms of accidents and mistakes, of human driving.
The point is that automated driving with a full suite of sensors can be an order of magnitude safer than just using cameras. The current slow improvements at Tesla do not imply an unlimited amount of improvement.
ModernMech · 11h ago
Tesla has been getting closer to FSD the same way climbing up a ladder gets you closer to the moon. They can keep going the direction they're going, but at this point it's clear there's a pretty hard limit to how close they're going to get without significantly rethinking their sensor strategy.
rsfern · 17h ago
This is sort of a misguided take IMO - as if LiDAR and other sensor streams are somehow not large scale data while video is
The bitter lesson isn’t fundamentally even about data. The key ingredient is computation (which does scale with data in modern deep learning). There’s even a whole theme on search outperforming learning - until learning methods changed to leverage computation!
I think in a lot of applications, richer data yields better scaling performance. So the bet for self driving is that the complexity of multiple sensor fusion gives a much better constant factor or exponent in the power law scaling of performance with data and compute
steveBK123 · 16h ago
Right, this is why we periodically see Teslas in FSD go absolutely berserk in ways more sensor-laden competitors do not appear to.
This was literally 2 months ago, so miss me with the "you're wrong, the latest FSD solved it all this time bro, for real, trust me (the 100th time I've heard this)".
birn559 · 13h ago
Will we be able mid-term to rely on LLMs not hallucinating and causing crashes? Even if the probability is low, the thought that the AI might do something crazy because it's hallucinating is terrifying, so that might be a barrier for adoption. For the same reason, will a (fully) LLM driven car ever be allowed on Western streets? I have serious doubts regarding Europe, at least.
Leno1225 · 11h ago
Large Language Models are never gonna drive cars, they’re plausible text-generating machines, not general-purpose computer intelligences
steveBK123 · 13h ago
There are no LLMs driving cars at the moment
adwn · 16h ago
From the article:
> Despite its name, Full Self-Driving (FSD) is still considered a level 2 driver assist system and is not fully self-driving.
How the hell has Tesla not been sued into the ground by now?
brk · 16h ago
Almost every vehicle has multiple cameras in it today, plus cellular connectivity. Tesla doesn't have a major advantage here. Plus, the data isn't immediately valuable on its own, it needs to be labelled and fed back into the training. This is all very far from a data flywheel analogy.
Musk equates machine vision to human vision, but that is an over simplification, and the best MV algorithms and methods are still miles away from human capabilities. FSD is very reliant on depth perception, which is much easier to solve with sensors other than stereo vision.
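To illustrate the depth-perception point: stereo depth falls out of the disparity between two camera views (depth = focal length × baseline / disparity), and a fixed one-pixel matching error that is negligible up close becomes enormous at range. A quick sketch with made-up focal length and baseline:

```python
# Stereo depth from disparity: depth = focal_px * baseline_m / disparity_px.
# A fixed +/-1 px matching error causes small depth error up close but
# huge error at range, which is part of why lidar/radar help.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

FOCAL_PX = 1000.0   # assumed focal length in pixels (hypothetical)
BASELINE_M = 0.3    # assumed camera separation in meters (hypothetical)

for disparity in (30.0, 5.0, 2.0):
    depth = stereo_depth(FOCAL_PX, BASELINE_M, disparity)
    near = stereo_depth(FOCAL_PX, BASELINE_M, disparity + 1)
    far = stereo_depth(FOCAL_PX, BASELINE_M, disparity - 1)
    print(f"disparity {disparity:.0f}px: {depth:.1f} m "
          f"(+/-1px gives {near:.1f} to {far:.1f} m)")
```

At a disparity of 2 px the same one-pixel error doubles the depth estimate; lidar's error stays roughly constant with distance.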
I also don't get your German car company statement, Mercedes has been a technology leader in the automotive space across a number of fronts for quite a while.
ricardobayes · 17h ago
It's not that car companies are unable, but rather, margins on new cars are pretty thin so they are trying to save every last penny.
The key ingredient is indeed data, and also, depending on the stack, hardware. If "true" level 5 self-driving is only possible with LIDAR and up-to-date HD maps, then it won't happen for some time. I foresee that unbounded "full" self-driving will either never happen or with severe boundary conditions only.
fsh · 16h ago
Transformer LLMs are good at generating text since they were trained on huge volumes of high quality texts. There is no reason to assume that feeding a neural net with unlabeled garbage data (i.e. video recordings from random cars) is going to lead to anything.
aatd86 · 17h ago
How would youtube help with driving?
tomp · 17h ago
Massive amount of "world" data - enough for Google to successfully train Veo3, a "video generation" model that is extremely good, and evidently includes a decent "world model" i.e. a model that can generate or, more accurately, predict the (short-term) "future" - like "what happens if you let go of an apple" (it falls down) etc.
This kind of "world models" that can understand physics enough to be able to predict short term future (like humans can) are crucial for any kind of real-world AI (i.e. robotics, including self-driving), because they constitute what could be termed as "physical common sense" (that humans have, but also animals, to some degree).
Is it enough for self-driving? No, you also need to understand road rules, communicate with humans (pedestrians and fellow drivers), etc. but it's a good, possibly necessary, step - it allows you to better handle many unpredictable (tail-of-distribution) situations.
It will turn out there isn't a linear relationship between data volumes and successful self driving. There will be a wall somewhere that prevents it from being achieved regardless of data (assuming current AI techniques and hardware available in the foreseeable future). So what will happen is Musk and co will demand that roads be redesigned to accommodate their tech (e.g. cars network and human drivers are banned from major highways).
tomp · 17h ago
Maybe, but so far we don't really have any examples of good AI (or even barely working AI like GPT4o) that wasn't trained on massive amounts of data.
Is data enough? Maybe not, there's a lot of progress on RL now that can do wonders without even more data. Is data necessary? No evidence to the contrary, yet.
testing22321 · 13h ago
> what will happen is Musk and co will demand that roads be redesigned to accommodate their tech
Every road, in every country on earth?
No, that is never going to happen.
x187463 · 15h ago
FSD is incredible in that it works everywhere, most of the time. The problem is it works most of the time. I do not think the current FSD sensor suite (camera only) will ever be safely capable of true unsupervised FSD. I use FSD every day. I would seriously estimate FSD has put 95% of the miles on my car. I regularly encounter the following:
* treating red lights like stop signs
* completely ignoring flash reds
* stopping at flashing yellows
* ignoring school zones
* driving 10mph under the limit
* driving 20mph over the limit
* ignoring turn only lanes resulting in being dumped into oncoming traffic until it leisurely moves back into the correct lane
* waiting until the last minute to get into the correct lane to make a turn/take an exit
* ignoring signs indicating a lane ends and having to force its way into traffic
* ignoring signs a lane ends and changing into the lane only to have to change back immediately
* pulling out in front of cars, especially at night or other camera obscuring events
* ignoring cars merging onto the highway and blocking them until I take over and get out of the way
I treat it like cruise control. And based on the recent accident in which FSD decided to run itself off a perfectly straight, open, clear road and into a tree, I keep my hand on the wheel.
Even in a geo-fenced environment, FSD is NOT going to work for unsupervised robo-taxi applications. Musk needs to get out of the way and let them stick LIDARs on these cars.
All that being said, it has also handled really complex driving in and around DC/Baltimore, including some tricky merges/transitions that are hard to track as a human driver. Its knowledge of all of the surrounding vehicles allows it to make these maneuvers with far more confidence than I would. It also frees me up to track the bigger picture of traffic while not having to manage my speed/lane position. It's like driving with a friend who's not necessarily the best driver, but is basically competent. Let's just hope it doesn't decide to drive me into a tree for no particular reason.
op00to · 14h ago
I don't want to be seen as defending Tesla's overpromising and underdelivering, but everything you mentioned I see on a daily basis from all makes and models of motor vehicles on the roadway. Of course, FSD should be better than humans, but it's somewhat humorous to me that right now it's got the same problems as people.
ddalex · 13h ago
It’s not humorous it’s expected - when you train your system on data from millions of average drivers, why would you be surprised that it behaves like an average driver?
And the average human driver is terrible at driving
onlyrealcuzzo · 13h ago
Terrible is subjective.
There's tons of people on the road, so there's plenty of quite bad drivers you'll encounter every day.
But driving is relatively easy. Saying that the average driver is terrible seems like a bit of a stretch.
I'll give you that there's millions of terrible drivers. That doesn't make the average terrible, though.
op00to · 12h ago
I agree - I think the average driver is average. Not great, not bad! That's why they're average, right? If the average driver were terrible, then terrible would be average, and we'd be back to where we started.
The fact that we're all not constantly getting smashed into means that on the whole drivers are competent.
const_cast · 3h ago
In terms of absolute safety, the average driver is quite terrible: following too closely, swerving in and out of lanes, not stopping at stop signs, driving 10 over. I see this daily.
On a typical Texas highway everyone is driving 10 over, and the typical following distance is about one car length. At 80 miles an hour. Lord have mercy if anything happens on the road.
bhouston · 17h ago
Maybe Elon isn’t Stark or Scott and the world is more complex than simple analogies.
Elon is a great promoter and does have visionary ideas and has often been able to execute them when he is paying attention.
He is also running a harem to get 100 or more children, seems racist for many reasons, overpromises (but this comes with being a promoter), and has biased X/Twitter to favor whatever ideas he currently likes.
krapp · 17h ago
I think it's more accurate to say other people have been able to execute on his ideas (which aren't even entirely his) when he isn't allowed to interfere.
Cybertruck and X are what you get when Elon Musk has his limiters off. SpaceX and Tesla are what you get when a competent team knows how to manage him and his ego.
bhouston · 16h ago
Yeah other people have executed but this actually true of anyone who starts companies and employs tons of people. He isn’t a sole practitioner in a lab.
Also, the ideas are not completely his, mostly because this isn't novel basic research. This is applied engineering, and thus one needs to grab onto ideas that are ready for implementation at scale. This means they are well-known ideas.
I don't disagree that Elon messes things up a lot, especially recently. DOGE cuts are being undone via court rulings nearly as fast as he made them.
krapp · 16h ago
There's a greater tendency to treat Elon like a Tony Stark archetype whose singular genius and skill are responsible for his accomplishments than there is for other CEOs. No one would claim that Steve Jobs personally invented portable music players, or that Jeff Bezos invented internet commerce, but they will claim that there would be no electric cars or private space travel if not for Elon Musk, even given the same employees and funding.
bhouston · 15h ago
Steve Jobs is responsible for the iPhone, the iPod, and the iPad because he allocated resources to those initiatives, and people did give him credit for them. He wasn't the first to think of the smartphone, but he did create the iconic realization that also got mass adoption.
Jeff Bezos does get credit for Amazon's products, like AWS and Kindle among others, because he allocated the resources and managed the company in that direction.
Coeur · 14h ago
"Lane-departure detection, front-collision avoidance, adaptive cruise control, emergency break assist—all of these features are powered by lidar."
That's not true. Lane-departure detection mostly uses cameras and radar, adaptive cruise control usually uses radar and sometimes cameras, and emergency brake assist is often just a brake-pedal sensor. Front-collision avoidance uses lidar in some cars from the last 2 years.
Lidar might very much be needed for full self-driving, but it is not yet used in many cars on the road today.
The writer does not have all his facts straight.
walthamstow · 17h ago
The thing about Tesla having more cars on the road is obviously true, but Google Street View captures my house in London at a decent frequency, about twice a year, so those cars are driving around capturing data all the time in major cities.
I assume they're capturing more than just images, does anyone know for a fact?
aspenmayer · 17h ago
> I assume they're capturing more than just images, does anyone know for a fact?
I'm not sure what they're doing now, but I assume that they're fingerprinting everything by MAC addresses, SSIDs, and other identifiers. For their sake, I hope they have stopped intercepting unencrypted wifi traffic, which was proven in court to be something Google Street View cars did do.
> Joffe v. Google, Inc. is a federal lawsuit between Ben Joffe and Google, Inc. Joffe claimed that Google broke one of the Wiretap Act segments when they intruded on the seemingly "public" wireless networks of private homes through their Street View application. Although Google tried to appeal their case multiple times, the courts favored Joffe's argument. Ultimately the Supreme Court declined to take the case, affirming the decision by the United States Court of Appeals for the Ninth Circuit that the Wiretap Act covers the interception of unencrypted Wi-Fi communications.
aspenmayer · 17h ago
Scooped by ~2 min by users xnx and prepend, but I like to think my flavor text was worth the wait. I'm likely biased on that point.
Thanks! I thought the 3D models in Google Maps were just Extremely Clever Math. Lidar and Extremely Clever Math makes more sense.
prepend · 17h ago
They were sued, and lost, for capturing and analyzing data off all the WiFi networks they drove by. [0] I assume they are recording all the metadata and data that they can to stay compliant with this.
My guess is they have a list of every MAC address of every device they can find, geolocated. And then they match that to data from all those apps that ask to discover devices on my local network. Now they know how old my tv and lightbulbs are, etc etc
Wikipedia URLs ending in punctuation break unreliably depending on caching and the platform, so if you put a # on the end to escape the trailing punctuation, it fixes the link without having to worry about percent-encoding.
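A quick sketch of the trick (the second variant percent-encodes the trailing period instead of appending a fragment):

```python
# Two ways to keep a trailing period attached to a Wikipedia URL:
# 1) append an empty fragment ("#"), so link parsers don't read the
#    period as end-of-sentence punctuation;
# 2) percent-encode the final period as %2E.
url = "https://en.wikipedia.org/wiki/Joffe_v._Google,_Inc."

with_fragment = url + "#"
percent_encoded = url[:-1] + "%2E"

print(with_fragment)     # the fragment is ignored by the server
print(percent_encoded)   # decodes back to the same article title
```

Browsers ignore an empty fragment, so both forms resolve to the same article.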
Timon3 · 13h ago
That's a very useful tip, thank you!
AStonesThrow · 10h ago
The problem here is that the HN URL encoding interpreted the period as the end-of-sentence rather than as part of the URL. It would be simply bizarre for any browser or web server to choke on a perfectly legal dot to end a URL.
https://en.wikipedia.org/wiki/Man_or_Astro-man%3F is another story -- since "?" is a reserved character in URLs for CGI queries. Enter the question mark anyway, and the article comes up! Why? There's a redirect without it!
For Wikipedia's gory details on technical restrictions for article titles: (note that HN properly parses this article title ending in a right-paren)
> The problem here is that the HN URL encoding interpreted the period as the end-of-sentence rather than as part of the URL.
That’s the first problem.
> It would be simply bizarre for any browser or web server to choke on a perfectly legal dot to end a URL.
I agree? I never said anything like this. My original comment was:
> Wikipedia urls ending in punctuation are unreliably broken or not depending on caching and the platform, so if you put a # on the end to escape it, it fixes it, without having to worry about percent encoding.
I mentioned the platform specifically, which in this context could be either the server context or client context. You mentioned server/client context, as in what HN serves the user or vice versa. I mentioned that and client context inclusively. If you’re correcting me, assume I need you to show my error.
> Enter the question mark anyway, and the article comes up! Why? There's a redirect without it!
It seemed that the issue was incorrectly diagnosed, and MediaWiki or Wikipedia was being blamed for the error, and that you had also proposed a rather strange workaround for it. The issue could be solved completely within the context of Hacker News post markup, as linked.
If you put a hashtag at the end of a Wikipedia URL, then I suppose it works, until the URL already has a hashtag in it, because these are used for section headings. It's not called "escaping" anything, it's just... an empty URI fragment: a link to the top of the article?
There is also nothing preventing a Wikipedia editor from creating a redirect from the title y'all linked. In fact it's a perfectly fine idea for a redirect. The fact is that the canonical title is in US English, and in US English, "Inc." takes a period as an abbreviation.
There's nothing wrong with your workaround or your percent-encodings to escape some dubious glyph, but I hoped to clarify things and derail the thread further on pedantic technicalities. Thank you for coming to my TED talk.
aspenmayer · 2h ago
> It's not called "escaping" anything, it's just... an empty URI fragment: a link to the top of the article?
It’s an escape from pedantry.
I appreciate your gentle needling, as imprecision in my words reflects an imprecision in my rhetoric, making it vulnerable to nitpicking. It’s okay to be wrong if it allows me to make a larger point in favor of my position, but at a cost to readers’ time and patience.
Thanks for your close reading and feedback, it helps.
kklisura · 17h ago
> For years, Elon Musk has been promising that Teslas will operate completely autonomously in “Full Self Driving” (FSD) mode. And when I say years, I mean years...
> Musk was sued by a group of shareholders who claimed that the Tesla boss had defrauded them with his lofty claims about the capabilities of Tesla’s advanced driver assistance tech, including Autopilot and Full Self-Driving. [1]
> After hearing the case, U.S. district Judge Araceli Martinez-Olguin claimed that the plaintiffs had failed to prove that the Tesla boss had acted with “deliberate recklessness,” .... That’s because Musk’s lawyers didn’t decide to argue that Tesla’s claims about its self-driving abilities were perfectly accurate. Instead, the legal team representing Musk basically said that nobody would realistically believe what Musk was banging on about... [1]
> In a mind-numbing statement, Musk’s lawyers argue that his claims about Tesla Autopilot safety were “vague statements of corporate optimism are not objectively verifiable” [2]
> The lawyers even argued, successfully, that “no reasonable investor would rely” on many of the alleged misleading statements because they are “mere puffing” [2]
To save your time: this is another article that focuses on the author's personal (and political) grievances with Elon Musk, as opposed to being a substantive technical review of FSD.
FTA:
> we’ve turned over large portions of America to insanely rich people who have no idea what they’re doing
> It’s about oligarchy
> What happened? Elon Musk was stupid. That’s what happened.
> the stock market pretend that Musk hasn’t failed
Anyone who's seen recent FSD videos on YouTube will understand the incredible progress that has been made, and how FSD is a real-world solution, whereas Waymo (geo-fences, "safety drivers", etc.) isn't comparable.
sega_sai · 16h ago
Are you saying that the factual content of the article is incorrect?
If not, yes the author may have views on Musk, but that is irrelevant in the context of the article.
epgui · 16h ago
Look, I can see just as well as anyone else that Musk is wildly off on his timelines, and I really don't like the guy either. That said, the article is full of facts that are decontextualized or presented in a misleading/misled manner.
zippothrowaway · 16h ago
Because YouTube videos now pass as data rather than self-interested anecdotes.
What a sad world we live in.
epgui · 16h ago
> self-interested anecdotes
It's more than that-- many people are spending a lot of time just testing FSD, filming it, actively looking for failure modes, and being quite methodical about it. We don't have the same data Tesla has about its fleet obviously, but we can all go and observe the progress being made.
cbeach · 16h ago
Hundreds of thousands of hours of people physically demonstrating FSD technology in the real world, with production cars and real users (who are not employed by the manufacturer) is meaningful evidence, even if it happens to be published on YouTube
jp57 · 16h ago
“Of course we already have full self-driving technology. It’s called Waymo…”
Is it FSD, though? Understand that Waymo has remote tele-operators who can intervene if necessary. How many are there? How many cars does each one handle? I’m not sure the company makes that public.
420official · 13h ago
Waymo is firmly at SAE Level 4, "automated driving", vs. Tesla at Level 2, "driver support".
You're correct that Waymo has operators who can jump in in the event something unusual happens, but the vast majority of the time the vehicle is operating autonomously within its defined area. All of that is consistent with SAE Level 4, where no human is driving while the automated driving feature is engaged.
Google still carries trauma from buying Motorola, and all of Motorola's employees. They would never have expanded Waymo if it meant hiring buildings full of remote operators. And they've demonstrated that that's not how it works anyway.
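For reference, the J3016 taxonomy being cited can be sketched as a small lookup table (the short names are paraphrased from the public SAE summary):

```python
# SAE J3016 driving automation levels (names paraphrased).
# Levels 0-2 are "driver support": a human must supervise at all times.
# Levels 3-5 are "automated driving": the system drives while engaged.
SAE_LEVELS = {
    0: ("No Driving Automation", "driver support"),
    1: ("Driver Assistance", "driver support"),
    2: ("Partial Driving Automation", "driver support"),
    3: ("Conditional Driving Automation", "automated driving"),
    4: ("High Driving Automation", "automated driving"),
    5: ("Full Driving Automation", "automated driving"),
}

def category(level: int) -> str:
    """Which side of the J3016 support/automation split a level falls on."""
    return SAE_LEVELS[level][1]
```

So Waymo at Level 4 is `category(4)`, "automated driving", while Tesla's FSD at Level 2 is `category(2)`, "driver support".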
ddalex · 13h ago
They demoed what the operators do: they can only review the data and select one of a few possible options; they never take control of the car.
I'm not arguing for complacency. The opposite really. But without realistic assessments, it's easy to get the impression that something is a lost cause or that protecting technologies doesn't matter.
It's just not that expensive or hard to add sensors to an existing car and use a new platform. BYD and other Chinese automakers offer similar FSD-style systems, with lidar, on cars costing $30k or less. It's certainly true that Tesla can crank out a lot more cars, even as its factory utilization goes down by the day. The millions of immigrants driving low-cost vehicles for Uber are also going to be hard to outcompete.
America doesn’t have a monopoly on challenging driving situations, y’know. :-)
If so, I wonder if or how that affects the algorithms for pedestrian avoidance. :)
My car doesn't even have LIDAR but it has 5 radar sensors (corners & forward) so it can do all sorts of neat stuff. For example, it has actual working adjacent lane blindspot detection, which Tesla never dialed in. It also can warn you about oncoming cars at intersections or when backing out of parking spots. It even flashes lights on the doors if you try to open a door when a car is coming in the adjacent lane.
None of that is full self driving but it leaves me wondering what Tesla can ever accomplish with cameras-only.
Is this not standard? I don’t think my Hyundai Ioniq 5’s blind spot detection has ever failed to warn me about a car in my blind spot
Current Teslas at least actually have some sort of light in the sill, but again it's using cameras only: no radar, sonar, whatever.
They are still ahead on some technologies, the Cybertruck's one and only interesting piece being its steer-by-wire system, but the industry has caught up and overtaken Tesla through the relentless progress of each model year.
And as others have pointed out - not only is Musk insistent on camera-only, but they are not even good state of the art cameras.
For years they were using cameras so bad that, even though the car provided a "security system recorder", you couldn't make out license plates most of the time. It's kind of useful to be able to read a license plate if you want anything to give to the police after a hit-and-run.
Only recently did they update the cameras, and still not even to 4K.
Sure, reading license plates is not the primary purpose of these cameras, but then again... I'm supposed to trust my life to a car that "sees" with 2010-era iPhone cameras?
California's minimum vision requirement to operate a vehicle is 20/40 [1], which corresponds to an arc resolution of 30 pixels per degree of field of view [2].
HW3 cameras have a horizontal resolution of 1280 pixels [3]. There are three front-facing cameras with fields of view of 120, 50, and 35 degrees and different focal lengths, with optimal distances of 60, 150, and 250 meters. They are placed next to each other, which prevents binocular vision, so the front-facing cameras are effectively driving with one eye.
The cameras have an arc resolution of ~11 pixels per degree up to 60m, ~25 pixels per degree up to 150m, and ~36 pixels per degree up to 250m. That corresponds to ~20/120 below 60m, ~20/50 at 150m, and ~20/30 at 250m.
In comparison, you are considered legally blind with 20/200 vision, which the below-60m camera only barely surpasses. Up to 150m, the cameras fail to meet California's minimum vision requirement to operate a vehicle.
Even on HW4, the 120-degree camera for below 60m, where the majority of complex high-acuity maneuvers happen, has a horizontal resolution of only 2896 pixels. That is ~24 pixels per degree, corresponding to ~20/50 vision, still below California's minimum.
[1] https://eyewiki.org/Driving_Restrictions_per_State
[2] https://en.wikipedia.org/wiki/Visual_acuity
[3] https://www.blogordie.com/2023/09/hw4-tesla-new-self-driving...
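The arithmetic above is straightforward to reproduce. A minimal sketch, assuming (per the acuity reference) that 20/20 vision corresponds to ~60 pixels per degree, so 20/40 is ~30:

```python
def pixels_per_degree(h_pixels: float, fov_degrees: float) -> float:
    """Horizontal arc resolution of a camera."""
    return h_pixels / fov_degrees

def snellen_denominator(ppd: float, ppd_20_20: float = 60.0) -> float:
    """Approximate Snellen '20/x' denominator for a given arc
    resolution, taking 20/20 vision as ~60 pixels per degree."""
    return 20.0 * ppd_20_20 / ppd

# HW3: 1280 px spread over fields of view of 120, 50, and 35 degrees
for fov in (120, 50, 35):
    ppd = pixels_per_degree(1280, fov)
    print(f"{fov:>3} deg: {ppd:5.1f} px/deg  ~20/{snellen_denominator(ppd):.0f}")

# HW4 wide camera: 2896 px over 120 degrees
ppd_hw4 = pixels_per_degree(2896, 120)
print(f"HW4 120 deg: ~20/{snellen_denominator(ppd_hw4):.0f}")
```

This reproduces the ~20/120, ~20/50, and ~20/30 figures for HW3, and ~20/50 for the HW4 wide camera, up to rounding.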
My car just has really really good highway ADAS. Which is all that most people need.
Ford does not suddenly have several million cars with 8-9 cameras to tap into for training data, nor does it have the infrastructure/talent to train models with the data it may get. I think you are underselling the Tesla moat.
It's the same reason why there are only 3-4 "frontier" AI labs, and the rest are just playing catchup, despite a lot of LLM improvements being pretty well researched and open in papers.
The key ingredient is data - the bitter lesson. It's not about better algorithms, but simply about algorithms that can process more data efficiently (e.g. transformers).
Tesla is one of the few companies that have a data flywheel - a fleet of (non-self-driving) cars collecting real-world data worldwide all the time at massive scale!
Now that is an insurmountable lead. (Along with good engineering, which, believe it or not, is still a competitive advantage - see e.g. German car companies unable to launch a single useful on-board computer, let alone a software-defined self-driving car.)
Google is one of the few companies that could compete, even without Waymo, because of YouTube.
The reality is that the Tesla product in this space hasn't advanced in a decade. They hit a wall. They have promised that the next big thing is just over the next hill, but like the Roadster, it hasn't quite arrived yet.
It’s getting better. Slowly, but surely.
The problem with Teslas is that, given the deficient sensor suite Musk has insisted on, they can't be programmed not to do things, like running through a picture of a road like Wile E. Coyote.
Given how quickly things are changing, I don’t think it’s useful to say iPhones are crap and always will be because you just got a 6 running super old iOS.
Things are not changing rapidly: this has been a failure mode on Teslas for almost a decade. It's been a problem at least since 2016, when Joshua Brown was decapitated by his Tesla running on AP. People keep saying "no, that was the last version. The new software/hardware version fixes it", but the Looney Tunes test demonstrates that Tesla's sensor stack continues to be fundamentally insufficient.
And this is why: https://insideevs.com/news/658439/elon-musk-overruled-tesla-...
Musk thinks he knows better than his engineers how to solve this. And no, I don't consider Musk to be an engineer, he's a showman.
This is literally the definition of progress.
Tesla is currently developing HW5 which has 10x the inference capability of HW3. More progress.
https://electrek.co/2025/05/23/tesla-full-self-driving-veers...
This is not progress; these cars should not be on the road, they are a menace to society. I would never put my life or my family's lives in the hands of Tesla hardware and software. But let me guess, it'll be fixed in HW5, sure. We all just have to be the beta testers. Meanwhile, Google actually delivered the robotaxi in San Francisco, and it’s amazing.
The sensor strategy really says it all. Why skimp on a minor part of the BOM that enables a use case with 50-100x the value?
The whole Tesla schtick is such a bizarre Lucy and the Football scenario, over and over again.
The point is that automated driving with a full suite of sensors can be an order of magnitude safer than using cameras alone. Tesla's current slow improvements do not extrapolate to an unlimited amount of improvement.
The bitter lesson isn’t fundamentally even about data: the key ingredient is computation (which does scale with data in modern deep learning). There’s even a whole theme of search outperforming learning - until learning methods changed to leverage computation!
I think in a lot of applications, richer data yields better scaling performance. So the bet for self driving is that the complexity of multiple sensor fusion gives a much better constant factor or exponent in the power law scaling of performance with data and compute
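A toy illustration of that last point, with entirely made-up numbers (not measured from any fleet): under a power-law error curve err(N) = a * N^(-b), a slightly better exponent from richer sensors eventually beats even a large constant-factor data advantage.

```python
def err(n: float, a: float, b: float) -> float:
    # Hypothetical power-law error curve: err(N) = a * N**(-b)
    return a * n ** (-b)

# Camera-only fleet with 10x the data vs. sensor fusion with a slightly
# steeper exponent (a and b are invented for illustration only).
for n in (1e5, 1e7, 1e9):
    cameras = err(10 * n, a=5.0, b=0.20)   # more data, shallower curve
    fusion = err(n, a=5.0, b=0.25)         # less data, steeper curve
    print(f"N={n:.0e}: cameras-only {cameras:.4f}  fusion {fusion:.4f}")
```

With these particular constants the steeper curve wins everywhere past N = 10^4 despite the 10x data deficit, which is the shape of the bet being described.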
https://electrek.co/2025/05/23/tesla-full-self-driving-veers...
This was literally 2 months ago, so miss me with the "you're wrong, the latest FSD solved it all this time bro, for real, trust me (the 100th time I've heard this)".
> Despite its name, Full Self-Driving (FSD) is still considered a level 2 driver assist system and is not fully self-driving.
How the hell has Tesla not been sued into the ground by now?
Musk equates machine vision to human vision, but that is an oversimplification; the best MV algorithms and methods are still miles away from human capabilities. FSD is very reliant on depth perception, which is much easier to solve with sensors other than stereo vision.
I also don't get your German car company statement; Mercedes has been a technology leader in the automotive space across a number of fronts for quite a while.
The key ingredient is indeed data, and also, depending on the stack, hardware. If "true" Level 5 self-driving is only possible with lidar and up-to-date HD maps, then it won't happen for some time. I foresee that unbounded "full" self-driving will either never happen or happen only under severe boundary conditions.
These kinds of "world models" that understand physics well enough to predict the short-term future (like humans can) are crucial for any kind of real-world AI (i.e. robotics, including self-driving), because they constitute what could be termed "physical common sense" (which humans have, and also animals, to some degree).
Is it enough for self-driving? No, you also need to understand road rules, communicate with humans (pedestrians and fellow drivers), etc. but it's a good, possibly necessary, step - it allows you to better handle many unpredictable (tail of distribution) situations:
https://www.reddit.com/r/SelfDrivingCars/comments/1g75ftb/wa...
https://www.youtube.com/watch?v=-tJH8hED11I
https://m.youtube.com/watch?v=MejbOFk7H6c&pp=ygUJb2sgZ28gY2F...
Is data enough? Maybe not; there's a lot of progress on RL now that can do wonders without even more data. Is data necessary? No evidence to the contrary, yet.
Every road, in every country on earth?
No, that is never going to happen.
* treating red lights like stop signs
* completely ignoring flash reds
* stopping at flashing yellows
* ignoring school zones
* driving 10mph under the limit
* driving 20mph over the limit
* ignoring turn only lanes resulting in being dumped into oncoming traffic until it leisurely moves back into the correct lane
* waiting until the last minute to get into the correct lane to make a turn/take an exit
* ignoring signs indicating a lane ends and having to force its way into traffic
* ignoring signs a lane ends and changing into the lane only to have to change back immediately
* pulling out in front of cars, especially at night or other camera obscuring events
* ignoring cars merging onto the highway and blocking them until I take over and get out of the way
I treat it like cruise control. And based on the recent accident during which FSD decided to run itself off a perfectly straight, open, clear road and into a tree, I keep my hand on the wheel.
Even in a geo-fenced environment, FSD is NOT going to work for unsupervised robo-taxi applications. Musk needs to get out of the way and let them stick LIDARs on these cars.
All that being said, it has also handled really complex driving in and around DC/Baltimore, including some tricky merges and transitions that are hard to track as a human driver. Its knowledge of all of the surrounding vehicles allows it to make these maneuvers with far more confidence than I would. It also frees me up to track the bigger picture of traffic while not having to manage my speed and lane position. It's like driving with a friend who's not necessarily the best driver, but is basically competent. Let's just hope it doesn't decide to drive me into a tree for no particular reason.
And the average human driver is terrible at driving
There's tons of people on the road, so there's plenty of quite bad drivers you'll encounter every day.
But driving is relatively easy. Saying that the average driver is terrible seems like a bit of a stretch.
I'll give you that there's millions of terrible drivers. That doesn't make the average terrible, though.
The fact that we're all not constantly getting smashed into means that on the whole drivers are competent.
On a typical Texas highway, not only is everyone driving 10 over, the typical following distance is about one car length. At 80 miles an hour. Lord have mercy if anything happens on the road.
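A quick sanity check on that following distance (assuming ~15 feet per car length and a ~1.5 s human reaction time, both round numbers):

```python
def time_gap_seconds(gap_feet: float, speed_mph: float) -> float:
    """Time gap to the car ahead for a given following distance."""
    feet_per_second = speed_mph * 5280 / 3600
    return gap_feet / feet_per_second

gap = time_gap_seconds(gap_feet=15, speed_mph=80)  # one car length at 80 mph
print(f"time gap: {gap:.2f} s")

# Distance covered during a ~1.5 s reaction time at 80 mph:
reaction_distance = 1.5 * 80 * 5280 / 3600
print(f"distance covered in 1.5 s: {reaction_distance:.0f} ft")
```

A one-car-length gap at 80 mph is roughly an eighth of a second, against the ~176 feet a driver covers just while reacting.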
Elon is a great promoter and does have visionary ideas and has often been able to execute them when he is paying attention.
He is also running a harem to get to 100 or more children, seems racist for many reasons, over-promises (but this comes with being a promoter), and has biased X/Twitter to favor whatever ideas he currently likes.
Cybertruck and X are what you get when Elon Musk has his limiters off. SpaceX and Tesla are what you get when a competent team knows how to manage him and his ego.
Also, the ideas are not completely his, mostly because this isn’t novel basic research. This is applied engineering, so one needs to grab onto ideas that are ready for implementation at scale. That means they are well-known ideas.
I don’t disagree that Elon messes things up a lot, especially recently. DOGE cuts are being undone via court rulings nearly as fast as he made them.
Jeff Bezos does get credit for Amazon's work, like AWS and the Kindle, among others, because he allocated the resources and managed the company in that direction.
That's not true. Lane-departure detection mostly uses radar and cameras, adaptive cruise control usually uses radar and sometimes cameras, and emergency brake assist is often just a brake-pedal sensor. Front-collision avoidance uses lidar in some cars from the last two years.
Lidar might very much be needed for full self-driving, but it is not yet used in many cars on the road today.
The writer does not have all his facts straight.
I assume they're capturing more than just images, does anyone know for a fact?
I'm not sure what they're doing now, but I assume they're fingerprinting everything by MAC addresses, SSIDs, and other identifiers. For their sake, I hope they have stopped intercepting unencrypted Wi-Fi traffic, which is something Google Street View cars were proven in court to have done.
https://en.wikipedia.org/wiki/Joffe_v._Google,_Inc%2E
> Joffe v. Google, Inc. is a federal lawsuit between Ben Joffe and Google, Inc. Joffe claimed that Google broke one of the Wiretap Act segments when they intruded on the seemingly "public" wireless networks of private homes through their Street View application. Although Google tried to appeal their case multiple times, the courts favored Joffe's argument. Ultimately the Supreme Court declined to take the case, affirming the decision by the United States Court of Appeals for the Ninth Circuit that the Wiretap Act covers the interception of unencrypted Wi-Fi communications.
My guess is they have a list of every MAC address of every device they can find, geolocated. And then they match that to data from all those apps that ask to discover devices on my local network. Now they know how old my tv and lightbulbs are, etc etc
[0] https://en.wikipedia.org/wiki/Joffe_v._Google,_Inc.
Wikipedia urls ending in punctuation are unreliably broken or not depending on caching and the platform, so if you put a # on the end to escape it, it fixes it, without having to worry about percent encoding.
Per the HN formatting documentation: https://news.ycombinator.com/formatdoc
<https://en.wikipedia.org/wiki/Joffe_v._Google,_Inc.>
https://en.wikipedia.org/wiki/Man_or_Astro-man%3F is another story -- since "?" is a reserved character in URLs for CGI queries. Enter the question mark anyway, and the article comes up! Why? There's a redirect without it!
For Wikipedia's gory details on technical restrictions for article titles: (note that HN properly parses this article title ending in a right-paren)
https://en.wikipedia.org/wiki/Wikipedia:Naming_conventions_(...
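The behavior of "?" and of a trailing "#" can be checked directly with Python's urllib (a quick sketch, not specific to Wikipedia):

```python
from urllib.parse import quote, urlsplit

# "?" is reserved (it introduces the query string), so it must be
# percent-encoded to appear literally in a URL path:
print(quote("Man_or_Astro-man?"))  # Man_or_Astro-man%3F

# A trailing "#" is not an escape; it's just an empty fragment:
parts = urlsplit("https://en.wikipedia.org/wiki/Joffe_v._Google,_Inc.#")
print(repr(parts.fragment))  # ''
```

The trailing period stays in the path either way, since "." is an unreserved character; the "#" trick only changes what some link parsers treat as the end of the URL.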
That’s the first problem.
> It would be simply bizarre for any browser or web server to choke on a perfectly legal dot to end a URL.
I agree? I never said anything like this. My original comment was:
> Wikipedia urls ending in punctuation are unreliably broken or not depending on caching and the platform, so if you put a # on the end to escape it, it fixes it, without having to worry about percent encoding.
I mentioned the platform specifically, which in this context could be either the server context or client context. You mentioned server/client context, as in what HN serves the user or vice versa. I mentioned that and client context inclusively. If you’re correcting me, assume I need you to show my error.
> Enter the question mark anyway, and the article comes up! Why? There's a redirect without it!
That’s the second problem - the site in question as typed - there is no redirect from https://en.wikipedia.org/wiki/Joffe_v._Google,_Inc to the version of the article that has a period, which is https://en.wikipedia.org/wiki/Joffe_v._Google,_Inc%2E if you use percent encoding.
I’m not sure what the point of your comment is.
Never trust Elon's statements.
[1] https://qz.com/tesla-robotaxi-lawsuit-elon-musk-lawyers-clai...
[2] https://electrek.co/2024/10/02/elon-musk-celebrates-winning-...
https://m.youtube.com/watch?v=LpxT9TLGoLI&pp=ygUScGVuZWxvcGU...
https://www.sae.org/blog/sae-j3016-update