- The scary effect music shows it's intended as a hit piece.
- The constant intermixing of Autopilot and Full Self Driving, two very different things.
- Implying that driving just based on visual input is unsafe, when that is how all humans drive.
Of course, that the video is an unserious hit piece doesn't mean these Tesla features are safe. But I need something more serious to be convinced.
opello · 20h ago
> Implying that driving just based on visual input is unsafe, when that is how all humans drive.
Except that's not at all how humans drive. Sure, there's visual input, but human vision is largely based on expectations. You see what you expect and ignore a lot of things. The predictive engine of the brain does a lot more than evaluate the present environment. This is both good and bad insofar as safety is concerned.
BurningFrog · 19h ago
Of course Teslas also have software interpreting the video streams and maintaining a model of the surroundings at all times.
Not really clear what the argument is meant to be here...
opello · 18h ago
The point is that the automation system is better in some ways and far, far worse in others. And of course to also highlight that reducing the human in the loop to visual sensors is unreasonably reductive.
An interesting question might be if the automation system can be evaluated against the human perceptual system, or amalgamation of systems. This seems like an exceedingly difficult premise to evaluate though, given the varied and dynamic nature of the real-world driving environment.
foobarbecue · 11h ago
The point being made above was that Teslas lack radar and lidar, but so do humans.
Your argument that human processing is superior to Tesla processing seems orthogonal (that's about how the data is handled, not about what input data is available).
opello · 1h ago
The original claim was essentially that humans drive solely based on visual input. That perspective ignores how much past experience and expectation affect perception and decision making.
I think I would concede the argument that it's "just processing" once someone has recreated the processing in an automation system. That seems unlikely. Or when an automation system outperforms a human in every situation one might encounter in a chaotic driving environment.
jmye · 18h ago
That the car’s model sucks? That saying “yeah, but that’s how people, who are notoriously, horribly unsafe drivers, do it” is meaningless, cheerleading nonsense?
Should I go on? This is trivial stuff, man.
xboxnolifes · 10h ago
Well, that wasn't the claim made.
lttlrck · 20h ago
I am not convinced that a camera fixed in place is equivalent to eyeballs with 6 degrees of freedom. That freedom significantly boosts the available parallax for depth and distance perception, something fixed-in-place cameras lack.
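The parallax point above can be sketched with a back-of-envelope calculation. All numbers here are illustrative assumptions (an idealized stereo pair with a ~1000-pixel focal length), not specs of any real camera or of human vision; the point is only how depth resolution scales with baseline:

```python
def stereo_depth_resolution(baseline_m, focal_px, depth_m):
    """Depth uncertainty (m) from a one-pixel disparity error.

    For an ideal stereo pair, disparity d = B*f/Z, so a +/-1 px
    error in d gives a depth error of roughly dZ = Z^2 / (B*f).
    """
    return depth_m ** 2 / (baseline_m * focal_px)

# Illustrative numbers only: object at 50 m, focal length ~1000 px.
fixed_pair = stereo_depth_resolution(baseline_m=0.1, focal_px=1000, depth_m=50)
moved_view = stereo_depth_resolution(baseline_m=0.5, focal_px=1000, depth_m=50)

print(f"10 cm baseline: ±{fixed_pair:.1f} m at 50 m")  # coarse
print(f"50 cm baseline: ±{moved_view:.1f} m at 50 m")  # 5x finer
```

Under this toy model, letting the viewpoint move (as a head on a swivel does) multiplies the effective baseline and tightens depth estimates proportionally.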
woodrowbarlow · 20h ago
nor that current algorithms come close to matching our biological ability to infer 3-dimensional information from the sensor data
BurningFrog · 19h ago
Teslas have 8 cameras monitoring the full 360° surroundings. This enables far better world modeling than our two eyes 3 inches apart.
ahahahahah · 14h ago
Tesla can't even figure out how to detect rain, they very clearly do not have better world modeling than our two eyes. A fucking two year old child can detect rain better than them.
Veserv · 19h ago
> Implying that driving just based on visual input is unsafe, when that is how all humans drive.
Implying that driving just based on single-eye 20/120 vision [1] is safe, when that is in fact illegal.
[1] https://news.ycombinator.com/item?id=44129317
> Pretty wild to see several comments condemning their imaginary single camera system. Where do these lies come from? Here is a diagram: https://www.researchgate.net/figure/Locations-of-cameras-on-...
You appear to have a total misunderstanding of what I said and then went on to engage in a bad-faith accusation.
First of all, it is convenient that you ignored the part about 20/120 vision which is below the legal minimum requirements for driving and thus no human is legally allowed to drive with such poor vision.
As for your misunderstanding, a human has two eyes placed in a binocular orientation on a swivel mount allowing for binocular depth perception when the head is pointed in a direction. That is how humans drive.
Except for the forward direction, the Tesla cameras have largely non-overlapping fields of view. As they are not mounted on a swivel, they only have single-eye vision in any non-forward direction. In the forward direction, the Tesla cameras do have overlapping fields of view, but are not only too close to support binocular perception, they also have different focal distances preventing binocular perception. As such, Teslas only have single-eye vision in the forward direction as well. So, they only have single-eye perception for objects.
Adding on their visual acuity far below minimum requirements for legal human driving, it is safe to say that their sensors are, in fact, not at all how humans drive. Any manufacturer who would claim as such is making false claims willfully and knowingly.
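For a sense of scale on the acuity claim, here is a hedged conversion from camera resolution to Snellen-style notation. The sensor width and field of view below are invented for illustration (not published Tesla specs), 20/20 is taken as resolving 1 arcminute, and one pixel is generously treated as the minimum resolvable angle:

```python
def snellen_from_camera(h_pixels, fov_deg):
    """Approximate Snellen denominator for a camera.

    20/20 vision resolves ~1 arcminute, so a sensor that resolves
    N arcminutes per pixel behaves roughly like 20/(20*N) acuity.
    """
    arcmin_per_pixel = (fov_deg * 60) / h_pixels
    return 20 * arcmin_per_pixel

# Hypothetical: a 1280-px-wide sensor spread over a 90° field of view.
denom = snellen_from_camera(h_pixels=1280, fov_deg=90)
print(f"≈ 20/{denom:.0f}")  # ≈ 20/84
```

The general pattern: wide-angle coverage and fine angular resolution trade off against each other for a fixed pixel budget, which is why a wide-FOV camera can score far below 20/20 on this crude scale.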
OsrsNeedsf2P · 21h ago
I was waiting for the video to share data on the safety of Tesla self driving versus human drivers, but... No. I guess those numbers wouldn't have supported the argument.
FireBeyond · 20h ago
Oh, you mean the data (that Tesla chooses to release, anyway) that compares the subset of "driving only on roads where FSD is available and active, and where it has not turned itself off because of weather, road, traffic, or any other conditions" versus "all drivers, all vehicles, all roads, all weather, all traffic, all conditions"?
Or the accident stats that don't count as an accident any collision without airbag deployment, regardless of injuries?
A great mystery is how Tesla is avoiding culpability and liability for gross lying and mortal harms caused by the product, and instead being rewarded with fantastic amounts of money!
Personal computer appliance users truly blame themselves for the failures of the devices, no matter the degree of wreckage and injury caused by bad design, while cheering the men (unbearable a*holes) who foist malfunctional and dangerous tech upon them.
The captains of this industry are notorious for refusing to take responsibility for their mistakes and actively planning to deploy tech they themselves claim is hazardous, while being continually cheered by investors hungry to make a killing.
Tesla is a case study for the world about the hazards of California Ideology libertarianism and the precedence of greed over personal responsibility and justice.
Since Ronald Reagan, personal responsibility has never been a libertarian (Republican) trait. It's always "Oops I did it again!" and "I forgot!"
No surprise the "trolley problem" is the signature thought experiment for the industry as its technocrats constantly hunt for ways to escape responsibility and seek unearned profits.
With the woeful performance of Musk's cars and robots, the DOGE fiasco, the federally scheduled drug habit, goonerism, and inability to maintain personal relationships, Musk's plans for a Mars adventure are psychotic.
But what fun to watch!
dragonwriter · 21h ago
> A great mystery is how Tesla is avoiding culpability and liability
The civil justice system is slow (and COVID made the whole justice system even slower for quite a while), but aside from that, maybe they're not? A quarter-billion-dollar verdict for a 2019 crash was recently returned against them: https://www.reuters.com/legal/litigation/tesla-rejected-60-m...
Yep, it's quite incredible: they released this faked video; quite a lot of people died in various accidents involving Autopilot / FSD after clearly falling for the marketing; then the Tesla guy who made it admitted it was faked, and even crashed at some point while filming; and now, instead of being in jail, he is ... the head of Tesla's Autopilot program.
jojobas · 21h ago
1) Nikola didn't kill anyone
2) Tesla's fraud eclipses Nikola's few billion amateur hour.
mgh2 · 21h ago
To be fair, it was “staged”, not completely fake; but yes, the false advertising killed many
rsynnott · 15h ago
That's really stretching the definition of 'staged', tbh.
> Just want to be absolutely clear that everyone’s top priority is achieving an amazing Autopilot demo drive. Since this is a demo, it is fine to hardcode some of it, since we will backfill with production code later in an OTA update. I will be telling the world that this is what the car will be able to do, not that it can do this upon receipt
You could maybe call it staged if they were, say, very selective about the conditions and course, but within those constraints it was actually doing what they said it did. That doesn't seem to have been the case.
notjoemama · 21h ago
I liked this YouTube comment: "Never before have I seen a CEO get away with straight up lying to investors so often." So... like, you're kinda young, then, right? Basically you're unaware of private equity, the 2008 housing crisis, and Occupy Wall Street. I agree what he's doing is wrong, really wrong. I'm just sick of the obvious partisan schadenfreude hyperbole. Was an article about Chorus on Hacker News? I'll search; maybe I missed it by one day. Such is social media.
You won't believe what happens next! (Number 3 will blow you away)
calmbonsai · 21h ago
Don't get me wrong, I love Tesla design. I just never understood why anyone would deliberately inject additional possible critical safety faults into their driving experience.
I was glad when they started charging for it, 'cuz it just meant fewer dangerous Teslas on the road.
I have no doubt we'll get to full autopilot...eventually, and we've "gotten there" already with adaptive cruise control, BUT in the interim, if you can't pay full attention while driving you shouldn't be driving.
typewithrhythm · 21h ago
There is this big hump of safety, where adding assist features causes (some portion of) average drivers to become inattentive, and decreases overall safety.
Things start to improve again once the features get even more capable.
IMO Tesla FSD is well past the hump compared to most current cars with ACC + lateral assist.
senordevnyc · 21h ago
We have gotten there. Waymo does hundreds of thousands of paid rides in US cities every week, with no one in the driver's seat, and an essentially flawless accident record. The future is here, we just need to roll it out to everyone.
mxschumacher · 13h ago
I'd really love to see the technology used more in long-haul trucking. You could use highways in the middle of the night, when capacity utilization is low.
BurningFrog · 8h ago
Human truck drivers can only work 11 hours a day.
Self driving trucks could double that, which would mean much better use of the trucks, faster deliveries, and an "epidemic" of lower costs across the economy.
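The claimed gain is easy to sketch with hypothetical numbers. The route length and speed below are made up for illustration; only the 11-hour daily driving cap comes from the comment above:

```python
def delivery_days(route_miles, mph, driving_hours_per_day):
    """Calendar days to cover a route under a daily driving-hours cap."""
    miles_per_day = mph * driving_hours_per_day
    return route_miles / miles_per_day

# Hypothetical 2,200-mile run at an average 55 mph:
human = delivery_days(2200, 55, 11)  # capped at 11 driving hours/day
auto = delivery_days(2200, 55, 22)   # near-continuous operation
print(f"human-driven: {human:.1f} days, autonomous: {auto:.1f} days")
```

Doubling the daily driving hours halves the calendar time for the same route, which is where the "better use of the trucks" claim comes from; it says nothing about safety, which is the separate question in this thread.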
RajT88 · 20h ago
Cannot wait until it comes outside of metros. My wife and I would love a self-driving carriage every now and again to and from our fav bar a mile down the road.
fragmede · 20h ago
Should, sure, but people are gonna people, and I'd rather this not-so-hypothetical driver be using the latest FSD rather than not paying attention and hoping Autopilot is up to the task.
bjrosen23 · 10h ago
Anyone who claims that FSD doesn't work is flat out lying. I've used FSD 13 for almost all of my driving since it came out last Dec. I've used it in cities like Boston, and I've used it on dirt roads in Vermont and Maine. I've used it on highways and on mountain roads. It's worked in rain, snow, and fog; I drove up Cadillac Mountain in Maine in the fog. It's stopped for deer, twice, and it's stopped for an e-scooter rider who shot out into the street without looking. I would have hit him had I been driving, but FSD has faster reactions than a human and more and better sensors, so it stopped. It does make mistakes, but none have been dangerous. There are some anomalies; for example, it swerved around a squished animal but not a pothole.
At this point I feel much safer with FSD driving than driving by hand. Humans only have two eyes; FSD has cameras on the fenders, the B pillars, and the rear, as well as the cameras on the windshield, and it's looking at all of them all the time, which is impossible for a human. The cameras also see better at night than human eyes.
thejazzman · 9h ago
if it's so flawlessly reliable why doesn't tesla release their disengagement data to illustrate it and get approved as level 4?
they literally never have, and only offer "10x better" claims via musk. it's 10x of an unknown number.
it's blatantly obvious that it sometimes works. that doesn't mean it reliably always works and never causes a serious catastrophic failure.
i've never had it work end-to-end where i live. i always have to intercept it. and i own 3 teslas. and i like my teslas. but FSD is not even close to the perfection you're decreeing.
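The "10x of an unknown number" objection can be made concrete with invented figures. Both baseline rates below are hypothetical, chosen only to show why a relative claim alone pins down nothing:

```python
def implied_rate(claimed_multiplier, baseline_rate):
    """Absolute rate implied by an 'N x better than baseline' claim.

    The claim is only informative if the baseline is also disclosed.
    """
    return baseline_rate / claimed_multiplier

# The same "10x better" claim against two hypothetical baselines
# (incidents per million miles):
optimistic = implied_rate(10, baseline_rate=2.0)    # -> 0.2
pessimistic = implied_rate(10, baseline_rate=50.0)  # -> 5.0
print(optimistic, pessimistic)  # identical claim, 25x apart in absolute risk
```

Without disengagement or crash data per mile under stated conditions, "10x better" is consistent with wildly different absolute safety levels.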
bjrosen23 · 10h ago
Anyone who claims that FSD doesn't work is flat out lying. I've used FSD 13 for almost all of my driving since it came out last Dec. I've used in in cities like Boston and I've used it on dirt roads in Vermont and Maine. I've used it on highways and on mountain roads. It's worked in the rain, snow and fog. I drive up Cadillac Mountain Maine in the fog. It's stopped for deer, twice, it's stopped for a e-scooter rider who shot out into the street without looking, I would have hit him had I been driving but FSD has faster reactions than a human and more and better sensors so it stopped. It does make mistakes but none have been dangerous. There are some anomalies, for example it swerved around a squished animal but not a pothole.
At this point I feel much safer with FSD driving than by hand driving. Humans only have two eyes, FSD has cameras on the fenders, the B pillars and the rear as well as the cameras on the windshield and it's looking at all of them all the time, that's impossible for a human. The cameras also see better at night than human eyes.
ericHosick · 7h ago
> Anyone who claims that FSD doesn't work is flat out lying.
i've had "fsd" for years and basically never use it now. i just don't trust it.
anytime there is a new version update, i do try to have it drive from the house to the market (about 3 miles: two rights at stop signs, two rights and 1 left at stop lights) and there has never been a single time where i didn't have to take over at least once.
and maybe the problem is that i have had "fsd" while it was going through development. the trust is low from the many times it has tried to kill me. so, whenever it is on, there is nothing but stress. and so i'm more apt than not to take over when i see anything even minutely out of the ordinary.
thejazzman · 9h ago
if you repost this comment 3x does it become true?
blooalien · 4h ago
> "if you repost this comment 3x does it become true?"
That's the new "science" (or religion / cult?) surrounding lying these days isn't it? You just have to repeat a lie enough times and declare all the actual facts as "fake news" or "hoax" and the lie becomes truth, and the truth a lie.
mxschumacher · 15h ago
on what grounds is this submission flagged, @dang?
tomhow · 13h ago
Users flagged it. We can only guess why users flag things. We've turned off the flags now.
blooalien · 4h ago
> "We can only guess why users flag things."
The actual reasons behind it are surely as varied as the users themselves... ;)
Adding a "State your reasoning for this flag:" prompt to the flagging process might produce an interesting little dataset for the "data science folks" out there.
mxschumacher · 12h ago
thanks
Rakshith · 21h ago
Clearly the "Union" isn't playing into the "Hate Elon" campaign.