Hacker News

It is important to note that this is with safety drivers. A professional driver plus their most advanced "Robotaxi" FSD version, under test with careful scrutiny, averages 57,000 miles per minor collision, 4x worse than the average non-professional driver alone.

Yet it is quite odd that Tesla also reports that untrained customers using old versions of FSD on outdated hardware average 1,500,000 miles per minor collision [1], roughly a 26-fold difference, when there are no penalties for incorrect reporting.

[1] https://www.tesla.com/fsd/safety
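A quick sanity check on the gap between the two figures quoted above (57,000 vs. 1,500,000 miles per minor collision); this is just arithmetic on those numbers, no new data:

```python
# Figures quoted above, in miles per minor collision.
robotaxi_supervised = 57_000        # Robotaxi under test, with safety drivers
consumer_fsd_reported = 1_500_000   # Tesla's published consumer FSD figure

ratio = consumer_fsd_reported / robotaxi_supervised
pct_difference = (consumer_fsd_reported - robotaxi_supervised) / robotaxi_supervised * 100

print(f"{ratio:.1f}x")           # 26.3x
print(f"{pct_difference:.0f}%")  # 2532%
```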




Robotaxi supervision is just an emergency brake switch.

Consumer supervision is having all the controls of the car right there in front of you. And if you are doing it right, you have your hands on the wheel and your foot on the pedals, ready to jump in.


Nah the relevant factor, which has been obvious to anyone who cared to think about this stuff honestly for years, is that Tesla's safety claims on FSD are meaningless.

Accident rates under traditional cruise control are also far below average.

Why?

Because people use cruise control (and FSD) under specific conditions. Namely: good ones! Ones where accidents already happen at a way below-average rate!

Tesla has always been able to publish the data required to really understand performance, which would be normalized by age of vehicle and driving conditions. But they have not, for reasons that have always been obvious but are absolutely undeniable now.


Yup, after getting a Tesla with a free FSD trial period, it was obviously a death trap if used in any kind of slightly complex situation (like the highway on-ramp that was under construction for a year).

At least once every few days, it would do something extremely dangerous, like try to drive straight into a concrete median at 40mph.

The way I describe it is: yeah, it’s self-driving and doesn’t quite require the full attention of normal driving, but it still requires the same amount of attention as supervising a teenager in the first week of their learning permit.

If Tesla were serious about FSD safety claims, they would release data on driver interventions per mile.

Also, the language when turning on FSD in the vehicle is just insulting: the whole spiel about how things would be different if it were an iPhone app, but shucks, the lawyers are just so silly and conservative that we have to call it a beta.


> the same amount of attention as supervising a teenager in the first week of their learning permit.

Yikes! I’d be a nervous wreck after just a couple of days.


You learn when it’s good and bad. It definitely has a “personality”. It is awesome in certain situations, like bumper to bumper traffic.

I kept it for a couple months after the trial, but canceled because the situations it’s good at aren’t the situations I usually face when driving.


The most basic adaptive cruise control is "awesome" in bumper to bumper traffic.

For some reason many manufacturers intentionally disable it at low speeds or won't let it restart from a stop. Super annoying and entirely unnecessary.

I understand not restarting from a stop unprompted. There are simply too many situations on the road where automatically moving from a stop may be undesirable if the driver isn't paying attention: stop signs, four-way stops, yield situations, probably more. Safer overall to make it an intentional action by the driver.

I kind of get it, but in reality it sucks in bumper-to-bumper traffic. And why cut off at 25mph? I can't even use it in a camera zone to maintain snail speed below the camera threshold.

> It definitely has a “personality”.

You mean it has obvious bugs.


Also, if it actually worked, Tesla's marketing would literally never shut up about it because they have a working fully self-driving car. That would be the first, second, and third bullet point in all their marketing, and they would be right to do that. It's an incredible feature differentiator from all their competition.

The only problem is, it doesn't work.


If it actually worked, Tesla would include an indemnity clause for all accidents while it’s active.

More importantly, we would have independent researchers looking at the data and commenting. I know this data exists, but I've never seen anyone who has the data and ability to understand it who doesn't also have a conflict of interest.

> Robotaxi supervision is just an emergency brake switch

That was the case when they first started the trial in Austin. The employee in the car was a safety monitor sitting in the front passenger seat with an emergency brake button.

Later, when they started expanding the service area to include highways, they moved them to the driver's seat on those trips so that they could completely take over if something unsafe was happening.


Interesting.

I wonder if these newly-reported crashes happened with the employee positioned in e-brake or in co-pilot mode.


Humans are extremely bad at vigilance when nothing interesting is happening. Lookout is a life-critical role on the railways that you might be assigned as a track worker: your whole job is to watch for trains and alert your co-workers when one is coming, so they can retreat to a safe position while it passes. That seems easy, and these are typically close friends (you work with them every day, rotating roles, and you'd certainly not want them injured or killed), but it turns out it's basically impossible to stay vigilant for more than an hour or two, tops. Having insisted that you aren't tired, since you're just standing somewhere watching while your mates work hard on the track, you nevertheless lose focus and, oops, a train passes without your conscious awareness and your colleague dies or suffers a life-changing injury.

This is awkward for any technologies where we've made it boring but not safe and so the humans must still supervise but we've made their job harder. Waymo understood that this is not a place worth getting to.


> Humans are extremely bad at vigilance when nothing interesting is happening

It would be interesting to try training a non-human animal for this. It would probably not work for learning things like rules of the road, but it might work for collision avoidance.

I know of at least two relevant experiments that suggest it might be possible.

1. During WWII, when the US was willing to consider nearly anything that might win the war (short of the totally insane occult or crackpot theories that the Nazis wasted money on), they sponsored a project by B.F. Skinner to investigate using pigeons to guide bombs.

Skinner was able to train pigeons to look at an image projected on a screen that showed multiple boats, a mix of US and Japanese boats, and move their heads in a harness that would steer a falling bomb to a Japanese boat. They never actually deployed this, but they had tests in a simulator and the pigeons did a great job.

2. I can't give a cite for this one, because I read it in a textbook over 40 years ago. A researcher trained pigeons to watch parts coming off an assembly line and peck a switch if a part had any visible defects.

There were a couple of really clever things about this. To train an animal to do this, you have to initially reward it frequently when it is right. Once it has learned the desired behavior, you can start rewarding it less frequently and it will maintain the behavior. You do have to keep occasionally rewarding correct behavior, though, to keep the behavior from eventually going away.

The way they handled this ongoing occasional reward was to use groups of 3 pigeons. The part-rejection system was modified to go with a majority vote. Whenever the vote was not unanimous, the 2 pigeons in the majority got a reward. This happened frequently enough to keep the behavior from going extinct in the birds, but infrequently enough to avoid fat pigeons.

Once they had 3 pigeons trained by a human deciding on the rewards during the initial training when you need frequent rewards and got them so they were working great on the line, they could use those 3 to train more. They did that by adding the trainee as a 4th member of the group. The trainee's vote was not counted, but if the other 3 were unanimous and the trainee agreed the trainee was rewarded. This produced the frequent rewards needed to establish the behavior.
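The voting-and-reward scheme described above can be sketched as a small simulation. Everything here is an illustrative assumption (the per-bird accuracy, the function names); the original study's parameters aren't given in the comment:

```python
import random

def bird_vote(defective: bool, accuracy: float = 0.99) -> bool:
    """One pigeon's vote: True = 'reject this part'. Correct with probability `accuracy`."""
    return defective if random.random() < accuracy else not defective

def inspect_part(defective: bool) -> tuple[bool, list[int]]:
    """Majority vote of 3 trained pigeons.

    Returns (rejected, rewarded_birds). Rewards go to the 2 birds in the
    majority only when the vote is NOT unanimous: frequent enough to sustain
    the behavior, rare enough to avoid fat pigeons.
    """
    votes = [bird_vote(defective) for _ in range(3)]
    rejected = sum(votes) >= 2
    if len(set(votes)) == 1:      # unanimous: no reward this round
        return rejected, []
    return rejected, [i for i, v in enumerate(votes) if v == rejected]

def trainee_rewarded(defective: bool, trainee_vote: bool) -> bool:
    """A 4th trainee is rewarded when the 3 veterans are unanimous and the
    trainee agrees; the trainee's vote never counts toward the decision."""
    votes = [bird_vote(defective) for _ in range(3)]
    return len(set(votes)) == 1 and trainee_vote == votes[0]
```

With 99%-accurate birds, the 3-bird majority is wrong only about 0.03% of the time (3 × 0.01² × 0.99 + 0.01³), which is the sense in which the panel could beat a fatiguing human inspector.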

The groups of 3 pigeons could do this all day with an error rate orders of magnitude lower than that of the human part inspector. The human was good at the start of a shift, but rapidly got worse as the shift went on.

Ultimately the company that had let the researchers try this decided not to use it in production. They felt that no matter how much better the pigeons did, and no matter how publicly they documented that fact, ads from competitors about the company using birds to inspect its parts would cost too many sales.


See also:

>Jack (died 1890) was a Chacma baboon who was an assistant to a disabled railway signalman, James Wide, in South Africa.

>Jack was the pet and assistant of double leg amputee signalman James Wide, who worked for the Cape Town–Port Elizabeth Railway service. James "Jumper" Wide had been known for jumping between railcars until an accident where he fell and lost both of his legs below the knee. To assist in performing his duties, Wide purchased Jack in 1881, and trained him to push his wheelchair and to operate the railways signals under supervision.

>An official investigation was initiated after someone reported that a baboon was observed changing railway signals at Uitenhage near Port Elizabeth.

>After initial skepticism, the railway decided to officially employ Jack once his job competency was verified. He was paid twenty cents a day, and half a bottle of beer each week. It is widely reported that in his nine years of employment with the railway company, Jack never made a single mistake.

https://en.wikipedia.org/wiki/Jack_(baboon)


Maybe human part inspectors should get treats for inspecting parts correctly

No info on that.

13781-13647 Street, Other fixed object, No injuries, Proceeding Straight, 17mph, contact area: bottom

13781-13648 Street, Bus, No injuries, Stopped, 0mph, contact area: left, front

13781-13646 Parking lot, Other fixed object, No injuries, Backing, 2mph, contact area: bottom

13781-13645 Parking lot, Pole / Tree, No injuries, Backing, 1mph, contact area: rear right

13781-13644 Street, Heavy truck, No injuries, Proceeding Straight (Heavy truck: parked), 4mph, contact area: left


Copy. Thanks for digging that up.

> And if you are doing it right, you have your hands on wheel and foot on the pedals ready to jump in.

Seems like there's zero benefit to this, then. Being required to pay attention, but having nothing (i.e., driving) to keep me engaged, seems like the worst of both worlds. Your attention would constantly drift.


Similarly, Tesla using teleoperators for their Optimus robots is a safety fake for robots that are not autonomous either. They are constantly trying to cover their inability to make anything autonomous. Cheap lidars or radar would likely have prevented those "hitting stationary objects" accidents. Just because the Führer says it does not make it so.

They had supervisors in the passenger seat for a while but moved them back to the driver's seat, then moved some out to chase cars. In the ones where they were in the driver's seat, they were able to take over the wheel, weren't they?

So the trillion dollar company deployed 1 ton robots in unconstrained public spaces with inadequate safety data and chose to use objectively dangerous and unsafe testing protocols that objectively heightened risk to the public to meet marketing goals? That is worse and would generally be considered utterly depraved self-enrichment.

We also dump chemicals into the water, air, and soil that aren't great for us.

Externalized risks and costs are essential for many businesses to operate. It isn't great, but it's true. Our lives are possible because of externalized costs.


The EU has one good regulation: if safety can be engineered in, it must be.

OSHA also has regulations to mitigate risk: lockout/tagout.

Both mitigate external risks. Good regulation mitigates known risk factors; unknown ones take time to learn about.

The Apollo program learned this when the Apollo 1 hatch, bolted shut, and the pure-oxygen environment led to the crew burning alive inside. Safety first became the base of decision making.


Yes, those are bad as well. Are you seriously taking as your moral foundation that we need to poison the water supply to ensure executives get their bonuses? Is that somehow not utterly depraved self-enrichment?

Sorry, that didn't translate well. I'm not in favor of it. I'm simply saying that many many many companies operate under the condition that external problems are a natural part of doing business.

To be clear, I'm not in support of dumping chemicals into the world, just calling out that experimenting on the public with large robotic cars is perfectly in line with American business practice.


That just makes the Robotaxi even more irresponsible.

I think they were so used to defending Autopilot that they got confused.

I would guess the FSD numbers get help from drivers taking over during difficult situations, and from usage weighted towards highway miles?

not to mention turning off FSD milliseconds before impact

There's no evidence for that. And it would make no sense for Tesla to do it.

Politics is really a mind killer. Just think for a second. Who can be fooled by this "turning off FSD milliseconds before impact"?


To be fair to Tesla and other self-driving taxis, urban and shorter journeys usually have worse collision rates than the average journey, and FSD is likely to be owners driving themselves to work etc.

Great, we can use Tesla's own numbers once again by selecting non-highway. The average human goes 178,000 non-highway miles per minor collision, which puts "Professional driver + most advanced 'Robotaxi' FSD version under test with careful scrutiny" at 3x worse than the average non-professional driver alone.

They advertise and market a safety claim of 986,000 non-highway miles per minor collision. They are claiming, risking the lives of their customers and the public, that their objectively inferior product with objectively worse deployment controls is 1,700% better than their most advanced product under careful controls and scrutiny when there are no penalties for incorrect reporting.
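The ratios behind those claims, using only the figures quoted in this thread:

```python
human_non_highway = 178_000   # avg human: non-highway miles per minor collision
robotaxi_supervised = 57_000  # Robotaxi under test, with safety drivers
marketed_fsd = 986_000        # Tesla's marketed non-highway consumer FSD figure

print(f"{human_non_highway / robotaxi_supervised:.1f}x")  # 3.1x worse than the average human
print(f"{marketed_fsd / robotaxi_supervised:.1f}x")       # 17.3x, i.e. the ~1,700% marketing gap
```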


Would be nice if we had a functioning legislative body that did more than pass a single "give billionaires more tax breaks" bill each term.

It is kind of comparing apples to oranges. A more appropriate comparison would be with other taxis.

https://www.rubensteinandrynecki.com/brooklyn/taxi-accident-...

Generally about 1 accident per 217k miles, which still means that Tesla is having accidents at a 4x rate. However, there may be underreporting, and that could be the source of the difference. Also, the safety drivers may have prevented a lot of accidents too.


I'm sure insurers will love your arguments and simply insure Tesla at the exact same rate they insure everyone else.

I think Tesla's egg is cooked. They need a full suite of sensors ASAP. Get rid of Elon and you'll see an announcement in weeks.


Large fleet operators tend to self insure rather than having traditional auto insurance for what it's worth.

If you have a large fleet, say getting in 5-10 accidents a year, you can't buy a policy that's going to consistently pay out more than the premium, at least not one that the insurance company will be willing to renew. So economically it makes sense to set that money aside and pay out directly, perhaps covering disastrous losses with some kind of policy.


> I'm sure insurers will love your arguments and simply insure Tesla at the exact same rate they insure everyone else.

Insurers would charge 4 times as much for insurance I think. Which matches what I've seen when quoting insurance for Teslas before.


Always comes up but think it's worth repeating: if he's not there the stock will take a massive haircut and no Tesla investor wants that regardless of whether it would improve Tesla's car sales or its self-driving. Elon is the stock price for the most part. And just to muse on the current reason, it's not Optimus or self driving, but an eventual merger with SpaceX. My very-not-hot take is that they'll merge within months of the SpaceX IPO. A lot of folks say it ain't happening, but I think that's entirely dependent on how well Elon and Trump are getting along at the moment the merger is proposed (i.e., whether Trump gives his blessing in advance of any announcement).

Tesla's only chance at this point is government money. Consumers just aren't buying. It doesn't help that Elon was heavily involved with Epstein and is constantly spouting white nationalist propaganda on X. This is on top of his gaffe with "My Heart Goes Out to You". Only a certain type of consumer is going to buy from a company like that.

What form would those funds take? I would agree that the government could pull one lever that would cause Tesla's sales to spike, and that would be reintroducing the EV credit. To really juice them they'd have to reintroduce and increase it. I don't think there's another lever at their disposal that would do anything material. The government buying a bunch of vehicles for one or more departments wouldn't move the needle. Basically you have to incentivize the masses to purchase. Of course none of that would happen with the current admin and congress. EVs are anathema to the platform.

As an aside, the situation at Tesla sure is getting stranger. I don't know if it was yesterday or earlier in the week, but Elon saying that at least one Cybercab will be sold to a "consumer" before the end of '26 for under $30k makes no sense (yeah yeah promises promises). But wasn't the idea that Tesla would control the fleet? Why would they sell a person a Cybercab to operate as a taxi? That would mean that there's profit to be had by that buyer and so why the heck wouldn't Tesla just keep that profit for itself and run the entire operation? Some kind of balance sheet gimmick? Offloading the insurance risk to someone else?

Maybe someone reading this long-ass reply will clue me in. And I get it the majority of the folks these days think it's all vaporware, but doesn't the vaporware at least have to make some sense?


There are a couple different schemes that are used to distribute taxpayer money to cronies. The most common are defense contracts for stuff like the proverbial $1000 hammers and such. There are also infrastructure deals, energy deals, subsidies, and bailouts. I know Musk was pushing for defense contracts earlier but they mostly fell through.

https://www.npr.org/2025/02/24/nx-s1-5305269/tesla-state-dep...

> That would mean that there's profit to be had by that buyer and so why the heck wouldn't Tesla just keep that profit for itself and run the entire operation?

I suspect this is because they have less confidence in the ability of the cab to pay for itself and would rather offload that financial risk on the buyer.


Yup as context, in the same time Waymo had 101 collisions according to the same NHTSA dataset.

Waymo drives 4 million miles every week (500k+ miles each day). The vast majority of those collisions occurred while the Waymos were stationary (they don't redact the narrative in crash reports like Tesla does, so you know what happened). That is an incredible safety record.

Is this the same time or the same miles driven? I think the former, and of course I get that's what you wrote, but I'm trying to understand what to take away from your comment.

The old FSD was mostly used on freeways that naturally have a much lower incident rate per mile. And a lot of incidents that happen are caused by inattention/fatigue.

So this number is plausible.


Living, breathing drivers have incentives not to crash

Gigantic lithium batteries on wheels guided by WIP software do not


The former doesn't prevent the tens of thousands of deaths each year.

I only flip on FSD when on the highway. It has come a long way but still too many problems on local roads.


