Hacker News

> Waymo alone services tens of thousands of autonomous rides per month (edit: see sibling comment, I was out of date, it's currently hundreds of thousands of rides per month, and increasing)

But they aren't particularly autonomous: there's a fleet of humans watching the Waymos carefully and intervening frequently, because every 10-20 miles or so the system makes a stupid decision that needs human intervention: https://www.nytimes.com/interactive/2024/09/03/technology/zo...

I think Waymo only releases the "critical" intervention rate, which is quite low. But for Cruise, non-critical interventions happened every 5 miles, and I suspect Waymo's rate is similar. It appears that Waymos are way too easily confused and, left to their own devices, make awful decisions about passing emergency vehicles, etc.

Which is in fact consistent with what self-driving skeptics were saying all the way back in 2010: deep learning could get you 95% of the way there but it will take many decades - probably centuries! - before we actually have real self-driving cars. The remote human operators will work for robotaxis and buses but not for Teslas.

(Not to mention the problems that will start when robotaxis get old and in need of automotive maintenance, but the system didn't have any transmission problem scenarios in its training data. At no time in my life has my human intelligence been more taxed than when I had a tire blowout on the interstate while driving an overloaded truck.)



The link you gave does not support your claims about Waymo, it's just speculation.

What "critical" intervention rate are you talking about? What network magically supports the required low latencies to remotely respond to an imminent accident?

How does your theory square with events like https://www.sfchronicle.com/sf/article/s-f-waymo-robotaxis-f... that required a service team to physically go and deal with the stuck cars, rather than just dealing with them via some giant remotely intervening team that's managed to scale to 10x rides in a year? (Hundreds of thousands per month absolutely.)

Sure, there's no doubt a lot of human oversight going on still, probably "remote interventions" of all sorts (but not tele-operating) that include things like humans marking off areas of a map to avoid and pushing out the update for the fleet, the company is run by humans... But to say they aren't particularly autonomous is deeply wrong.

I would be interested if you can dig up some old skeptics, plural, saying probably centuries. May take centuries, sure, I've seen such takes, they were usually backed by an assumption that getting all the way there requires full AGI and that'll take who knows how long. It's worth noticing that a lot of such tasks assumed to be "AGI-complete" have been falling lately. It's helpful to be focused on capabilities, not vague "what even is intelligence" philosophizing.

Your parenthetical seems pretty irrelevant. First, models work outside their training sets. Second, these companies test such scenarios all the time. You'll even note in the link I shared that Waymo cars were at the time programmed to not enter the freeway without a human behind the wheel, because they were still doing testing. And it's not like "live test on the freeway with a human backup" is the first step in testing strategy, either.


> What "critical" intervention rate are you talking about? What network magically supports the required low latencies to remotely respond to an imminent accident?

I was being vague - Waymo tests its autonomous algorithms with human safety drivers before deploying them in remote-only mode. Those drivers occasionally have to yank control from the vehicle. That is a critical intervention, and the rates seem low enough that riders almost never encounter a problem (though it does happen). Waymo releases that data, but it doesn't release data on "non-critical interventions," where remote operators help with basic problem solving during normal operations. That's the distinction I was making and didn't phrase very clearly. I think those operators are intervening at least every 10-20 miles.

And since those interventions always involve common-sense reasoning about some simple edge case, my claim is that the cars need that common-sense reasoning in order to get rid of the humans in the loop. I'm not convinced there are even enough drivers in the world to generate the data current AI needs to solve those edge cases - things like "the fire department ordered brand-new trucks and the system can't recognize them because the data literally doesn't exist."

> First, models work outside their training sets.

This is incredibly ignorant, pure "number go up" magical thinking. Models work for simple interpolations outside their training data, but a mechanical failure is not an interpolation; it's a radically different regime which current systems must be specifically trained on. AI does not have the ability to causally extrapolate from physical reasoning the way humans do. I had never experienced a tire blowout, but I knew immediately what went wrong, relying on tactile sensations to determine that something was wrong in the rear right, plus basic conceptual knowledge of what a car is to conclude a tire must have exploded. Even deep learning's strongest (reality-based) advocates acknowledge this sort of thinking is far beyond current ANNs. Transformers would need to be trained on the scenario data. There are mitigations that might work - simply coming to a slow stop when a separate tire diagnostic redlines, etc. - but these might prove brittle and unreliable.
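To make concrete what such a hand-coded mitigation could look like, here's a rough sketch of a rule-based fallback that overrides the learned planner when a tire-pressure diagnostic redlines. Every name and threshold below is hypothetical and illustrative - this is not Waymo's (or anyone's) actual system:

```python
# Hypothetical sketch of a hand-coded safety fallback: if any tire-pressure
# reading drops below a redline threshold, override the learned planner and
# command a gentle stop. All names and numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TireDiagnostics:
    # (front_left, front_right, rear_left, rear_right) pressures in kPa
    pressures_kpa: tuple

MIN_SAFE_PRESSURE_KPA = 150.0  # illustrative redline threshold
GENTLE_DECEL_MPS2 = 1.5        # gentle braking, well below emergency levels

def plan_decel(diag: TireDiagnostics, planner_decel_mps2: float) -> float:
    """Return the deceleration to command, letting the fallback override."""
    if min(diag.pressures_kpa) < MIN_SAFE_PRESSURE_KPA:
        # Blowout or rapid flat suspected: ignore the learned planner's
        # choice and force at least a gentle controlled stop.
        return max(planner_decel_mps2, GENTLE_DECEL_MPS2)
    return planner_decel_mps2

# Normal driving: the planner's command passes through unchanged.
print(plan_decel(TireDiagnostics((220, 225, 218, 221)), 0.0))  # 0.0
# Rear-right blowout: the fallback forces a gentle stop.
print(plan_decel(TireDiagnostics((220, 225, 218, 40)), 0.0))   # 1.5
```

The point of the sketch is that such a rule sidesteps the training-data problem entirely - but it only covers the one failure mode someone thought to write a rule for, which is exactly why it can end up brittle.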

> Second, these companies test such scenarios all the time.

No they don't! The only company I am aware of which has tested tire blowouts is Kodiak Robotics, and that seemed to be a slick product demo rather than a scientific demonstration. I am not aware of any public Waymo results.


> Which is in fact consistent with what self-driving skeptics were saying all the way back in 2010: deep learning could get you 95% of the way there but it will take many decades - probably centuries! - before we actually have real self-driving cars. The remote human operators will work for robotaxis and buses but not for Teslas.

If this is the end result, this is already a substantial business savings.


Centuries seems like quite a stretch; we haven't even been doing this computer stuff for one century yet.


The problem is not "computers," it's intelligence itself. We still don't know how even the simplest neurons actually work, nor the simplest brains. And we're barely any closer to scientific definitions of "intelligence," "consciousness," etc than we were in the 1800s. There are many decades of experiments left to do, regardless of how fancy computers might be. I suspect it will take centuries before we make dog-level AI because it will take centuries to understand how dogs are able to reason.



