
They made this public immediately, and it was physically impossible to avoid. It ran out basically right in front of the vehicle, between some cars. A human driver would have hit the dog too, unfortunately.


This is where I think edge cases favor the human driver.

Imagine driving by a playground and, instead of a dog, you see a ball roll across the street in front of your car, but you can already intuit that it is rolling fast enough that it won't be an obstacle in your lane. A human driver can intuit from the context (playground + ball) there is a chance a child may come running after that ball, from between those same cars.

While self-driving tech may be faster at 'seeing and classifying'* objects, is it able to predict from a larger context? I haven't seen much to indicate it can. I think that's where a lot of the edge cases are going to come from. Driving is more than just 'see object and make a decision'.

*Because most of the tech is relatively opaque to the public, this performance seems to be generally taken on faith. But what the Uber accident in AZ showed is that (with that system, at that time, at least) the self-driving tech really wasn't very good at classification in real-time.


The dog fatality might not be a human fatality, but it's still a reported fatality.

Also relevant is when Anthony Levandowski at Google / Waymo caused a freeway incident through unsafe testing: https://www.businessinsider.com/anthony-levandowski-google-s...

While the DMV did not require reporting at the time, when Levandowski was later on trial, Google / Waymo withheld the video of the incident even after its existence was finally reported on.



