Hacker News

> GPT-3 doesn't have any knowledge of how the world actually works. It only appears to have some level of background knowledge, to the extent that this knowledge is present in the statistics of text. But this knowledge is very shallow and disconnected from the underlying reality.

Without excessive effort, humans don't have any knowledge of how the world actually works. They only appear to have some level of background knowledge, to the extent that this knowledge is present in their faint memories. But this knowledge is very shallow and disconnected from the underlying reality.



This is just not true.

For example, all humans develop the notion of object permanence by about six months of age: the understanding that things don't cease to exist just because you can't see them.

ML systems need to be specifically trained to have object permanence, and GPT-3 almost certainly does not possess it.

Like, I get that it's hip to boost ML and GPT-3, and that all the stuff humans can do seems trivial by comparison, but that really isn't the case, and the assumption is holding progress in AI back massively.


Object permanence isn't even defined in the stateless world of the GPT-3 API.

My comment was merely a jab at Yann's poor argument. I don't find humans to be trivial at all, but neither do I believe that they are infinitely complex.

The linked Nabla article is fair, though I would appreciate more technical details. It seems to be using the API in zero-shot fashion, which is not how one would get the most out of it.


Yeah, agreed. My concern is that object permanence isn't even defined in self-driving models ;)


I understand that you are being sarcastic, but I'll pretend that you're not ;)

There was a video showing the pattern-recognition layer of a car 'forgetting' objects it had previously correctly identified. There is clearly room for improvement, and some level of concern is warranted, but it isn't as big a deal as it seems, because surely at the higher level objects are remembered. To draw an analogy with human cognition, humans also aren't consciously aware of all the objects in their field of view all the time, but that doesn't mean they forget their existence.
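The kind of "higher level" memory described here is standard in multi-object tracking: the tracker ages a track out only after several consecutive missed detections, rather than forgetting an object the moment the detector drops it for a frame. A minimal sketch of that idea (all names and the threshold are hypothetical, not any particular vendor's system):

```python
# Sketch of track persistence: the per-frame detector may drop an object
# for a few frames, but the tracker only forgets it after MAX_MISSES
# consecutive misses.
MAX_MISSES = 5  # assumed threshold, purely illustrative

class Track:
    def __init__(self, track_id, position):
        self.track_id = track_id
        self.position = position
        self.misses = 0  # consecutive frames with no detection

def update_tracks(tracks, detections):
    """tracks: dict id -> Track; detections: dict id -> position
    observed this frame. Returns the updated track dict."""
    for track_id, pos in detections.items():
        if track_id in tracks:
            tracks[track_id].position = pos
            tracks[track_id].misses = 0  # seen again: reset the clock
        else:
            tracks[track_id] = Track(track_id, pos)
    # Age undetected tracks instead of deleting them immediately.
    for track_id in list(tracks):
        if track_id not in detections:
            tracks[track_id].misses += 1
            if tracks[track_id].misses > MAX_MISSES:
                del tracks[track_id]
    return tracks

tracks = update_tracks({}, {1: (0.0, 0.0)})  # object 1 appears
for _ in range(5):
    tracks = update_tracks(tracks, {})       # detector drops it
survived = 1 in tracks                        # still remembered
tracks = update_tracks(tracks, {})           # one miss too many
forgotten = 1 not in tracks
```

So a blip in the detection layer doesn't by itself mean the system has no memory of the object; the question is whether that memory layer exists and works well.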


I'm honestly not. I've seen that video, and given my experience with vision models, I'm actually concerned that object permanence is not being focused on.

> There clearly is room for improvement, and some level of concern is warranted, but it isn't as much of a big deal as it seems, because surely at the higher level objects are remembered.

I would be very surprised if this were true.

> To draw an analogy with human cognition, humans also aren't consciously aware of all the objects in their field of view all the time, but that doesn't mean they forget their existence.

This is exactly object permanence: babies start looking in the right direction for objects that disappear from view. I assume Waymo at least has some kind of model for this, but I'm not convinced it's particularly effective.

Personally, I think that self-driving will require a limited theory of mind model, and we are nowhere near that.


> I would be very surprised if this were true.

At the very least there must be some kind of trajectory model, which implies that multiple observations of the same object are analyzed as a sequence. How would it work without object permanence? Would it just erase objects undetected in the current frame and pretend they never existed? I suspect you're too focused on the vision aspect of the vehicle control system.
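A trajectory model of the kind described would typically coast an undetected object forward under a motion model rather than erasing it. A constant-velocity sketch of that behavior (assumed for illustration; real trackers use something like a Kalman filter):

```python
# Sketch: when an object goes undetected, predict it forward under a
# constant-velocity model instead of erasing it from the world model.
def step(state, detection=None, dt=0.1):
    """state = (x, v) for a 1-D position and velocity.
    With a detection, snap to it and crudely re-estimate velocity;
    without one, coast: x advances by v * dt, velocity unchanged."""
    x, v = state
    if detection is None:
        return (x + v * dt, v)  # object permanence: keep predicting
    v_new = (detection - x) / dt  # crude single-step velocity estimate
    return (detection, v_new)

state = (0.0, 10.0)                # at x = 0 m, moving 10 m/s
state = step(state, detection=1.0) # observed at 1.0 m
state = step(state)                # occluded: coast to 2.0 m
state = step(state)                # still occluded: coast to 3.0 m
```

The point being: the sequence model's job is exactly to answer "where is this object now, given that I can't see it this frame," which is object permanence in all but name.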



