Embodiment being a killer feature for AGI seems weird to me. Not only is "what is a body" vague, but it is entirely possible to build a body. So why would it prevent AGI?
We do know that embodiment is a sufficient condition for general intelligence: if we assume humans have general intelligence then it is clearly a sufficient condition. But the question of necessity is more interesting, because we have to actually ask what embodiment means.
Is a computer an embodied machine? What about a robot that can explore its environment? What about a simulation with an environment? If no, what makes us distinct? If yes, does embodiment even matter?
To me it is clear that feature space is the more important issue. It is also clear that embodiment helps with creating a more complex and rich feature space. The ability to move around and interact with your environment greatly expands the complexity of the environment.
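A crude way to see the "interaction expands the feature space" point: a passive sensor experiences only the one trajectory the world happens to show it, while an agent choosing among k actions over t steps can generate up to k^t distinct interaction histories. A minimal sketch in Python (the numbers are made up, purely illustrative):

    # Passive observation yields one fixed experience stream; interaction
    # multiplies the reachable histories exponentially in the horizon.
    k, t = 4, 20   # 4 possible actions, 20 steps (illustrative numbers)
    print(k ** t)  # 1099511627776 distinct possible action histories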
I think the bigger question is about our ability to create rich enough environments to generate intelligence. Even if we can get machines into bodies, can we get them into the complex and evolving environmental pressures that we experienced over millions of years (without robots living for similar timescales)? It is reasonable to think that at some point we'd have that kind of computational power. It is also possible that the learning function is incredibly difficult: with a large and complex feature space there are many local extrema, and it may be that general intelligence only arises at a few of them (essentially we could write down an estimate similar to the Drake Equation). But overall, I'm not sure there really is any issue that makes AGI impossible. Maybe it's out of reach at current knowledge and computational limits, maybe for all of the foreseeable future! But I don't see any limitations in physics that are killers.
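To make that Drake-style estimate concrete, here is a toy sketch; every factor is an invented placeholder, not a real number:

    # Toy Drake-style estimate of AGI emerging from embodied training runs.
    # Every factor below is a made-up illustrative guess.
    n_runs = 1e6          # training runs we can afford
    f_rich = 0.01         # fraction with a rich enough environment/feature space
    f_converge = 0.1      # fraction whose learning dynamics converge at all
    f_general = 1e-4      # fraction landing in a local optimum that yields generality

    expected_agis = n_runs * f_rich * f_converge * f_general
    print(expected_agis)  # 0.1 -- with these guesses we'd need ~10x more runs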
> We do know that embodiment is a sufficient condition for general intelligence: if we assume humans have general intelligence then it is clearly a sufficient condition.
I'm not sure what you mean by "sufficient condition" here. Consider:
"We do know that having a moustache is a sufficient condition for general intelligence: if we assume moustachioed humans have general intelligence then it is clearly a sufficient condition."
You're rightly confused, because the GP formulated a non sequitur. Embodiment is, if anything, a necessary condition for the human brain to develop intelligence, not a sufficient one; cats are embodied too.
I would not agree that embodiment is a necessary condition for human level intelligence.
The reason I use sufficient is more broad. A cat does have intelligence. Human level? No. Intelligence? Yes. As I explained in the post, embodiment enables a rich feature space, which is what makes it a sufficient condition. It isn't just the simple act of having a body, but the ability to interact with the environment, which creates a richer environment. I cannot think of any creature (by definition all having bodies) that doesn't have some form of intelligence. But we need to distinguish "human level" intelligence from "intelligence," and "human level" from "human like." These are different things.
> Embodiment being a killer feature for AGI seems weird to me.
For the record, I have just argued in a different thread that this stuff is not needed for "AGI", at least not the way I perceive it as being defined. And I believe that view is consistent with how others in the field define it.
But... if you'll allow my use of the distinction between "human level" intelligence and "human like" intelligence, then I will say that I think embodiment is important to the latter. Why? Because I believe a lot of our learning is experiential, especially the learning that yields a lot of our very basic "model of the world" ideas. Take our "intuitive metaphysics": there are objects in the world. Objects can't be in two different places at the same time. Two objects can't be in the same place at the same time. Etc. etc. And likewise our "intuitive epistemology," which we use to decide what things are true, and so on. I believe that it will be very difficult (although perhaps not impossible) to give an AI very human like equivalents to these things, as well as "intuitive physics" (things fall over when they're off balance, you can't stand a pencil up on its sharpened point, etc.) without having it "experience" a lot of these things.
Now the really interesting spin on that is whether or not a virtual body in a simulation would suffice to a degree. If you built a hyper-realistic "fake world" using a really advanced game engine with somewhat realistic physics and what-not, and "put" the AI "in" that world... maybe it would learn some, or most, or even all, of what we learn. I doubt it would be "all", but who knows?
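To make the "fake world" idea concrete, here's a minimal sketch of the kind of loop I mean: a toy 1-D world whose only discoverable physics is that two objects can't occupy the same place. Everything here (names, sizes, the environment itself) is invented for illustration:

    import random

    # Toy "fake world": a 1-D line containing the agent and one fixed object.
    # The only physics to discover: two objects can't be in the same place.
    class ToyWorld:
        def __init__(self, size=10, obstacle=5):
            self.size, self.obstacle, self.agent = size, obstacle, 0

        def step(self, action):         # action is -1 (left) or +1 (right)
            target = min(max(self.agent + action, 0), self.size - 1)
            blocked = (target == self.obstacle)
            if not blocked:
                self.agent = target
            return self.agent, blocked  # observation: position + collision signal

    world = ToyWorld()
    impossible_moves = {}               # crude "intuitive physics" memory
    for _ in range(1000):
        pos = world.agent
        action = random.choice([-1, 1])
        _, blocked = world.step(action)
        if blocked:
            impossible_moves[(pos, action)] = True

    print(sorted(impossible_moves))     # [(4, 1)]: it "knows" something sits at 5

The agent is never told there is an object at cell 5; it infers the constraint purely by bumping into it, which is the kind of experiential learning I mean.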
> the distinction between "human level" intelligence and "human like" intelligence
I 100% agree with this and think it is an important distinction. I'm glad you brought it up. One of my hobbies has been reading a lot of linguistics and about languages. There's the whole linguistic relativism topic at hand. When you learn just a little about linguistics you find that embodiment is embedded into our language, as the last sentence gave an example of (the use of "hand," which likely went unnoticed). There are lots of cultural references (especially with Americans) that make things more difficult too. Much of our language is dependent upon this multi-agent factor (I'd argue that language itself was born because we are social creatures). There are general language patterns that arise because of embodiment, environment, and culture. This affects the way we think. So I think this distinction between "human level" and "human like" is an important one. If AGI is not trained in a similar fashion to human growth and history it would have very different thinking styles, wants, and needs. But that wouldn't prevent it from being hyper-intelligent. I'll leave you with an overly simplified saying:
> If a lion could speak, I would not understand it.
I think embodiment, if not strictly necessary, is probably the fastest way to AGI, given that AGI includes "common sense" in order to interact with other humans, à la the Turing Test.
In order for a machine to have this "human interface," we will have to share the same environments in which we learn, and simulating the real world is more expensive than building a robot with some kind of software AI that together simulate a human. In other words, it's easier to actually use the world than to simulate it, at least at the same degree of detail, so the real world becomes the increasingly cheaper option.