
This article is a muddled, ridiculous mess. I read the first few paragraphs and couldn't motivate myself to do any more than skim the rest. As far as I can tell, there's nothing new here, and the author's argument that "AGI will not be realized" might be true if you stick to his ad-hoc definition of AGI, which seems to conflate "human level" intelligence and "human like" intelligence.

Yes, it's probably true that AIs will not have "human like" intelligence, for some of the reasons cited. Lack of embodiment, and the associated experiential learning, is the chief reason I would personally cite for why this is true. However, that line of reasoning is completely irrelevant unless A. you make the mistake of conflating "human like" and "human level", OR B. you very specifically demand that your AI must be "human like."

Everybody else realizes that the goal is to build an AI that is as general as human intelligence, not necessarily to build an artificial human.

Edit:

To go back to the embodiment issue for a moment... I think embodiment is important. I've been playing around with building a trivial little shell to package some AI research in, one that can be carried around (initially) and "experience" the world via a variety of different sensors. And I do think, again, that embodiment will probably be necessary to get an AGI that can "act human". I just don't see that as being the goal. Yeah, yeah, Turing Test, blah, blah, I know. As much respect as I have for Turing (and it's a lot, obviously), I don't actually consider the Turing Test to be very interesting vis-a-vis evaluating an AI. In fact, I think focusing on it could be harmful, because getting an AI to pass it amounts to teaching the AI to lie well. That seems counter-productive to me.

As for why I think embodiment would matter to making a "human like" (as opposed to "human level") AGI: it mainly comes down to experiential learning. Imagine, if you will, what you know about the meaning of terms like "fall", or "fall down". How much of your knowledge of this is rooted in the fact that you, in your body, have fallen down? And how does that play into your ability to construct metaphors involving other things "falling"? And so on.

But I don't think any of this stuff is necessary to make an AGI that can operate at a human level of generality and solve useful problems on our behalf. And by "operate at a human level of generality" I mean something approximately like "the same AI software, with appropriate training, can do anything from playing chess, to driving a car, to coming up with new theories in physics and chemistry (and so on)."


