This is why the desire for Strong AI boggles my mind. In order for a computer to operate at a "human" level, it would need to make decisions based on things like ambition and fear and greed. It would also have to constantly make mistakes, just like we do.
If it didn't have character flaws, it wouldn't be operating at a "human" level. But if it does have these character flaws, how useful would it really be compared to a real human? Is the quest for Strong AI just a Frankensteinian desire to create artificial life?
I'm curious if there are any good papers looking into stuff like this.
Presumably the AI in the Google cars must have something like a fear of crashing or hitting a pedestrian, even if it's just a score that the algorithms calculate.
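(For what it's worth, that kind of "fear" could be as simple as a heavily weighted cost term when scoring candidate trajectories. Here's a minimal, purely illustrative sketch in Python; all of the names and numbers are made up, not taken from any real self-driving stack.)

```python
# Hypothetical sketch: "fear of crashing" as nothing more than a huge
# penalty in a cost function. The planner scores candidate trajectories
# and picks the cheapest one; collision risk dominates everything else.

from dataclasses import dataclass

@dataclass
class Trajectory:
    collision_probability: float  # estimated chance of hitting something
    comfort_penalty: float        # hard braking, jerkiness, etc.
    progress: float               # distance gained toward the goal

COLLISION_WEIGHT = 1e6  # so large that collision risk outweighs all else

def cost(t: Trajectory) -> float:
    """Lower is better. The collision term acts like 'fear'."""
    return (COLLISION_WEIGHT * t.collision_probability
            + t.comfort_penalty
            - t.progress)

def pick_trajectory(candidates: list[Trajectory]) -> Trajectory:
    return min(candidates, key=cost)

# A slightly risky but faster option loses to a safe, slower one:
safe = Trajectory(collision_probability=0.0001, comfort_penalty=2.0, progress=10.0)
fast = Trajectory(collision_probability=0.01, comfort_penalty=1.0, progress=15.0)
assert pick_trajectory([safe, fast]) is safe
```

Whether a score like that counts as "fear" in any meaningful sense is exactly the kind of question I'd hope those papers address.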