The question that comes up for me at this point is whether there is much that is dispensable about humans when it comes to exhibiting intelligent behavior (running on 100 watts, no less). It turned out that, when abstracting useful flight dynamics from birds, there was a rather simple rule: lift > weight. Sure, reducing it to that simple formula may not help you build a machine that maneuvers as well as birds and insects do, but we didn't need that for flight. We just wanted to cross an ocean in less than six weeks.
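To make that rule concrete, here's a minimal sketch of the condition an airframe has to satisfy: steady level flight needs lift (L = 0.5 * rho * v^2 * S * C_L) to at least match weight (W = m * g). The density, mass, wing area, and lift coefficient below are illustrative assumptions, not real design data:

    # "lift > weight" as a check for steady level flight.
    # Lift L = 0.5 * rho * v^2 * S * C_L must meet or exceed weight W = m * g.
    # All numbers below are rough, made-up values for illustration.

    RHO = 1.225  # sea-level air density, kg/m^3
    G = 9.81     # gravitational acceleration, m/s^2

    def lift(v, wing_area, c_l):
        # Lift in newtons for airspeed v (m/s), wing area S (m^2), lift coefficient C_L.
        return 0.5 * RHO * v ** 2 * wing_area * c_l

    def can_fly(mass, v, wing_area, c_l=1.0):
        # True if lift at airspeed v meets or exceeds weight m * g.
        return lift(v, wing_area, c_l) >= mass * G

    # A hypothetical 600 kg airframe with 16 m^2 of wing at 30 m/s:
    print(can_fly(mass=600, v=30, wing_area=16))  # True: ~8.8 kN lift vs ~5.9 kN weight

Crude, but it gets you across the ocean; it just doesn't get you a hummingbird.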
Whether the flight analogy carries over to intelligence, in my mind, depends on how many of our subsystems are 1) indispensable for intelligence, and 2) reasonably computationally reducible.
From neurotransmitters to ganglion cells to hormones and bacteria in the gut, we have found a lot of subsystems that contribute to our ability to make the diverse, everyday decisions that the ideal AI we are discussing would have to make. The cortex actually seems like one of the most orderly, and therefore most reducible, parts of the apparatus. The hormonal system that regulates emotion-based decision making may be far more difficult to abstract and less efficient to model. And there are many, many other systems. Could it be that without the details of those subsystems, our AI would behave suboptimally in much the same way a human does? How much can we get away with reducing biology to simpler rules while maintaining general intelligence?
It is possible, I suppose, that all those biological dependencies are merely hampering an ideal algorithm for generalized intelligence of which we are only crude approximations, a powerful and simple algorithm we could finally free of biological constraints. But it's too late to get into the probability of that hypothesis! In any case, it's not clear to me how that kind of nonhuman intelligence would serve us.
You're falling into the same naturalistic fallacy the likes of Clément Ader fell into: thinking you have to imitate nature to get flight (or AI) done. I trust the underlying principles of intelligence are much simpler than their current biological implementations.