We also have to remember that the Turing test isn't a real measure of intelligence. After all, a computer following rules is no more intelligent than the rules themselves.
For example, if someone gave me a ton of rules for converting sentences from English to another language, and the output was amazing, would that mean I'm intelligent? Not likely; it would just mean I can follow the conversion rules.
I think the issue is that too much emphasis is being put on the Turing test as a measure of intelligence, when it's really more a measure of how well a computer can follow conversations and social norms. Just because you can fool a real person doesn't mean you're intelligently interacting with that person. Likewise, if I can fool another person into thinking I speak another language by following translation rules, it doesn't mean I can actually speak that language at all!
I disagree with your conclusion that such a system is inherently not intelligent.
If you accept that humans are intelligent, and that they can judge that another human is intelligent by conversing with them across a text-only channel, then you run into a big problem by stating that a Turing-test-passing algorithm is unintelligent. To do so would expose the fact that your definition of "intelligence" secretly includes the clause "...and is a human," which makes "intelligent machine" a contradiction in terms. It is essentially an example of the No True Scotsman fallacy, because you're revealing a new facet of your claim only when faced with an apparent counterexample.
If you're defining "intelligence" to be a purely human trait, then come right out and say so, and everyone will agree that on your terms a machine cannot be intelligent. Of course, I would argue that such a definition isn't very useful, since it basically makes "intelligent" and "human" synonyms.
"If you accept that humans are intelligent, and that they can judge that another human is intelligent by conversing with them across a text-only channel, then you run into a big problem by stating that a Turing-test-passing algorithm is unintelligent. "
I would say that passing the Turing test is a necessary but not sufficient condition for complete human intelligence. It would be astounding, and a major feat in the field, but there must be more to the definition of human intellect than simply carrying on a text conversation.
I would think it depends entirely on the subject matter of the text conversation. It's probably possible today to make a chat bot that can converse about (nothing other than) the weather.
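To make the point concrete, here's a minimal sketch of such a weather-only chatbot. All the rules, replies, and names are invented for illustration; the point is that fluent-seeming replies come from nothing but keyword lookup.

```python
# A toy rule-based "weather" chatbot: it follows hard-coded pattern rules,
# illustrating how plausible replies require no understanding at all.

RULES = [
    ("rain",  "I heard it might rain later. Do you carry an umbrella?"),
    ("sunny", "Nothing beats a sunny day. Are you getting outside?"),
    ("cold",  "Brr! Time for a warm coat, I'd say."),
    ("hot",   "Stay hydrated! How warm does it get where you live?"),
]

def weather_bot(message: str) -> str:
    """Return a canned reply triggered by the first keyword found."""
    lowered = message.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    # Fallback keeps the conversation going without understanding anything.
    return "Interesting! So, how's the weather over there?"

print(weather_bot("Looks like rain today"))
print(weather_bot("Tell me about quantum physics"))
```

Notice that even the off-topic question gets steered back to the weather, which is exactly how a narrow bot survives a casual conversation.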
Agreed. But we'd have to agree on what standard of intelligence we're considering. Turing's is only one standard, and I would say not a very high one compared to human intelligence.
Eh... The Turing test (Turing was a person; the Turing test is a concept he posed) is restricted in the way that it is precisely to eliminate irrelevant factors such as robotics.
The concept itself that is presented is quite sound: If you cannot tell that it's not intelligent, how can you say that it is not?
I cannot think of a better test. The only weak point, as I see it, is that it does not say anything one way or the other about intelligence that is fundamentally different from our own (for example, intelligence that doesn't happen to use natural language).
I actually agree, but for different reasons. I don't think human judges (that is, normal, non-geek human judges) are a good measure of an entity's intelligence. I personally have managed to convince at least one person that I'm a machine.
Sufficiently good pattern matching that produces reasonable-enough-sounding sentences would probably fool most casual observers. For me, the validity of a Turing test relies heavily on how long, and under how much pressure, the AI has to keep up its illusion of humanity.
A more objective test of intelligence would be nice though.
Yes, I meant the Chinese Room. I couldn't remember the exact term. Thank you for that!
One test I heard of that sounded pretty promising is the ability of the computer to discover new patterns in existing data. Not just pattern matching, because that's possible without intelligence, but discovering new patterns in the existing data and using them to correctly predict the future. Now that's incredibly hard to do!
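The distinction can be sketched in a toy way: the "discovery" step below infers a rule (a constant step) from raw data and then uses it to predict an unseen value. The function names and the arithmetic-sequence setup are invented purely for illustration; real pattern discovery is, of course, vastly harder.

```python
# Toy contrast between pattern *matching* and pattern *discovery*:
# instead of checking data against a known template, we extract a rule
# from the data itself and use it to predict the future.

def discover_rule(seq):
    """Infer a constant step from the data, if one exists."""
    diffs = {b - a for a, b in zip(seq, seq[1:])}
    if len(diffs) == 1:
        return diffs.pop()   # a "new pattern" extracted from the data
    return None              # no simple rule found

def predict_next(seq):
    """Use the discovered rule to predict the next value."""
    step = discover_rule(seq)
    if step is None:
        raise ValueError("no constant-step pattern in the data")
    return seq[-1] + step

print(predict_next([3, 7, 11, 15]))  # → 19
```

Even this trivial version shows the two-stage shape of the test: first induce a rule nobody handed you, then bet on it against future data.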
Isn't the Turing test strictly more powerful than the test you describe, since your test can be administered as part of a Turing test? It seems like humans find new patterns all the time.