By 2029 no computer will have passed the Turing Test (longbets.org)
34 points by dmitriy_ko on Feb 10, 2012 | hide | past | favorite | 52 comments


By 2029, an increasing number of humans will fail the Turing Test.


This is just brilliant. I just had to write a comment to show my appreciation.


I haven't yet seen a bot that could answer questions about the specific setting you're in while talking to it. For example, if I ask a bot to repeat what I asked it just before, it can't. That's because bots like Cleverbot operate by crowdsourcing answers and then parroting them back at random. Also, when I ask it the same question multiple times, I get different answers. Maybe bots should store procedures in their database, not just crowdsourced answers, and keep the context of the current session. Or something.
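What this comment asks for, keeping context of the current session, only takes a small amount of per-conversation state. A minimal hypothetical sketch (the class, method names, and canned replies are my own illustration, not how Cleverbot actually works):

```python
class SessionBot:
    """Toy chatbot that keeps per-session context, so it can answer
    meta-questions like 'repeat what I just asked you'."""

    def __init__(self):
        self.history = []  # user utterances from this session only

    def reply(self, utterance):
        if utterance.lower().startswith("repeat"):
            # Answer from session context instead of a canned response.
            answer = self.history[-1] if self.history else "You haven't said anything yet."
        else:
            answer = "Interesting, tell me more."  # placeholder canned reply
        self.history.append(utterance)
        return answer

bot = SessionBot()
bot.reply("What's the weather like?")
print(bot.reply("Repeat what I asked you."))  # → "What's the weather like?"
```

A crowdsourced responder could bolt this on by checking the session history before falling back to its database of harvested answers.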

Maybe they passed the Turing test, but that just convinced me that the Turing test really means nothing. You can't determine anything for sure with the Turing test, only a probability. (Many humans act stupid or are stupid; if a machine is stupid, some may decide the machine is human-like.)

What I would like to see instead of a machine passing the Turing test is a machine species that could survive in nature. If it can survive autonomously, like some kind of animal, then that machine species could be said to be intelligent.


Let's assume that humans are complex molecular machines. That is, our bodies obey the laws of physics and are made of atoms that behave in predictable ways.

If we know what we are made of (the molecules and how they are arranged) and how these molecules can be modeled (i.e. quantum mechanics) then it is only a matter of time until an entire human can be modeled on a computer. Once you can model the entire body, then you can have a computer that can pass the Turing test, because it more or less is human.

If humans can be modeled as deterministic systems that follow physical laws, then computers will simulate them at some point in the future.


There are two problems that I can see here.

First, the complexity in the physics/chemistry of our molecular machines is such that "only a matter of time" may extend longer than the time span of our species' existence.

Second, it is completely possible that we are more than molecular machines. There may be an aspect to our functioning that is beyond the physical. That aspect of our existence may not be possible to replicate.


> There may be an aspect to our functioning that is beyond the physical. That aspect of our existence may not be possible to replicate.

What does "beyond the physical" even mean? I assume you mean something "supernatural," meaning something that is "true/real," yet impossible to study or even verify empirically.


Beyond the physical, meaning properties that cannot be replicated by manipulating physical atoms and molecules. I wouldn't call them "supernatural" at all because (assuming they do exist) they are absolutely a natural and even necessary part of life.

I would assume they would be possible to verify in some way once we've reached the stage where hard AI is nearly achieved.


You imply that atoms and molecules are elementary particles, which we already know to not be the case. It's possible that quantum effects, which seem to be probabilistic (depending on your interpretation), are vital for human intelligence. However, that seems fairly unlikely. While it's still mostly speculation, most experts tend to think that classical mechanics are sufficient to explain the functions of the human brain, which would imply that an identical arrangement of atoms/molecules would also be an intelligent system, and a good computer simulation (which is currently still unfeasible) of the atoms/molecules would also be an intelligent system.

http://en.wikipedia.org/wiki/Quantum_mind


How could molecules and atoms both be elementary particles? I'm making no such implication. Nor did I mention anything about quantum effects.


The problem here is that you are being so vague that your comment can really only be understood if you actually are talking about something 'supernatural'.

Biology is really not much more than a self-replicating subset of chemistry. Without evidence to suggest some sort of pseudo-supernatural difference between the simplest of single-cell organisms and Homo sapiens, I think the only reasonable position is that whatever it is that "performs" intelligence falls under the realm of the physical sciences.

Until we acquire evidence to suggest otherwise, it is silly to assume anything else.


I agree, that is a very reasonable position. But we are still a long way from proving the position. And until the position has been demonstrated true, then there is room for speculation. Which is what longbets is all about.


The burden of proof is definitely on anybody asserting that intelligence isn't a physical phenomenon. We have no evidence to suggest that it isn't, but a hell of a lot to suggest that it is.

Longbets is fine, but such unfounded speculation doesn't have much use in serious conversation.

> proving the position

It seems like you might be misunderstanding the role of science.


If it's not supernatural, then science can evaluate it and, in time, even understand it.

After that, it's just a question of engineering.


There are completely natural phenomena that have been studied and evaluated for centuries and still have no concrete explanation. We may not have sufficient time to answer some of these questions, and we may also reach a limit to how much we can understand and discover about ourselves.


Again, you are speaking in terms that make it difficult to extract your meaning, so correct me if I have failed to understand what you are trying to say.

In particular I would be interested to hear what some of these phenomena are. See, the lack of a "concrete explanation" isn't really a useful thing to say about something. We don't yet have anything that could be called a "concrete explanation" of even things like gravity. What we do have are theories that allow us to make useful predictions about how gravity will affect things. Imperfect as those theories, like all theories, are, we've still put stuff on other planets with them.

There certainly are many "open problems" in science, but to my knowledge we have yet to encounter anything that is fundamentally beyond the grasp of the physical sciences. Take for example the Coronal Heating Problem: on the surface it seems to violate the 2nd law of thermodynamics (which is some rather heavy shit...) but even in extreme cases like that, science soldiers on. There are numerous hypotheses that have been proposed to explain it, and over time our understanding of it will only become more robust.

Anyway: To address this particular concern I'd like to say that until we have any strong evidence to suggest that humans possess a more powerful model of computation than we already regularly reason about, we have to assume that they do not.


Well let's take your example of gravity. We know a lot about it. Like you said, we've put stuff on other planets. And we have satellites in orbit. Amazing. But even with that we're still theorizing about exactly how it works. And we definitely can't recreate it.

It's possible that there are hard limitations on our ability to recreate some of the very fundamental phenomena we experience in everyday life. Intelligence may be among them.


The only theoretical hard limit that makes sense is a limit on information. If you can't transmit information from A to B faster than C, then yes you've got a hard limit.

Every other natural phenomenon that doesn't obscure information that way can be figured out. There may of course be soft limits, like it taking a billion years to figure out.


That was my entire point with bringing up gravity...

Anyway, comparing creating intelligence to creating gravity is nonsensical. I literally cannot make sense of that.


> Coronal Heating Problem

Serendipitously, given you mention this now: only a couple of weeks ago they announced they have a good candidate for the explanation, which is Type II spicules.

http://www.nsf.gov/news/news_summ.jsp?cntn_id=118338&WT....


There are far more interesting implications for humanity riding on this bet than twenty thousand dollars going to charity.


That is assuming the Turing test is a good test for intelligence.


Cleverbot supposedly has passed the Turing test. I have my doubts, but there is something to be said about this Internet accomplishment. There will always be doubt about passing this test, as it is not a mathematical proof, which is probably why Turing proposed it.


> Cleverbot supposedly has passed the Turing test.

I seriously doubt it. Here's a conversation I just had with it:

Me: What's your name?

CB: My name?

Me: Yes.

CB: What's your name?

Me: Suzan.

CB: Nope.

Me: Nope what?

CB: Nope, allessander is not my name.

This is sub-Eliza quality...


The version brought to the test had more computing power for the one conversation, and other tune-ups IIRC.

EDIT: http://cleverbot.com/human

"Cleverbot was given more processing power for this test than it can be online. It had two dedicated, fast computers with solid state drives while talking to just 1 or 2 people at once. Online there are often 1000 people talking to each machine. We know you'd all love to talk to it the powerful version, but we need a lot more servers first! "


Thanks for the link. I had similar thoughts to the original downvoted commenter. I talked to one of the various IM bots in 2006 (they've been around a lot longer than that), and I thought that, apart from its constant advertising to me, it was remarkably human-like, especially when it sometimes used 'u' instead of 'you'. If it had more processing power, a better memory, and didn't have a goal of making money through advertising, I thought back then it had a decent chance at fooling humans.

That's all the Turing Test is: fooling humans, and humans are incredibly easy to fool; you don't need human-level intelligence to fool one. Just several months ago I realized that a response I got on a craigslist posting was made by a bot only after I replied and it replied back with an almost word-for-word copy of its original message (and urged me to fill out a form of personal info). Simple hacks like avoiding repeating the same things, remembering what information was shared to whom and by whom, and the occasional intentional grammatical error go a long way toward tricking a human. I think we could have had multiple Turing Test passes over the past couple of years if that were actually an important goal. But I think most people in AI realize there are more interesting problems to work on than AI PR, so projects like Cleverbot aren't given top priority.
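The "simple hacks" mentioned above can be sketched in a few lines; the function names and the 'u'-for-'you' substitution rate here are my own illustrative choices, not anyone's actual bot:

```python
import random

def humanize(text, seed=None):
    """One of the 'intentional error' hacks: occasionally swap
    'you' -> 'u', mimicking casual human typing."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        if word.lower() == "you" and rng.random() < 0.5:
            out.append("u")
        else:
            out.append(word)
    return " ".join(out)

seen = set()  # canned lines already used in this conversation

def respond(candidates, rng=random):
    """The 'avoid repeating yourself' hack: prefer canned replies
    not yet used in the current conversation."""
    fresh = [c for c in candidates if c not in seen] or candidates
    choice = rng.choice(fresh)
    seen.add(choice)
    return choice
```

Cheap tricks like these target the human judge's expectations rather than adding any intelligence, which is exactly the comment's point.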


Andrew Ng's work is pretty neat: http://youtube.com/watch?v=ZmNOAtZIgIk


Why would a computer pass the Turing test? We haven't made much progress in building intelligent machines since the late AI winter.


> We haven't made much progress in building intelligent machines since the late AI winter.

We haven't?

We got the AI winter because DARPA was for a while willing to fund projects that made outrageous promises, so they got outrageous promises that nobody could possibly deliver on and eventually gave up.

I mean, they were funding people who wanted to do things like build semi-autonomous robots that could drive a car to deliver supplies or fly a plane to do air reconnaissance or wheel around to sweep a field for mines. They funded claims that computers would be able to do speaker-independent voice recognition. Computer programs could beat the best human players at a game of chess. Or imagine a computer program that had enough common knowledge indexed and accessible that it could win contests that involve wordplay and trivia questions.

Wait, all that stuff has happened already, much of it just in the last few years.

The real problem with AI is it's defined to be "stuff we can't do yet". As soon as we manage to do one of those things, it stops being called "AI".

In short, I like Kurzweil's odds on this one. 2029 is a long time away in computer years and we've made a heck of a lot of progress. And he's right that humans think linearly. Exponential growth curves just aren't intuitive to us, so we underestimate what a reasonable amount of progress on a long-term goal looks like. (IBM's win at Jeopardy, Apple's Siri launch, and Google's self-driving cars are all things that happened since Ray made his prediction. Are these not AI progress?)


I am not easily impressed. We wouldn't have any need for driverless cars in the first place if we just had well-designed cities. Furthermore, Siri and Watson are just symbol-shifting applications which work for iOS and SUSE Linux. They are not much more impressive than similar symbol-shifting applications that were available a few decades ago.

I don't believe that the von Neumann architecture, of which all the applications you mentioned are a part, is ever going to yield anything more than applications with sophisticated symbol-shuffling systems which are actually totally stupid behind the scenes. At least before the AI winter we had impressive AI hardware, including the Lisp machines.

Now just because we have done a pretty poor job doesn't mean things have to continue to be this way in the future. DARPA is currently working to develop memristors, which may become the future of AI. Sadly, DARPA isn't getting nearly enough funding.

http://www.engadget.com/2009/07/14/are-memristors-the-future...

To be realistic, Ray Kurzweil's conviction that there will be strong AI in his lifetime is just wishful thinking. Feel free to prove me wrong.


If "sophisticated symbol shuffling" is sufficient to beat the world's best chess players, the world's best Jeopardy players, and the world's best pilots, and to safely drive a car while following all the rules of the road, what makes you think it's not sufficient to hold a human-like conversation? I'm pretty sure people are also what you would call "actually totally stupid behind the scenes"; the main difference is that we didn't design human brains, so we aren't (yet) able to understand much of what they're doing. But understanding isn't even necessary for replication.

The fact that people can think proves thinking is possible for mechanical systems; in the worst case we'll re-implement a human brain without understanding it.

> At least before the AI winter, we had impressive AI hardware, including the Lisp machines.

Oof. You're talking about machines with less processing power than an iPhone. Yeah, my dad programmed Shakey the Robot in Lisp on pretty good hardware for the time. If you told Shakey to "push the block off the platform," it could do the task, but it would take 20 minutes of thinking beforehand. Whereas the Google cars can drive in real time at freeway speeds, and robots made by high school kids today can play soccer. How is that not more impressive?

Given how pathetic the Lisp machines were compared to what we've got now, what makes them so impressive to you?


> "sophisticated symbol shuffling" is sufficient to beat the world's best chess players

Try to beat even amateur players at 19x19 go. Von Neumann machines are inherently stupid; the domains you mention have very simple rules, which allow symbol-shifting systems to succeed. You may be able to convince a gullible person of the intelligence of a machine, but that is just cheating the test.

> Oof. You're talking about machines with less processing power than an iPhone.

I tend to agree with John McCarthy that "the computers of 30 years ago were fast enough if only we knew how to program them. Of course, quite apart from the ambitions of AI researchers, computers will keep getting faster." [1] The computers of the late eighties were a sufficiently fast platform for AI, if we just knew how to program intelligent behaviours into them properly. As such, the Lisp machine hardware was adequately fast, and since the software on them had considerable advantages over what we have today, they were able to do many things at about the same level as modern computers.

> The fact that people can think proves thinking is possible for mechanical systems; in the worst case we'll re-implement a human brain without understanding it.

The human brain has over a septillion atoms [2]. Someday we will duplicate the behaviour of these atoms in a computer, but I don't see that happening anytime soon. In general, I highly doubt there will be significant technological progress while our system of production for profit continues to produce recessions, depressions, and winters.

> Given how pathetic the Lisp machines were compared to what we've got now, what makes them so impressive to you?

The Lisp machines had a consistency and clarity of behaviour, resulting from the use of Lisp all the way down, that is absolutely unmatched. You could modify the behaviour of any object in memory, down to the machine level, using just Lisp. Every object was stored in a single address space. What modern computer system compares to the Lisp machines in these respects? I would love to know; I will be the first to adopt a sanely designed computer platform for my own uses.

However, what I have seen so far is that companies like Microsoft, Apple, and Google are actively trying to replace the programmable computer with displays for external cloud services. The prevalence of these private corporations which are devoted to the pursuit of short-term profit, as opposed to government research agencies such as DARPA, has significantly held back technological progress.

[1] http://www-formal.stanford.edu/jmc/whatisai/node1.html

[2] http://www.quora.com/Human-Brain/How-many-atoms-are-in-the-h...


>Try to beat even amateur players in 19x19 go...the domains you mention have very simple rules which allow symbol shifting systems to succeed.

Computers already play go at an amateur level now, 2-5 dan, and are gradually improving with better algorithms and better CPUs. But go is a game with extremely simple rules (simpler than chess) and an inherently simple possibility space (simpler than Jeopardy), so if somebody were to build a program tomorrow that could beat all the world's best go players, wouldn't you then just dismiss this as not demonstrating true intelligence, because we'd then know that go has "very simple rules which allow symbol shifting systems to succeed"?

Winning at Jeopardy is far more impressive with regard to demonstrating "intelligence" than winning at go would be. Jeopardy questions are limited only by the English language and human creativity; the relevant info that could be brought to bear in answering them is the entirety of the knowledge the humans playing the game might have, including knowledge of puns and wordplay. Whereas the go search space, while quite large, is something any programmer could model: just mappings of one 19x19 three-state grid to another. Go is theoretically solvable in a way that Jeopardy is not.
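To make that "quite large but modelable" claim concrete: each of the 361 points on a go board is empty, black, or white, so 3^361 is a simple upper bound on the number of board configurations (most of which are illegal positions), and a position is trivially representable in a few lines of code. A quick sketch:

```python
# Rough upper bound on the go state space: each of the 19x19 = 361
# points is empty, black, or white, so there are at most 3^361 board
# configurations (most are illegal, but this bounds the space).
BOARD_SIZE = 19
points = BOARD_SIZE * BOARD_SIZE
upper_bound = 3 ** points

# A position is trivially representable as a tuple of point states,
# e.g. 0 = empty, 1 = black, 2 = white.
empty_board = (0,) * points

print(points)                 # 361
print(len(str(upper_bound)))  # number of decimal digits in 3^361
```

The bound is astronomically large (well over 10^170), yet the representation fits in a tuple; the Jeopardy "search space" has no comparably crisp encoding, which is the commenter's point.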

>The human brain has over a septillion atoms

If the individual atoms were all individually crucial to producing thought, that would be relevant. But it seems rather unlikely that they are. We need to replicate the relevant properties of neurons, not their exact makeup. Suppose you need to replace a broken hipbone or jawbone. There's a whoooole lot of atoms in a piece of bone, but you can replace it with any compound that has suitable physical properties. What matters when replacing bone is characteristics such as strength, flexibility, and wear resistance. Whatever you replace the bone with will also have a lot of atoms, but knowing the exact number of atoms and where they were wasn't necessary to replacing the functionality.


> To be realistic, Ray Kurzweil's conviction that there will be strong AI in his lifetime is just wishful thinking. Feel free to prove me wrong.

Well that's what the bet is about.


Using any functional, practical definition of "artificial intelligence" that I've ever heard, we have certainly made a lot of progress since the late AI winter.

It sounds like you're using the unfortunate definition that essentially defines any task as "not requiring intelligence" the instant a machine is able to perform it well. This has been done with voice recognition, facial recognition, music composition, etc., and is actually one of the main reasons we even had the AI winter.


I don't think that's a very good argument. Okay, so let's say we used to think it took intelligence to recognize voice/face. Then we figured out a way to do it! Oh man, intelligence, right? Well, no. We realized that you can stick the voice through a spectrograph and measure the peaks, that's certainly not intelligence. And that you can detect the edges of bits of faces pretty easily and then compare various ratios for similarity. That's clever on the part of the algorithm designers but nothing intellect-related.

I've never heard anyone say that music requires intelligence. Creativity maybe. But I can make a very simple program that has a bunch of hard-coded music patterns that it picks from and nests randomly. It can 'creatively' output trillions of different songs and clearly has nothing even approaching intelligence.
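The "very simple program" described above takes only a few lines; the patterns, the nesting probability, and the function name are my own illustrative choices:

```python
import random

# A few hard-coded melodic patterns (note names); purely illustrative.
PATTERNS = [
    ["C", "E", "G"],
    ["D", "F", "A"],
    ["G", "B", "D"],
    ["A", "C", "E"],
]

def compose(length=4, seed=None):
    """'Creatively' build a song by picking random hard-coded patterns
    and occasionally nesting one as a repeated sub-pattern."""
    rng = random.Random(seed)
    song = []
    for _ in range(length):
        pattern = rng.choice(PATTERNS)
        if rng.random() < 0.3:  # nest: repeat the chosen pattern twice
            pattern = pattern * 2
        song.extend(pattern)
    return song

print(compose(seed=42))
```

With 4 patterns, an optional doubling, and a handful of slots, the output space is combinatorially huge, which illustrates the point: enormous 'creative' variety with nothing even approaching intelligence behind it.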

Intelligence is not about what an algorithm can mechanically do. It's about comprehensive world-modeling that can predict and communicate with other agents. If something can be coded in a month by a grad student to run on an 8086 then I feel comfortable saying it's not AI.

Edit: added the word mechanically to be clearer


When it comes to face/voice recognition, what makes you think people are doing anything more clever? Our brains seem to have edge and shape detection modules too, do they not? Is it not knowing the algorithm that makes what we do seem "intelligent"?


There seems to be a miscommunication here. I don't think people are doing anything particularly clever with regards to facial recognition. I think intelligence is based on complex, abstract, general reasoning. Physical object recognition is a useful input filter but not evidence of intellect.

It's not knowing the algorithm at all. I'm just pretty sure that the algorithm for intelligence is more complex/further from our knowledge than a contemporary grad student could do. I'd love to be proven wrong by a PhD thesis with attached AI.


What I meant to state in my post is that so far I have not seen "a lot of progress." Feel free to show me anything that may change my mind.


You are forgetting Clippy, the Office assistant.


But I already have, human dudes


Despite being so clever, Ray Kurzweil is also an obsessed idiot on the topic of AI and the Singularity.


While I wouldn't go so far as to call him an "idiot," I do think he should be shifting his focus towards emergent phenomena on the Internet rather than a single silicon mind in a box. He's stuck in a 20th century perspective.


We also have to remember that the Turing test isn't a real measure of intelligence. After all a computer following rules is NOT any more intelligent than the rules themselves.

For example, if someone gave me a ton of rules on how to convert sentences from English to another language, and the output was amazing, would that mean I'm intelligent? Not likely, just that I can follow the language conversion rules.

I think the issue is that too much emphasis is being put on Turing tests as a measure of intelligence when in fact it's really more a measure of how well a computer can follow conversations and social norms. Just because you can fool a real person doesn't mean you're intelligently interacting with that person. Just like if I can fool another person into thinking I can speak another language by following translation rules, it doesn't actually mean I can speak the other language at all!


I disagree with your conclusion that such a system is inherently not intelligent.

If you accept that humans are intelligent, and that they can judge that another human is intelligent by conversing with them across a text-only channel, then you run into a big problem by stating that a Turing-test-passing algorithm is unintelligent. To do so would expose the fact that your definition of "intelligence" secretly includes the clause "...and is a human," which makes "intelligent machine" a contradiction in terms. It is essentially an example of the No True Scotsman fallacy, because you're revealing a new facet of your claim when faced with an apparent counterexample.

If you're defining "intelligence" to be a purely human trait, then come right out and say so, and everyone will agree that on your terms a machine cannot be intelligent. Of course, I would argue that such a definition isn't very useful, since it basically means that "intelligent" and "human" are synonyms.


"If you accept that humans are intelligent, and that they can judge that another human is intelligent by conversing with them across a text-only channel, then you run into a big problem by stating that a Turing-test-passing algorithm is unintelligent. "

I would say that passing the Turing test is a necessary but insufficient measure of complete human intelligence. It would be astounding, and a major feat in the field, but there must be more to the definition of human intellect than simply carrying on a text conversation.


I would think it depends entirely on the subject matter of the text conversation. It's probably possible today to make a chat bot that can converse about (nothing other than) the weather.


If humans can't recognize intelligence, then we won't be able to come up with some other test to recognize intelligence.


Agreed. But we'd have to agree on what standard of intelligence we're considering. Turing is only one standard. And I would say not a very high one if being compared to human intelligence.


Eh... The Turing test (Turing was a man; the Turing test is a concept posed by him) is restricted in the way that it is precisely to eliminate irrelevant factors such as robotics.

The concept itself that is presented is quite sound: If you cannot tell that it's not intelligent, how can you say that it is not?

I cannot think of a better test. The only weak point, as I see it, is that it does not say anything one way or the other about intelligence that is fundamentally different from our own (for example, one that doesn't happen to use natural language).


I assume you're talking about the Chinese Room? (LINK: http://en.wikipedia.org/wiki/Chinese_room)

I actually agree, but for different reasons. I don't think human judges (that is, normal human judges, non-geeks) are a good measure of an entity's intelligence. I personally have managed to convince at least one person that I'm a machine.

Sufficiently good pattern matching to produce reasonable-enough-sounding sentences would probably fool most casual observers. For me, the validity of a Turing test relies heavily on how long, and under how much pressure, the AI has to keep up its illusion of humanity.

A more objective test of intelligence would be nice though.


Yes, I meant the Chinese Room. I couldn't remember the exact term. Thank you for that!

One test I heard of that sounded pretty promising is the ability of the computer to discover new patterns in existing data. Not just pattern matching, because that's possible without intelligence, but discovering new patterns in the existing data and using those new patterns to correctly predict the future. Now that's incredibly hard to do!


Isn't the Turing Test strictly more powerful than the test you describe, because it can be administered as part of a Turing Test? It seems like humans find new patterns all the time.



