I am not easily impressed. We wouldn't have any need for driverless cars in the first place if we just had well-designed cities. Furthermore, Siri and Watson are just symbol-shifting applications running on iOS and SUSE Linux. They are not much more impressive than similar symbol-shifting applications that were available a few decades ago.
I don't believe that the von Neumann architecture, on which all the applications you mentioned run, is ever going to yield anything more than sophisticated symbol-shuffling systems which are actually totally stupid behind the scenes. At least before the AI winter, we had impressive AI hardware, including the Lisp machines.
Now, just because we have done a pretty poor job doesn't mean things have to continue this way in the future. DARPA is currently working to develop memristors, which may become the future of AI. Sadly, DARPA isn't getting nearly enough funding.
If "sophisticated symbol shuffling" is sufficient to beat the world's best chess players, the world's best Jeopardy players, and the world's best pilots, and to safely drive a car while following all the rules of the road, what makes you think it's not sufficient to hold a human-like conversation? I'm pretty sure people are also what you would call "actually totally stupid behind the scenes" - the main difference is that we didn't design human brains, so we aren't (yet) able to understand much of what they're doing. But understanding isn't even necessary for replication.
The fact that people can think proves thinking is possible for mechanical systems; in the worst case we'll re-implement a human brain without understanding it.
> At least before the AI winter, we had impressive AI hardware, including the Lisp machines.
Oof. You're talking about machines with less processing power than an iPhone. Yeah, my dad programmed Shakey the Robot in Lisp on pretty good hardware for the time. If you told Shakey to "push the block off the platform" it could do the task, but would take 20 minutes of thinking about it beforehand. Whereas the Google cars can drive in real time at freeway speeds and robots made by high school kids today can play soccer. How is that not more impressive?
Given how pathetic the Lisp machines were compared to what we've got now, what makes them so impressive to you?
> "sophisticated symbol shuffling" is sufficient to beat the world's best chess players
Try to beat even amateur players at 19x19 go. Von Neumann machines are inherently stupid; the domains you mention have very simple rules, which is what allows symbol shifting systems to succeed in them. You may be able to convince a gullible person of the intelligence of a machine, but that is just cheating the test.
> Oof. You're talking about machines with less processing power than an iPhone.
I tend to agree with John McCarthy that "the computers of 30 years ago were fast enough if only we knew how to program them. Of course, quite apart from the ambitions of AI researchers, computers will keep getting faster." [1] The computers of the late eighties were a sufficiently fast platform for AI, if we just knew how to program intelligent behaviours into them properly. As such, the Lisp machine hardware was adequately fast, and since the software on them had considerable advantages over what we have today, they were able to do many things at about the same level as modern computers.
> The fact that people can think proves thinking is possible for mechanical systems; in the worst case we'll re-implement a human brain without understanding it.
The human brain has over a septillion atoms [2]. Someday we will duplicate the behaviour of these atoms in a computer, but I don't see that happening anytime soon. In general, I highly doubt there will be significant technological progress while our system of production for profit continues to produce recessions, depressions, and winters.
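For what it's worth, the "over a septillion" (10^24) figure checks out on a back-of-the-envelope basis. A rough sketch, under my own assumptions (brain mass ~1.4 kg, approximated as pure water for counting purposes):

```python
# Order-of-magnitude check of the "over a septillion atoms" claim.
# Assumptions (mine, not the poster's): brain mass ~1.4 kg, modelled
# as water (18 g/mol, 3 atoms per molecule) for a ballpark count.
AVOGADRO = 6.022e23

brain_mass_g = 1400
moles_of_water = brain_mass_g / 18       # ~78 mol
atoms = moles_of_water * 3 * AVOGADRO    # ~1.4e26 atoms

print(f"{atoms:.1e}")  # roughly 1.4e+26, i.e. ~100x a septillion
```

So the estimate is actually conservative by about two orders of magnitude, which only strengthens the point that atom-by-atom simulation is far off.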
> Given how pathetic the Lisp machines were compared to what we've got now, what makes them so impressive to you?
The Lisp machines had a consistency and clarity of behaviour, resulting from the use of Lisp all the way down, that is absolutely unmatched. You could modify the behaviour of any object in memory, down to the machine level, using just Lisp. Every object was stored in a single address space. What modern computer system compares to the Lisp machines in these respects? I would love to know; I will be the first to adopt a sanely designed computer platform for my own uses.
However, what I have seen so far is that companies like Microsoft, Apple, and Google are actively trying to replace the programmable computer with displays for external cloud services. The prevalence of these private corporations which are devoted to the pursuit of short-term profit, as opposed to government research agencies such as DARPA, has significantly held back technological progress.
>Try to beat even amateur players in 19x19 go...the domains you mention have very simple rules which allow symbol shifting systems to succeed.
Computers already play go at an amateur level now - 2-5 dan - and are gradually improving with better algorithms and faster CPUs. But go is a game with extremely simple rules - simpler than chess - and an inherently simple possibility space - simpler than Jeopardy - so if somebody were to build a program tomorrow that could beat all the world's best go players, wouldn't you then just dismiss this as not demonstrating true intelligence, because we'd then know that go has "very simple rules which allow symbol shifting systems to succeed"?
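The "better algorithms" behind that dan-level play are Monte Carlo playout methods (MCTS/UCT in the real programs). As an illustrative sketch only - using tic-tac-toe as a stand-in, since full go rules (captures, ko) would swamp a forum post - here is the flat-Monte-Carlo core idea: score each legal move by how often random playouts from it end in a win.

```python
import random

# Flat Monte Carlo move selection, the simplest ancestor of the MCTS
# used by dan-level go programs. Tic-tac-toe stands in for go here;
# the board is a list of 9 cells holding 'X', 'O', or None.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def playout(board, player):
    """Play uniformly random moves to the end; return 'X', 'O', or None."""
    board = board[:]
    while True:
        w = winner(board)
        if w:
            return w
        moves = [i for i, v in enumerate(board) if v is None]
        if not moves:
            return None  # draw
        board[random.choice(moves)] = player
        player = 'O' if player == 'X' else 'X'

def best_move(board, player, n_playouts=200):
    """Pick the move whose random playouts win most often for `player`."""
    moves = [i for i, v in enumerate(board) if v is None]
    def score(m):
        b = board[:]
        b[m] = player
        opponent = 'O' if player == 'X' else 'X'
        return sum(playout(b, opponent) == player for _ in range(n_playouts))
    return max(moves, key=score)

# With X at 0,1 and O at 3,4, playouts reliably steer X to the winning cell 2.
board = ['X', 'X', None, 'O', 'O', None, None, None, None]
print(best_move(board, 'X'))
```

Real go engines of that era added a tree on top of these playouts (UCT) plus pattern-guided playout policies, but the statistical core is the same.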
Winning at Jeopardy is far more impressive with regard to demonstrating "intelligence" than winning at go would be. Jeopardy questions are limited only by the English language and human creativity; the relevant info that could be brought to bear in answering them is the entirety of knowledge the humans playing the game might have, including knowledge of puns and wordplay. Whereas the go search space, while quite large, is something any programmer could model - just mappings of one 19x19 three-state grid to another. Go is theoretically solvable in a way that Jeopardy is not.
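To make "quite large, but something any programmer could model" concrete, here is the two-line arithmetic: each of the 361 points takes one of three states, giving an upper bound of 3^361 board arrangements (the figure below ignores legality, capture rules, and symmetry, so it overcounts):

```python
import math

# Upper bound on 19x19 go board arrangements: 3 states per point.
go_upper_bound = 3 ** (19 * 19)

# About 10^172 - astronomically large, yet trivially enumerable in
# principle, which is the sense in which go is "theoretically solvable".
print(round(math.log10(go_upper_bound)))  # ~172
```

Compare the common Shannon-era estimate of roughly 10^47 legal chess positions: go's space is vastly bigger, but both are closed, fully specified grids, unlike the open-ended space of English-language Jeopardy clues.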
>The human brain has over a septillion atoms
If the individual atoms were all individually crucial to producing thought, that would be relevant. But it seems rather unlikely that they are. We need to replicate the relevant properties of neurons, not their exact makeup. Suppose you need to replace a broken hipbone or jawbone. There's a whoooole lot of atoms in a piece of bone, but you can replace it with any compound that has suitable physical properties. What matters when replacing bone is characteristics such as strength, flexibility, and wear resistance. Whatever you replace the bone with will also have a lot of atoms, but knowing the exact number of atoms and where they were wasn't necessary for replacing the functionality.
http://www.engadget.com/2009/07/14/are-memristors-the-future...
To be realistic, Ray Kurzweil's conviction that there will be strong AI in his lifetime is just wishful thinking. Feel free to prove me wrong.