All of Kurzweil's predictions are based on extrapolating exponential growth.
That's all very well, but exponential growth in physical systems is usually restricted within limits. In such a system the negative feedback may also grow exponentially, which means that although it is initially too small to notice, once the growth passes some boundary the negative feedback becomes significant and the overall growth is no longer exponential.
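This feedback-limited pattern is the classic logistic curve. A quick sketch (all parameters are toy values, not calibrated to anything real) shows how growth that looks purely exponential at first stalls once the feedback term catches up:

```python
# Toy logistic model: growth rate r, negative feedback proportional to x/K.
# All parameters are illustrative, not calibrated to any real technology.

def simulate(r=0.1, K=1_000_000.0, x0=1.0, steps=300):
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        # While x << K the feedback term (x / K) is negligible and each
        # step multiplies x by roughly (1 + r) -- pure exponential growth.
        xs.append(x + r * x * (1 - x / K))
    return xs

xs = simulate()
early = xs[10] / xs[9]    # close to 1.1: looks exponential
late = xs[-1] / xs[-2]    # close to 1.0: feedback has caught up, growth stalls
```

The point of the sketch is that an observer watching only the early steps sees textbook exponential growth; nothing in that data reveals the ceiling.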
Unfortunately it's impossible to tell where we are on the growth curve (although some argue that probability suggests we are closer to the end; see http://en.wikipedia.org/wiki/Doomsday_argument). Kurzweil assumes we are at the beginning of the curve. We could instead be near the end, where the negative feedback is about to take over and growth will slow.
There will be limits. The speed of light could be a hard limit on computing speed; ultimately the heat death of the universe could be the hard limit, but there is a limit somewhere. The question is how close we are to the limit, and that is something we are only likely to know when we reach it.
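For the speed-of-light point, a back-of-envelope sketch (the die size is a hypothetical illustration) of the ceiling it puts on synchronous cross-chip signalling:

```python
# Back-of-envelope only: vacuum light speed as a generous upper bound on
# signal propagation, and a hypothetical 3 cm die as the distance.
c = 3.0e8          # speed of light, m/s
die_size = 0.03    # m (illustrative, not any particular chip)

crossing_time = die_size / c      # time for one edge-to-edge traversal
max_clock_hz = 1 / crossing_time  # ceiling if each cycle needed a crossing
# Roughly 10 GHz -- and real on-chip signals travel slower than vacuum light.
```

It's a crude bound (real designs pipeline signals rather than crossing the die each cycle), but it shows how a physical constant turns into an engineering limit.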
Building an intelligent machine is still possible within these limits. There is an existence proof: the human brain.
But you are right that we could be further away from that than we think, and also that the intelligence limit for machines might not be far beyond human intelligence. (Personally, though, I think it should be theoretically and practically possible to build much more intelligent machines at some point.)
I agree with you entirely; I'm sure strong AI will be reached. Even if we dropped down to linear growth, I think strong AI isn't that far away. I just think exponential growth can't continue indefinitely as Kurzweil predicts.
But seriously, I am not sure it can be said to be near or far. Think of it by analogy to physical capability. Have we developed 'strong' artificial physical ability? Do we define and measure our physical machines in comparison to human physical abilities? Was the aim of inventing machines just to make artificial humans? No, we develop every kind of physical machine, and in fact we mostly aim to make the kinds of things that are not like what humans do.
So why shouldn't informational machines be the same? Is what humans do with information the only thing possible to do, and the only thing we might want to do? No. We won't be making AI to be like humans much, but for a great range of other, non-human-like, applications. Using human intelligence as a single simple measure just will not work or mean anything much.
We think of 'strong AI' as being an ultimate image of the future, but really it diverts us from imagining the much greater range of possibilities the future really holds.
I think his proposition is roughly, that a singular technology grows on an S-curve, but as each one technology slows another picks up the baton and runs with it, and the totality (sum of S-curves) has historically looked exponential, and promises to continue.
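That stack-of-S-curves picture is easy to sketch. In this toy model (the 10-step spacing and 4x ceiling growth are arbitrary illustration values, not data), each technology follows a logistic curve, each later one saturates higher, and the sum grows by a roughly constant factor per time window, which is what an exponential looks like:

```python
import math

def s_curve(t, midpoint, ceiling):
    # One technology: logistic S-curve that saturates at `ceiling`.
    return ceiling / (1 + math.exp(-(t - midpoint)))

def total(t, n=8):
    # Successive technologies: each starts later and tops out higher.
    # The 10-step spacing and 4x ceiling growth are arbitrary choices.
    return sum(s_curve(t, midpoint=10 * k, ceiling=4 ** k) for k in range(n))

# If the envelope were exponential, the ratio across equal time windows
# would be constant. Here it hovers around 4.
ratios = [total(t + 10) / total(t) for t in range(10, 60, 10)]
```

Of course, the near-constant ratio here is baked in by the choice that each ceiling is 4x the last; the model illustrates the proposition rather than proving it.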
Ahh, now that is an interesting angle, but that would suggest that the rate of introduction of new technologies is linear and that every new technology must initially have a period of exponential growth.
(Or, I suppose, that the rate of introduction of technologies with initially exponential growth rates is linear. You could ignore those without an exponential growth rate provided that a constant number had one.)
I think basically at any one time there are a lot of competing new-born technologies, and it's only in retrospect that we can see which of them will go exponential through a feedback cycle of improvement and growing adoption.
It's not so much that technology always goes exponential. It's that in retrospect, we notice the ones that did.
This line: "Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly."
He doesn't qualify his law with 'exponentially up to a certain point' or 'is currently happening'. He claims it is a law: that all technological progress currently happens, and will always happen, exponentially.
Perhaps I am misreading him, but I interpret his predictions as claims that exponential growth will always continue.
Anyway, that aside: you can't use an exponential curve to make predictions about the future if you accept that the curve will end at some point. If you accept that the curve will change, and you can't know the point of change, then you can't use it to make a prediction.
Non-dualist priors haven't explained it either. The nature of consciousness is as mysterious as it ever was. We have cracked a wide range of the easy problems, but the hard problem remains.
The "hard problem" of consciousness only exists if you start out with a dualist prior. Otherwise it is mysterious in the same way as the non-symmetry of matter and antimatter is mysterious -- it is not explained.
Lightning may look perfectly suitable for scientific investigation now, but it was as much a "hard problem" in other times.
The hard problem does not depend on our scientific understanding of the material world. Comparing the current situation in philosophy to the situation in physics 400 years ago is a false analogy, which doesn't take the fundamental difference between science and philosophy into consideration. Philosophy is about how we humans conceive the world, while physics attempts to describe a world separate from our perception. As the failure of the object-subject duality has shown, that is impossible. There is no 'real', 'external', 'absolute', 'underlying' world to describe, because talking about it doesn't make any sense. We aren't brains in an 'absolute reality'. If you keep thinking about it in that way, you fundamentally misunderstand the key philosophical issues surrounding the hard problem.
I mean, I was going to write "'mere'" to emphasise that the use of 'mere' wasn't belittling, since the scope of machines working on established physical principles is clearly pretty huge.
He doesn't assume all that much. In his book The Singularity Is Near, Kurzweil looks at the physical limits of computing to determine how long we can go before Moore's Law comes to an end. He concludes that we've got another fifty years, or possibly seventy if certain technologies prove feasible. As I recall he doesn't count on quantum computing much, but does think we'll manage reversible computing to solve heat-dissipation issues.
That's just computation, but computation drives much of our other technology, and will even more once computers get smarter than people. Kurzweil estimates the timeframe for that based on a range of estimates for the computational capacity of the brain.
I think that with the advent of quantum computers, a limit in computing power is still far away. Nonetheless, what has brute computing power enabled us to do so far? There have been a few prestigious projects, e.g. Deep Blue, SETI, CERN, molecular folding, etc., which take advantage of this power. But most research projects profit little from an increase in computing power. I think new software and algorithms play a bigger role in enabling machines to solve the more "human" tasks.
Increases in computing power affect the whole chain. You're looking at the high end, where the limit of computing power is pushed, but you also need to remember that computing power for a given cost increases across the whole spectrum of computing devices.
My phone can listen to what I say, translate it into French, and speak it back to me, albeit leveraging external compute power to do most of the heavy lifting. But it wouldn't have been possible to provide that external compute power at scale 20 years ago.
I'm also sure that countless small innovations in bioinformatics add up collectively to significant changes over time, and those individual innovations are powered by the more broadly available increases in computing power.
DNA sequencing, and everything we do with it, would be pretty painful without the computing power we have.
We've also gotten the ability to do much better fluid dynamics modeling, astrophysics simulations (whether you believe them or not), climate models (likewise). I've made use of the processing power we have now to do a bunch of "experimentation" in representation theory, though that's not obvious from the resulting writeup, which is more or less algebraic proof.
In general, for a lot of research areas where we think we have a decent model of some of the things that are going on, more computing power means more ability to use a computer to look for things to actually try in the lab (and thus refine the model), as well as more ability to collect and handle data.
And then there are the mundane bits, like being able to find existing work more easily, being able to typeset and disseminate your papers more easily, and so forth.
I wasn't aware quantum computers had actually been invented yet; they are still theoretical devices.
Quantum computation has been explored, but as yet we don't have a computer capable of executing the quantum algorithms.
That aside, all I'm saying is that exponential growth has a limit. Whether that growth is measured in MIPS or in algorithm performance doesn't really matter; I don't think the growth will continue exponentially.
"The question is how close are we to the limit and that is something we are only likely to know when we reach it."
You've posed the question and then immediately explained why it's fruitless to ask.
Even if you are right that there must be limits: if you have no idea when his models are likely to break down, then your skepticism is no more solid than his prediction.
Sure, I think Kurzweil does a decent job in framing a future within which there's plenty of room for innovation with no 'limits' to exponential growth in sight.
This whole issue of limitations is a separate prediction in and of itself, and it's not his.
My scepticism is of the implicit claim that you can make predictions by extrapolating exponential growth curves. You can't; you can only make predictions up to the point where the exponential growth breaks down, and since you can't predict that point before it happens, you can't make predictions.
I'm not saying he is wrong about the next 100 years, the singularity or AI. I'm saying he may be wrong about the short term, and he certainly can't be right indefinitely.
"I'm saying he may be wrong about the short term, and he certainly can't be right indefinitely."
Sure he can, since you're the one who added the claim that it will continue indefinitely. What he says instead is that progress will continue well beyond our ability to predict what the resulting society will look like, due to the claimed fact that it will, for instance, include things like true AIs and brain uploading. I do not recall him talking about where the progress will stop, probably because it would be meaningless to us anyhow. If an AI from 2200 came to us now and tried to explain the latest cutting-edge trends in the research into the ultimate limits of cognition, we wouldn't get past the first paragraph.
Don't worry, you're hardly the only one to dismiss his claims without actually stopping for a moment and figuring out what he's actually claiming. I'm not exactly a strong Singulatarian myself, but a lot of people really need to stop reading other people's summaries of what he says (very few are accurate enough to let you evaluate what he is saying) and read past his first couple of paragraphs before flipping the bozo bit. He may not be right, but he's definitely not an idiot.
Ok, so you're saying that Kurzweil's prediction is simply that exponential growth will continue until beyond the singularity?
If that is the case, I don't really see the logic in his claims. I understand that he has extrapolated an exponential curve, and I can see the potential of future tech if that exponential growth were to continue, but I don't see what basis he has to claim that the exponential curve will not end tomorrow. I'm not saying that it will end tomorrow; I'm saying you can't base a prediction on the idea that exponential growth will continue, because you have absolutely zero data about when its likely end is.
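To make that concrete: fit a pure exponential to the early stretch of a logistic curve (where the two are nearly indistinguishable) and extrapolate, and the forecast can be off by orders of magnitude. A toy illustration with made-up parameters:

```python
import math

K = 1000.0  # the true ceiling, unknown to the forecaster

def logistic(t):
    return K / (1 + math.exp(-0.1 * (t - 80)))

# "Observed" history: the early stretch, where the logistic curve is
# still nearly indistinguishable from a pure exponential.
ts = list(range(40))
ys = [math.log(logistic(t)) for t in ts]

# Least-squares fit of log(x) = a + b*t, i.e. assume pure exponential.
n = len(ts)
tbar = sum(ts) / n
ybar = sum(ys) / n
b = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
    sum((t - tbar) ** 2 for t in ts)
a = ybar - b * tbar

forecast = math.exp(a + b * 160)  # extrapolated value at t = 160
actual = logistic(160)            # what the logistic actually does there
# The extrapolation overshoots the real curve by orders of magnitude.
```

The fit is excellent on the observed range; the failure only appears past the inflection point, which is exactly the point the early data cannot reveal.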
"Ok, so you're saying that Kurzweil's prediction is simply that exponential growth will continue until beyond the singularity?"
Why don't you stop waiting for me to tell you and spend some honest time with the ideas? Of course you don't see how there is any logic to his claims, you haven't seen his claims at all.
Or, alternatively, realize you don't know what they are and decide not to worry about it. This is fine too. No joke. There are all kinds of times when I take this option. It's not bad to not know somebody's opinions, or criticize them when you know them; the problem is in the criticism when you don't actually know them.
See - http://en.wikipedia.org/wiki/Exponential_growth#Limitations_...