I've felt this way about every Sam Altman piece that's been posted in the past week (and possibly every one I've ever read). And I feel guilty because PG speaks so highly of him. And then I feel guilty for feeling guilty.
Don't feel guilty. I'm blogging to practice writing. It surprises me at least as much as you when articles like this do well on HN, and it makes me feel embarrassed that I didn't make them better. (There are some posts I work really hard on and hope people like, but this was not one of them.)
If you're interested, you should read On Intelligence[1] by Jeff Hawkins (inventor of the Palm Pilot). In it, Hawkins presents a compelling theory of how the human brain works and how we can finally build intelligent machines. In fact, Andrew Ng's deep learning research draws on Hawkins's "one algorithm" hypothesis.
I think you are an excellent blogger and am glad that you are posting to HN.
That said, I hope that you will think more critically and clearly before publishing vague, fuzzy, uninformed, and unlogical thoughts (not illogical, but unlogical) like the following:
>The biggest question for me is not about artificial intelligence, but instead about artificial consciousness, or creativity, or desire, or whatever you want to call it. I am quite confident that we’ll be able to make computer programs that perform specific complex tasks very well. But how do we make a computer program that decides what it wants to do? How do we make a computer decide to care on its own about learning to drive a car? Or write a novel?
Consciousness, creativity, and desire are all quite distinct things. It is very important for people trying to grapple with the coming reality of artificial intelligence to be able to distinguish between them.
There have been computer programs that decide what they want to do for decades. Perhaps you were thinking of a specifically human-like kind of decision process, but if so, you must say so and reason accordingly. Otherwise you are just conveying fuzzy thoughts, and doing so in the context of real scientific undertakings whose results bear directly on those thoughts.
A computer deciding what to care about, what to learn, or what behavior to engage in "on its own" is related to the previous topic you mention and, in and of itself, does not require artificial general intelligence.
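To make that concrete, here is a minimal sketch (toy code, with hypothetical task names and made-up rewards) of a program that "decides on its own" what to spend its time on: an epsilon-greedy agent that picks which task to practice based on the rewards it has observed. Nothing about it is generally intelligent; autonomous goal selection in this weak sense is a few lines of bookkeeping.

    import random

    TASKS = ["drive a car", "write prose", "play chess"]
    MEANS = {"drive a car": 1.0, "write prose": 0.6, "play chess": 0.3}  # stand-in environment

    class CuriousAgent:
        """Toy agent that autonomously picks which task to pursue."""
        def __init__(self, epsilon=0.1):
            self.epsilon = epsilon
            self.total = {t: 0.0 for t in TASKS}  # cumulative reward per task
            self.tries = {t: 0 for t in TASKS}    # attempts per task

        def choose(self):
            # Explore occasionally; otherwise exploit the best-known task.
            if random.random() < self.epsilon or not any(self.tries.values()):
                return random.choice(TASKS)
            return max(TASKS, key=lambda t: self.total[t] / max(self.tries[t], 1))

        def observe(self, task, reward):
            self.total[task] += reward
            self.tries[task] += 1

    agent = CuriousAgent()
    for _ in range(1000):
        task = agent.choose()                  # the agent's own "decision"
        agent.observe(task, random.gauss(MEANS[task], 0.1))
    print(max(TASKS, key=agent.tries.get))     # the task it has come to "care about"

The interesting question is what richer sense of "deciding" you have in mind, and that is exactly what needs to be pinned down.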
How do we make a computer program write a novel? I think that is a good question, and I believe an effective answer to it _might_ fall in the category of 'real' artificial general intelligence. However, I think it will probably soon be possible to create 'narrow' AIs that can generate novels without being generally intelligent. http://www.nytimes.com/2011/09/11/business/computer-generate...
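As a taste of how narrow such a generator can be, here is a toy sketch of about the simplest text generator there is, a word-level Markov chain (not the technique used by the systems in the article above; just an illustration of narrowness). It produces plausible-looking prose with no understanding whatsoever.

    import random
    from collections import defaultdict

    def build_model(text, order=2):
        # Map each run of `order` consecutive words to the words seen after it.
        words = text.split()
        model = defaultdict(list)
        for i in range(len(words) - order):
            model[tuple(words[i:i + order])].append(words[i + order])
        return model

    def generate(model, order=2, length=40):
        state = random.choice(list(model))  # start from a random seen state
        out = list(state)
        for _ in range(length):
            followers = model.get(tuple(out[-order:]))
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = open("some_novel.txt").read()  # hypothetical training text
    print(generate(build_model(corpus)))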
Artificial general intelligence is not coming in the next, say, 30 years. And I say that as a big fan. A quick analogy: note that we can't even build an ant. It will take decades after that accomplishment to build a human-level intelligence.