Progress hyperaccelerates, and every hour brings a century's worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.
I am a theist because I believe that human beings - flawed, fallible, finite, and mortal - derive ultimate meaning and hope from believing in something that is bigger than themselves. Now, some will say that this is a delusion, and that maintaining an intelligent outlook on life requires sticking with what is observable, verifiable, and controllable as a way both of explaining and of living life. All of which is fine, as my point here is not to enter into non-hacker-related topics.

But, that said, when I read statements like those quoted above from this article, I can't help but think that they reflect merely the scientific equivalent of needing to believe in something bigger than ourselves as part of retaining hope in this life - taking the form, in Singularity thinking, of something akin to attaining human perfectibility via an exponentially expanding knowledge base that presumably will be applied by humanity toward good and not toward evil. Obviously, this view is grounded in the science of what computers have done and potentially can do in the future, but the final step in the analysis - that immortality will be achieved and that (it would seem) all major human problems will be solved through this superior intelligence - strikes me as being more about faith than about science.
I don't think the part about immortality and the solution to all human problems has anything to do with faith or science. It's about motivation. These are things humans want. The question then is how best to achieve a future in which those goals are fulfilled. Transhumanists think about how to reach those goals via technology - by the technical manipulation of the strictly material world. Sure, you can say that it takes some amount of faith to believe that these goals are achievable via technology before it has happened, but I think it's closer to having a vision (in the same sense that Apple had a vision for what tablet computing could be like). But that is very, very different from believing that one actually attains the goal by having faith.
> I am a theist because I believe that [we] derive ultimate meaning and hope from believing in something that is bigger than [us].
Wait a minute, you admit that you believe in something because it feels good? It sounds like you want to believe in God, but actually don't really. I'd like to test that, so please forgive the following troll.
God doesn't exist, and those who believe it does are wrong (yes, my belief is that strong).
Now, is your belief so strong that you feel the urge to respond with something like "no, you're wrong, God does exist"? I don't ask for evidence (the internet has plenty); just a yes or a no, followed, if you wish, by your estimated probability that God exists.
It seems like just an incorrect reappropriation of “theism”, really.
I could make a similar argument, though without a religious undertone, about a belief in a greater-than-whole (or struggle for one, if that sounds better).
I agree. This is a dangerous sort of argument: believing something - particularly something so core to human behavior - simply because it is convenient and satisfying.
In fact, this type of justification is at the root of many of the terrible things that have happened in the world - from slavery to genocide.
Are you a terrible person? Maybe not, but only because it doesn't feel right.