Before the first nuke (or the first H-bomb, I don't remember which) was detonated, there was a study about the risk of igniting the entire atmosphere. See, at the time they were quite confident the nuke would just be a huge bomb, but there was this little uncertainty they needed to sort out. In the end, they concluded that detonating the nuke would not set the whole atmosphere ablaze. It didn't.
The lesson is this: we had only one try. If nukes had caused the atmosphere to burn up in a giant blaze, we would all be dead by now. If you do something, anything, you'd better make sure it won't kill us all.
Horseless carriages? Sure, these might kill a few people here and there[1], but we're pretty sure they won't kill us all in one blow.
Intelligence, on the other hand, is way more dangerous. Human intelligence designed nukes in the first place, remember? AI can do way worse. Even if we model it after the human brain, if it's smart enough to do what we did, then it will be able to design another such AI, only slightly better, and so on until it takes over the world. "Taking over the world" may sound enormous, but it really isn't. Imagine for a minute a small group of cavemen vs. an army of chimps. Well, if you give the cavemen a chance to prepare, the chimps are toast: the cavemen have spears, fire, better communication… Now imagine the AI is smarter than us by the same margin we're smarter than chimps. Same thing: if it's not safe, we're toast.
[1]: http://www.statisticbrain.com/car-crash-fatality-statistics-...