100 is a nice, round, even number. It's also a marketing ploy. The truth is, none of these guys can make an accurate prediction about what AI can do in 20 years.
I don't know the researchers in question, but Turing's paper [0] was surprisingly predictive in 1936, decades before any of this came to light. It's not out of the question that they could set the direction of research for a decade or two into the future.
I agree, 100 as a number is a marketing ploy. But I like the idea of a sustainable long term mission, rather than the funding rat race and trend chasing you tend to see.
> I believe that in about fifty years time it will be possible to programme computers with a storage capacity of about 10^9 to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.
He was mostly right: off by only some 25 years, or 50% more than his estimate. A really great prediction for a field that changed so fast. But in the very next sentence:
> Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
That was quite off the mark.
Anyway, Turing's prediction was for 50 years in the future, which is orders of magnitude easier than 100 years in the future. And nearly all of the predictions from that era were completely wrong, so what makes you think these people are the ones of our time who'll get their predictions right?