> Basically, transformer models are the best for NLP.
Yes, this year. While transformers certainly represent a breakthrough in the NLP community and have shaken up the state of the art again, I don't really see how you go from that to the "computers will understand us" conclusion, to be honest. People said the same during the word2vec excitement, and what we got out of that was incremental results (which is not bad; it's in the nature of things, really).

We can already build models that do everything you describe as single tasks. While that's exciting, it's not going to lead to the singularity. We've got a long way to go in terms of understanding models, making them computationally tractable, and making them do what we want in the first place without resorting to hoping that our unsupervised model learns something useful. It's likely that the attention mechanisms we see today will be a large part of that, but I'm honestly a bit baffled by the "People are vastly underestimating the changes that are about to come from NLP" part. They're not; people already think today's AI is magic, and I don't think it's beneficial to reinforce that. Speech is nuanced; we're making good progress in many areas, but we're not on the cusp of any revolutionary change in computational understanding, really.