I was about to write a reply claiming that you're different from autocomplete because you take input from more sources than just the words you've said before (e.g. your vision), but actually I can't see how that's much different from a language model. The approach seems the same, and all that's really different is the shape of the input data.
But this uncovers difficult questions about free will. If we're all just autocompleting based on a combination of the world around us, our internal state, and the physical laws, then what even is intelligence anyway? This view reduces thought to nothing more than an interesting dust storm.
Still, I find the original argument compelling, if not logically convincing. There does seem to be something missing from GPT-3 that differs fundamentally from human intelligence or AGI. But maybe that's an illusion.
Edit: I don't think you should have been downvoted, since your question is valid and constructive in my view.
That's pretty much it. I do believe it's possible to actually develop "will", but almost nobody thinks they need to work on such things. They confuse being a programmed robot with being a programmer.