If there's enough of the "essence of reason" represented in human text, then it's possible in theory that some sufficiently large LLM could eventually grok it and fully generalize over rational thought (i.e., AGI).
Personally I'm extremely skeptical. There's quite a lot of machinery in the brain other than language, and conscious rationality also runs in a constant loop, not a single forward pass through the weights.
I can't disprove the possibility. But if I were a betting man, I'd go all in on LLM improvements following a sigmoid curve rather than an exponential one.
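The sigmoid-vs-exponential distinction matters because the two curves are nearly indistinguishable early on: a logistic curve tracks an exponential closely until it approaches its ceiling, then flattens. A minimal sketch (the parameter values are illustrative, not a model of actual LLM progress):

```python
import math

def exponential(t, r=1.0):
    """Unbounded exponential growth: each step multiplies by e**r."""
    return math.exp(r * t)

def sigmoid(t, cap=100.0, r=1.0, midpoint=5.0):
    """Logistic (sigmoid) growth: exponential-looking at first,
    then saturating toward `cap` as t passes `midpoint`."""
    return cap / (1.0 + math.exp(-r * (t - midpoint)))

# Early regime: the sigmoid's step-to-step growth ratio is close to
# the exponential's (e ~ 2.718 per unit time with r=1).
early_ratio = sigmoid(1) / sigmoid(0)

# Late regime: growth has nearly stopped even though the curve
# looked exponential not long before.
late_ratio = sigmoid(10) / sigmoid(9)

print(f"early growth ratio: {early_ratio:.3f}")  # close to e
print(f"late growth ratio:  {late_ratio:.3f}")   # close to 1
```

The point of the sketch: observers sitting on the early part of a logistic curve can't easily tell it apart from an exponential, which is exactly why extrapolating current LLM progress is risky in both directions.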