Arguing over terminology like "AGI" and the verb "to know" is a waste of time. The question is what tools can be built from these models and how people can use those tools.
I thought a forum of engineers would be more interested in the practical applications and possible future capabilities of LLMs than in all these semantic arguments about whether something really is knowledge, really is art, or really is perfect.
I'm directly responding to a comment discussing the popular perception that we, as a society, are "steps away" from AGI. It sounds like you agree that we aren't anywhere close to AGI. If you want to discuss the potential for LLMs to disrupt the economy, there's definitely space for that discussion, but that isn't the comment I was making.
Whether we should call what LLMs do "knowing" isn't really relevant to how far away we are from AGI. What matters is what they can actually do, and they can clearly do at least some things that show what we would call knowledge if a human did it. I think this is just humans wanting to feel we're special.
>they can clearly do at least some things that show what we would call knowledge if a human did it
Hard disagree. LLMs merely present the illusion of knowledge to the casual observer. A trivial cross-examination is usually sufficient to pull back the curtain.