>To me your answer seems like a knee-jerk reaction to the "AI hype" but if you look at how things evolved over the past year
It's not a knee-jerk reaction; as you said, it's been two years of nonstop AI hype. I have used every chatbot model from OpenAI (3.5, 4, 4o, even o1) and a few from other companies as well. I've used code copilot tools. I have yet to come away anything but disappointed.
> there's a clear indication that these issues will get ironed out, and the next iterations will be better in every way
On the contrary, there's NO indication of meaningful progress since the release of GPT-3.5. There's incremental progress, sure, as models get larger and larger and things get tweaked and polished, but NO breakthrough and NO indication of an imminent one. Everything points to the current SotA being, more or less, as good as it gets with the transformer architecture.
> now you can readily augment LLMs answers with context in the form of validated, sourced and "approved" knowledge.
Not sure what you mean by this.