This is a fair argument, but it's rapidly becoming a non-argument.
LLMs have come a long way since GPT-4.
The idea that they’ll always value quick answers, and always be prone to hallucination seems short-sighted, given how much the technology has advanced.
I’ve seen Claude do iterative problem solving, spot bad architectural patterns in human-written code, and solve very complex challenges across multiple services.
All of this capability emerging from a company (Anthropic) that’s just five years old. Imagine what Claude will be capable of in 2030.
> The idea that they’ll always value quick answers, and always be prone to hallucination seems short-sighted, given how much the technology has advanced.
It’s not short-sighted; hallucinations still happen all the time with the current models. Maybe not as much if you’re only asking for the umpteenth React template or whatever that should’ve already been a snippet, but if you’re doing anything interesting with low-level APIs, they still make shit up constantly.
> All of this capability emerging from a company (Anthropic) that’s just five years old. Imagine what Claude will be capable of in 2030.
I don't believe VC-backed companies see monotonic user-facing improvement as a general rule. The nature of VC means you have to do a lot of unmaintainable cool things for cheap, and then slowly heat the water to a boil. See Google, Reddit, Facebook, etc.
For all we know, Claude today is the best it will ever be.
The current models had lots and lots of hand-written code to train on. Now Stack Overflow is dead and GitHub is filling up with AI-generated slop, so one begins to wonder whether further training will start to show diminishing returns, or perhaps even regressions. I am at least a little bit skeptical of any claim that AI will continue to improve at the rate it has thus far.
If you don't really understand how today's LLMs are made possible, it is easy to fall into the trap of thinking that perpetual progress is just a matter of time and compute.