
I'll also explain my downvote, it's because of the assertion that LLMs "don't actually understand anything", which to anyone who's actually successfully used LLMs to solve a difficult problem is clearly false, unless you use some contrived definition of the word "understand" that doesn't match how the word is actually used in normal conversation.


I am referring specifically to what was claimed in the video I quoted, so unless you watched the actual video, what you are saying has little bearing on what I am actually saying. Not to be harsh, but I constructed my comment to point to a specific instance via bracketed quotes; please respond to what was said in that video.


Asking people to watch a 30m video in order to understand your comment isn't reasonable - can you summarize the point from the video that you're arguing here so people can respond to it without putting in all of that extra work?


Well, I did: LLMs are not necessarily intelligent enough to avoid causing new problems with the solutions they produce. This is a fundamental flaw of LLMs that is covered even by mainstream media, not to mention the AI Effect as described on Wikipedia. At worst, they might turn a 0.1x engineer into one ten times as productive, i.e., a 1x engineer, except with no ability to actually solve problems cohesively.


I'm an experienced engineer and I've seen what I estimate to be a 2-5x productivity improvement in the time I spend typing code into my computer from embracing LLM-assisted development.

Typing-in-code is only 10% of the work that I do, but this is still a very meaningful improvement for me.

I've written a bunch more about my own experiences here: https://simonwillison.net/series/using-llms/ and here: https://simonwillison.net/tags/ai-assisted-programming/


The bigger models have gained a rough understanding of a few systems [1], which is really impressive and offers an answer to the Chinese Room thought experiment. In my experience they don't understand a lot of the things I ask about very well. But the fact that they understand anything at all is impressive.

1. https://danangell.com/blog/posts/gpt-understands/


If five years ago someone said that in half a decade we'd have a computer program that could solve medium-complexity Leetcode problems that it had never seen before, hardly anyone would believe them. Now we have programs that can do exactly this, and yet some people never miss a chance to try to trivialise what just a few years ago would have been considered an amazing, world-changing achievement.


Can it, though? My understanding is that ChatGPT has all the Leetcode problems memorized; maybe it can extrapolate to problems substantially similar to those in its training set.

I tried it for advent of code 2023, and it was pretty helpless.


As do most people.


I don't think that's true - I think decent programmers can figure out mediums if they apply themselves.



