I think it depends on both the complexity and the quality bars set by the engineer.
From my observations, AI-generated code is generally of average quality.
Even at average quality, it can save you a lot of time on some narrowly specialized tasks that would otherwise take a lot of research and understanding. For example, you can code some deep DSP thingie (say, audio) without understanding much of what it does or how.
For simpler things like backend or frontend code that doesn't require any special knowledge beyond the basics - this is where the quality bar comes into play. Some people will be more than happy with AI-generated code; others won't be, depending on their experience and their requirements (speed of shipping vs. quality, which almost always resolves to speed), etc.
I recently had success with a problem I was having by basically doing the following:
- Write a correct, pretty implementation
- Beat Claude Code with a stick for 20 minutes until it generated a fragile, unmaintainable mess that still happened to produce the same result but in 300ms rather than 2500ms. (In this step, explicitly prompting it to test rather than just philosophising gets you really far)
- Pull the concepts and time savings from Claude's mess across into the pretty code.
Seriously, these new models are actually really good at reasoning about performance and knowing alternative solutions or libraries that you might have only just discovered yourself.
However, a correct, pretty, and fast solution may exist that neither of you has found yet.
But yes, the scope and breadth of their knowledge goes far beyond what a human brain can handle. How many relevant facts can you hold in your mind when solving a problem? 5? 12? An LLM can take thousands of relevant facts into account at the same time, and that's their superhuman ability.
> But what makes a human mind more "understanding"?
If you view understanding as knowledge plus the ability to apply it, everything falls into place. The Chinese room can't apply the knowledge that it has, even in theory.
The immobile Chinese room was once a mobile Chinese room.
Your refutation isn't there. Nothing distinguishes a Chinese room from a Chinese person: you submit Chinese text and get Chinese answers.
You say understanding only exists in humans. That's special pleading, not an explanation.
Ask your Chinese room what left and right mean. Also: near and far, heavy and light, hot and cold. When it comes up with definitions in terms of other concepts, ask what those mean, and so on.
Excellent post. I share the author's sentiment, which is essentially "to hell with Figma, at least fix Sketch". I've been feeling very lonely in my hatred of Figma, which exists for a whole bunch of reasons (among others, it's an incredibly shitty, memory- and CPU-hungry Electron app that looks and feels worse than any more or less well-designed web site), but now after reading this I realize the number of reasons has doubled.
It may look like a crappy Electron app, but Figma has quite an interesting architecture. The browser editor is developed in C++ and cross-compiled to JavaScript with Emscripten. The rendering engine looks like it's handling HTML, but it's actually rendering their own document format for cross-browser consistency. They have their own CRDT implementation to handle multi-user edits.
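To make the CRDT bit concrete, here's a minimal sketch (in Rust, purely illustrative; `LwwMap` and its shape are my invention, not Figma's actual code) of the last-writer-wins idea that multi-user editors often build on:

```rust
use std::collections::HashMap;

// Hypothetical last-writer-wins (LWW) map: each key keeps the value
// carrying the highest (timestamp, client id) pair, so two replicas
// converge to the same state no matter what order the edits arrive in.
#[derive(Default)]
struct LwwMap {
    // key -> (timestamp, client id, value)
    entries: HashMap<String, (u64, u32, String)>,
}

impl LwwMap {
    fn apply(&mut self, key: &str, ts: u64, client: u32, value: &str) {
        match self.entries.get(key) {
            // The existing entry wins the (timestamp, client) tiebreak: ignore.
            Some(&(cur_ts, cur_client, _)) if (cur_ts, cur_client) >= (ts, client) => {}
            _ => {
                self.entries
                    .insert(key.to_string(), (ts, client, value.to_string()));
            }
        }
    }

    fn get(&self, key: &str) -> Option<&str> {
        self.entries.get(key).map(|(_, _, v)| v.as_str())
    }
}
```

Figma's real implementation is considerably more involved (server-arbitrated, per-property), but the convergence property is the same: apply the same set of ops in any order and every client ends up with identical state.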
I think my biggest question is: who cares? What does having an interesting internal architecture have to do with the “it's Electron though” ideological attack?
It is made to perform much better than your typical Electron app would. Saying Electron-based == shitty is a complete misunderstanding of the technology. Although I dislike Figma as much as the next guy, their app was in many ways very impressive. See the Figma cofounder's old articles at https://madebyevan.com/figma/
(Author of the post here.) I cut a paragraph about how Figma costs cuckoo bananas money for your entire team for the privilege of enduring this byzantine nightmare. And they paywall certain features, which you likely can't get authorization for, so you have to do more hacks on top of hacks on top of the “gold standard” practices I shared in the blog post. The price ramp is not gradual.
Man, I don't even use Figma for personal and side projects because it's so expensive. I still occasionally fire up Sketch or freehand it.
Figma is a work tool only, and I'm disappointed by its MCP tooling, which feels late and behind where it should be. I just feel forced to use Figma Make, which stays in their walled garden without practical utility or connections to my actual codebases.
> Which means we need people like Alice! We have to make space for people like Alice, and find a way to promote her over Bob
The solution is relatively simple, though (not sure the article suggests this, as I only skimmed it):
Being good in your field doesn't only mean publishing articles but also being able to talk about them. I think academia should drift away from the written form toward a more spoken form, i.e. conferences.
What if, say, you could only publish something after presenting your work in person, answering questions, etc.? The audience can be big or small; it doesn't matter.
It would make publishing anything at all more expensive but maybe that's exactly what academia needs even irrespective of this AI craze?
I thought that was kind of how the hard sciences work already?
My grad school friend who was a physicist would write his talk just before his conferences, and then submit the paper later. My experience in CS was totally backwards from that.
15 years ago I was thinking about switching my career to a different industry altogether, just didn't know what it would be. One thing I knew was that I was so tired of building web sites and backends. Boring, repetitive, uninspiring.
Then a friend asked me to write a simple iPhone app. I had no idea what development for Apple platforms would be like...
Fast forward to 2026, I'm 57 now, still in tech, building apps for Apple platforms, still enjoying it very much.
The durability of their products still surprises me. I still own and use an iPhone 11 (it's still the first iPhone I got when I switched from Android). It keeps getting the latest iOS updates, functions very well, and may last two more years. What other phone could do this?
I've had the exact opposite journey: native apps first, then, disillusioned and frustrated with the backwards tooling, I moved on to more open platforms (web apps and backends).
I’m curious what you find “backwards” about native tooling. I know the sentiment is common, and there must be some truth to it. But my partner works in web infra and frequently laments her inability to trace a single request through her company’s monolith while trying to reconstruct a failure from logs, and I am baffled that there’s no equivalent to attaching a debugger and stepping through execution.
> AI should learn to say two things: ‘I don’t know’ and ‘you’re wrong.’
My guess is that the next evolutionary step for LLMs should be yet another layer on top of reasoning: some form of self-awareness and theory of mind. The reasoning layer already shows glimpses of these things ("The user wants ...") but apparently not enough to suppress generation and say "I don't know".
Well they managed the "you're wrong" bit at least. Sometimes ChatGPT tells me I'm wrong when I'm not. Still can't do "I don't know" which is probably the bigger problem.
Claude models have made very good progress here (see the BS benchmark), and that probably explains why they're leading now. Others will follow this precedent shortly, no doubt.
While Swift now has the `borrowing` and `consuming` keywords, support for storing references is nonexistent, and returning or storing `Span`s and the like is only possible through the experimental `@lifetime` annotations.
Swift is a nice language, and its new support for the bare necessity of affine types is a good step forward, but it's not at all comparable with Rust.
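For contrast, here is roughly what "storing and returning a reference" looks like in Rust, where lifetimes make it routine (a minimal sketch; `Window` and `first_half` are made-up names for illustration, not standard APIs):

```rust
// A view type that stores a borrowed slice. The lifetime parameter 'a
// ties the view to the buffer it borrows from, so the compiler rejects
// any use of the view after the buffer is gone -- the property stable
// Swift can't yet express for Span without experimental @lifetime.
struct Window<'a> {
    data: &'a [u8],
}

// Returning a borrowed view is ordinary safe Rust; '_ says the result
// lives no longer than the input buffer.
fn first_half(buf: &[u8]) -> Window<'_> {
    Window { data: &buf[..buf.len() / 2] }
}
```

The borrow checker enforces this statically: if you dropped the buffer while a `Window` into it was still alive, the program simply wouldn't compile.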
You don't even need to reinvent the walkable city; just look at any medieval historical town that is, say, ~500 years old, almost untouched, and has restricted traffic today (possibly with no public transport whatsoever). These towns are a pure joy: walkable with no other options, quiet, pleasant, and overall healthy to live in, in all respects.
We keep rediscovering that we're happier and more fulfilled when we live in ways closer to how we've lived for most of the last million years. And yet we are disgusted by our ancestors and look down on them.