Hacker News

I think code was always expensive. If it seemed cheap, the cost was hidden somewhere else.

When I started coding professionally, I joined a team of only interns in a startup, hacking together a SaaS platform that had relative financial success. While we were very cheap, being paid below minimum wage, we had outages, data corruption, db wipes, server terminations, unresolved conflicts making their way to production and killing features, tons of tech debt and even more makeshift code we weren't aware of...

So yeah, while writing code was cheap, the result had a latent cost that would only show itself on occasion.

So code was always expensive; the challenge was to be aware of how expensive sooner rather than later.

The thing with coding agents is that it now seems you can eat your cake and have it too. We are all still adapting, but results indicate that, given the right prompts and processes for harnessing LLMs, quality code can be had on the cheap.




> The thing with coding agents is that it now seems you can eat your cake and have it too. We are all still adapting, but results indicate that, given the right prompts and processes for harnessing LLMs, quality code can be had on the cheap.

It's cheaper, but not cheap.

If you're building a variation of a CRUD web app, or aggregating data from some data source(s) into a chart or table, you're right. It's like magic. I never thought this type of work was particularly hard or expensive though.

I'm using frontier models and I've found if you're working on something that hasn't been done by 100,000 developers before you and published to stackoverflow and/or open source, the LLM becomes a helpful tool but requires a ton of guidance. Even the tests LLMs will write seem biased to pass rather than stress its code and find bugs.


> It's cheaper, but not cheap.

It's quite cheap if you consider developer time. But it's only as cheap as you can effectively drive the model, otherwise you are just wasting tokens on garbage code.

> LLM becomes a helpful tool but requires a ton of guidance

I think this is always going to be the case. You are driving the agent like you drive a bike, it'll get you there but you need to be mindful of the clueless kid crossing your path.

For some projects I had good results just letting the agent loose. For others I'd have to make the tasks more specific and granular before offloading to the LLM. I see nothing wrong with it.


> I never thought this type of work was particularly hard or expensive though.

Maybe not intrinsically hard, but hard because it's so boring you can't concentrate.

> the LLM becomes a helpful tool but requires a ton of guidance. Even the tests LLMs will write seem biased to pass rather than stress its code and find bugs.

ISTR some have had success by taking responsibility for the tests and only having the LLM work on the main code. But since I only seem to recall it, that was probably a while ago, so who knows if it's still valid.
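That division of labor can be made concrete: the human keeps ownership of adversarial, edge-hunting tests, and the agent only touches the implementation. A minimal sketch, using a hypothetical `parse_duration` function (the implementation here is just a stand-in for whatever the LLM would own):

```python
def parse_duration(s: str) -> int:
    """Parse strings like '1h30m' or '45s' into seconds.
    Stand-in for the LLM-owned implementation."""
    units = {"h": 3600, "m": 60, "s": 1}
    if not s:
        raise ValueError("empty duration")
    total, num = 0, ""
    for ch in s:
        if ch.isdigit():
            num += ch
        elif ch in units and num:
            total += int(num) * units[ch]
            num = ""
        else:
            raise ValueError(f"bad duration: {s!r}")
    if num:
        raise ValueError(f"trailing number in {s!r}")
    return total


def test_happy_path():
    # The kind of test an LLM biased toward passing tends to stop at.
    assert parse_duration("1h30m") == 5400


def test_rejects_garbage():
    # Human-written stress tests go after the edges instead:
    # empty input, unit with no number, unknown unit, dangling digits.
    for bad in ("", "h", "1x", "10", "1h30"):
        try:
            parse_duration(bad)
            assert False, f"accepted {bad!r}"
        except ValueError:
            pass


test_happy_path()
test_rejects_garbage()
```

The point isn't this particular parser; it's that the failure-mode cases in `test_rejects_garbage` are exactly the ones a pass-biased test suite tends to omit, so keeping them human-written is the cheap insurance.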


So code was apparently cheap, but in fact it was expensive because it was low quality.

Now with LLMs, code is cheap and it also has quality, therefore "quality code can be had in the cheap".

Do you really believe this is the case? Why don't companies fire all their developers if they can have an algorithm that can output cheap and quality code?


Because cheap and quality code is only part of the story. The code needs to solve the right problem, and that is a domain where, at least for now, only a human can operate. Back when I was inexperienced I couldn't write good code, but I could sit with the company's CTO while he explained the domain, the challenges, and the goal of the project. I could talk with domain experts and understand what the common solutions to the problems were. These are things that, for an LLM to do, would require untold amounts of context or a specialized model that understands the domain.

But the thing is, there are many unknowns. We humans are very capable of adapting as we go. LLMs have a fixed dataset they were trained on, and prompt engineering can only get you so far.

I think anyone asking this with the intention of actually replacing humans with LLMs doesn't really understand either humans or LLMs. They are just talking money.


We didn't fire all our developers when we invented compilers either, and for much the same reason we didn't stop hiring laborers when we first built ships and established overseas trade routes: business will always expand to meet its reach.

Many enterprises are currently exploring to see if they can invite developers to leverage AI tools—like they leveraged the compiler—to be more productive. To operate on a higher plane of agency, collaborating on what we should be building and not just technical execution. Those actively hostile to the idea of relearning skills, or just checked out, are being laid off. (Some unprofitable business sections are being swept up opportunistically too.) The idea that all developers would be fired if AI tools can write good code doesn't square with the lessons of history.


> Many enterprises are currently exploring to see if they can invite developers to leverage AI tools—like they leveraged the compiler—to be more productive. To operate on a higher plane of agency, collaborating on what we should be building and not just technical execution.

The thing is, developers have been hired to automate processes, and as for any professional doing a good job, that means the output should perform reliably. But now they are forcing us to use tools that everyone knows are not reliable, while the onus is still on us to maintain the same reliability. So do you see why we are not thrilled?

It’s like providing a faulty piano (that shuffles the notes when a key is pressed) and expecting a good rendition of the Moonlight Sonata.

Or a crane that will stall and drop its load randomly. It would have been sent to the scrapyard on the first day.


> "Or a crane that will stall and drop its load randomly. It would have been sent to the scrapyard on the first day."

The only reason you have the concept that engines can "stall" is because people have bought engines that can stall by the hundreds of millions, instead of the earliest people refusing to buy them at all and all waiting for the perfect engine.

Container ships can sink with all the containers lost at sea. Still used.

Steam train engines could explode, derailing the train and killing some passengers and employees. Still used.

Buildings can collapse. Still used.

Pneumatic tyres can burst. Still used.

Here[1] is Tom Scott using a recreation of a walking crane from the 13th century, a technology going back to Roman times, with no historical evidence that it ever had brakes. Look at that and tell me you think the rope never snapped, the wood never broke, the walker never tripped, and the thing never unreeled the load back to the ground with the walker severely injured, because if it went wrong builders would refuse to use it? No chance.

Nothing functions like you're claiming; that's where we get the saying "don't let perfect be the enemy of good enough", as soon as stuff is better than not having it, people want to make use of it.

[1] https://www.youtube.com/watch?v=pk9v3m7Slv8


You forgot to address the random aspect of the failure cases.

The real world is chaotic; technology was always first about establishing control, then improving that control. A lot of the risks in the situations you described have been brought down so far that the savings (time, money,…) are magnitudes more than the cost of the failure.

I'm not asking for perfection, but for something good enough that we can demonstrate the savings outweigh the costs. So far there's no such demonstration. In fact, we are increasing the costs. And fast.


> But now they are forcing us to use tools that everyone knows are not reliable, while the onus is still on us to maintain the same reliability. So do you see why we are not thrilled?

Why generalize your own experience onto others'?


I don't know if you've heard, but there have been a large number of layoffs in the tech sector recently. Whether they're actually related to AI as executives claim, and not section 174 of the US IRS tax code in the BBB, is known only to them, but if your argument hinges on people having not been fired when there have been layoffs, you may need a different one.

I think a major contributor to the layoffs is companies hiring too many people around COVID. I can't find good stats for the years 2019-2026 beyond comparing now with the past directly. There is some data for the Ukrainian market, Djinni[1][2], and for US IT job postings[3].

I don't think AI is the reason for the layoffs. It's just easier to say "we are firing because of AI" than to say "we overhired and it's actually our fault".

[1] https://djinni.substack.com/p/2021-in-review [2] https://blog.djinni.co/post/q1-analytics-en [3] https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE


As you said, it's impossible to determine how many of the current layoffs are caused by AI; they probably also have a lot to do with the broader economic downturn. But you're still missing the point: if companies truly have a black box that can produce cheap, high-quality code, as the GP put it, why don't they just fire 95% of their developers and keep only a small core of AI orchestrators?


Who's missing whose point? You're asking why they haven't fired 95% of their people. I'm pointing at tech sector layoffs saying people are being laid off. It's not 95%, which is a number you totally made up, but in the broader picture, I wouldn't say it isn't happening.

This is what I really wonder: what even is the cost of code? Or what is real code quality?

I know that things like "clean code" exist, but I always felt that actual code quality only shows when you try adding to or changing existing code, not by looking at it.

And the ability to judge code quality on a system scale is something I don’t think LLMs can do. But they may support developers in their judgment.


I don't know why people think SWEs are aesthetic snobs when we talk about "clean code". The point of code is not to be pretty; it's to be understandable and predictable.

Quality doesn't matter if you're writing throwaway code or you need your startup to find a market before you run out of cash.

But once it matters, it matters a lot.


> Why don't companies fire all their developers if they can have an algorithm that can output cheap and quality code?

Because it takes an experienced developer to get the machine to output cheap, quality code reliably enough to be useful.

That developer is just a whole lot more valuable now, because they can do more work at a higher quality.



