Yes, I believe it's xAI's position that they were technically in compliance at the time. I don't know that a judge would agree. The new EPA rule is more of a clarification; they do not concede that point.
I am not really sure. I wrote some scripts with an LLM that aggregated data from several APIs, and the LLM had the foresight to create a caching layer for the API responses: it correctly inferred that I would need the results over and over again. It also used asyncio to speed up the fetches. This would have been a v2 or v3, and it one-shotted it perfectly.
Yeah, they are good at applying generic patterns, but often that can be overkill/YAGNI, leading to more maintenance work in places where a much simpler, more straightforward solution is fine. But this is something the engineer can decide, and with LLMs they won't be forced to make the trade-off because something takes longer to build, but rather on whether it is really necessary or not.
When it works, it feels genuinely miraculous. Working in a common problem space, like gluing together APIs, it generally does well. Doing something novel or even a little complicated, it can really lead you astray.
But business people always cared only about the result. My PM (who speaks like a salesman) only cares about the results. My “head of” same. My CEO same. The only ones who ever cared about the process and quality were us, the engineers… if we don’t have that care, well, to hell with everything
That's not true as a simple statement, many business people really do care about quality and process, and you may find you care much more about them than you think.
How often have engineers decried yet another rewrite that some project is doing? Or talked about "over-engineering" something that isn't needed, or complained that someone on the team has set up a full Kubernetes GitOps thing that's glorious to them, when you just want to scp a Go binary and be done with it?
I've seen truly excellent engineers hit this issue. I worked in a team years ago where people disagreed on the approach to take on a new project, so we each made a prototype and presented it in order to pick a direction. There was a requirement that it be done in Ruby, since that was the language most of the developers were most fluent in. One of the engineers, remarkably smart, wrote a Lisp interpreter in Ruby so that it would technically be "in Ruby" but have the benefits of Lisp.
He cared about quality and process in one area. Deeply. However, focussing on that would be to the detriment of the rest of the actual product we wanted to ship. If you considered the quality of the product as a whole and the process at the level of the organisation, you'd do something very different.
Now, none of this means all business people are good at this or long term vision or anything, just as it doesn't mean all engineers have a very narrow focus. But I've seen engineers focus on the quality or engineering of some component without looking at what it is you're actually trying to achieve as a business, and so push for a worse overall process and lower "quality" result. It's the same sort of disconnect that leads a lot of engineers to rail against meetings and PMs that slow them down without seeing from the other side that it's often better to build the right thing more slowly than the wrong thing more quickly.
Assuming it is accurate, the logical conclusion is that the race is over. Management can get their $result, and fast. Whether that is good or bad is a separate story, and only time will tell whether they will be forced to learn anything. Right now, the expectation is to push for results, and management seems to ascribe the current set of failures to people not embracing AI enough.
I think that's a common experience but not universal.
Just about everyone cares about process and quality when things start falling apart. And at least with current technology, it seems like vibe coding your way into a large project will inevitably land you in that spot.
This means different things to different people. A lot of people enjoy the process of engineering solutions with LLM agents, building out tailored skills and custom approaches that make up their own flavour of "agentic" workflow. There are also people who find joy in JavaScript, which other people cannot understand. And others again love systems languages, or even tinkering with assembly, etc.
What I wanted to say is that LLM use does not automatically mean people just want to get results faster, there are still nerds enjoying the process of working with these new tools.
I would agree that results crafted by hand are a lot better, if one removes any notion of a time constraint. Sometimes the comparison point is between the LLM-authored software and nothing at all.
The feature seems pretty obviously for Anthropic employees who are using unreleased models internally and do not want to leak any details in public commit messages.
It depends whether anyone was ever actually going to spend that week doing it the "hard" way. Having Claude do it in a few minutes beats doing nothing.
Put another way: I absolutely would have an intern work on a security audit. I would not have an intern replace a professional audit though.
It's otherwise a pretty low stakes use. I'd expect false positives to be pretty obvious to someone maintaining the code.