Hacker News | past | comments | ask | show | jobs | submit | aerhardt's comments

> "information technology" generally didn't increase productivity

Do you think it'd be viable to run most businesses on pen and paper? I'll give you email and the ability to consume informational websites; the rest is pen and paper.


Productivity metrics were better when businesses were run on just pen and paper. Of course, there could be many confounding factors, but there are also many reasons why this could be so. Just a few hypotheses:

- Pen and paper become a limiting factor on bureaucratic BS

- Pen and paper are less distracting

- Pen and paper require more creative output from the user, as opposed to screens which are mostly consumptive

etc etc


> Productivity metrics were better when businesses were run on just pen and paper

What metrics are these?


Productivity growth. If you take rolling averages from this chart, it clearly demonstrates higher productivity growth before the adoption of software. This is a well-established fact in econ circles.

https://fred.stlouisfed.org/graph/?g=1V79f
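For what it's worth, the "rolling averages" step here is just a trailing mean. A minimal sketch in Python, using made-up placeholder growth figures rather than actual FRED data:

```python
def rolling_average(series, window):
    """Trailing rolling mean: one value per full window."""
    return [sum(series[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(series))]

# Hypothetical annual productivity-growth figures (percent) -- NOT FRED data.
growth = [2.8, 3.1, 2.5, 2.9, 1.6, 1.4, 1.2, 1.1]

# Smooths out year-to-year noise so the underlying trend is visible.
smoothed = rolling_average(growth, window=4)
```

With real data you'd pull the series from the FRED API instead of hard-coding it; the smoothing itself is the same.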


I think this is a classic case of reading too deeply into specific arguments without understanding what they really mean in the grand picture. A few points that easily disprove this argument:

- if it were true that software paradoxically reduces productivity, you could just start a competing company that doesn't use software. Obviously this is ridiculous: the top 20 companies by market cap are mostly software-based, and every other non-IT company is heavily invested in software

- if you say the problem is at the country level, it is obvious that every country that has digitised has had higher productivity and GDP growth. Take Italy vs the USA, for instance.

- if you are saying that the problem is even more global, take the whole world: GDP per capita growth has still been pretty high since the IT revolution (and so have other metrics)

If you still think there's something more to it, you are probably deep in some conspiracy rabbit hole


The data clearly shows that productivity growth is flat or even declining. What is your accounting of why software hasn't offset those numbers?

You don't have a counterfactual to suggest that it would have continued increasing had it not been for technology. Is there _any_ credible economist who suggests that we might have higher productivity without tech?

There is no counterfactual needed. Productivity growth has declined, despite the expectation that software would accelerate productivity. I'm asking you why this didn't happen.

There is a counterfactual needed, because it is not clear whether growth would not have declined even more without software.

Again I'm asking - is there a single credible economist who says that the growth would have been higher without technology?


I'm not even proposing that growth would have been higher without "technology". I said information technology has not increased productivity growth compared to the past. This is an observation of fact.

Is there a way to mute people who are clearly AI boosters? ^

? you are literally commenting on the release of a new model from OpenAI in a tech focused community. Have you considered what should be normal here?

When they decide to touch something as they go, they often don't improve it. Not what I would call "refactoring" but rather a yank of the slot machine's arm.

I'm building a website in Astro and today I've been scaffolding localization. I asked Codex 5.4 x-high to follow the official guidelines for localization and from that perspective the implementation was good. But then it decides to re-write the copy and layout of all pages. They were placeholders, but still?

Codex also has a tendency to apply unwanted styles everywhere.

I see similar tendencies in backend and data work, but I somehow find it easier to control there.

I'm pretty much all in on AI coding, but I still don't know how to give these things large units of work, and I still feel like I have to read everything but throwaway code.


You can steer it though. When I see it going off the reservation I steer it back. I also commit often, just about after every prompt cycle, so I can easily revert and pick up the ball in a fresh context.

But yeah, I saw a suggestion about adding a long-lived agent that would keep track of salient points (so, a kind of memory) but also monitor the main agent's current progress against that "memory", and give the main agent commands when it detects that the current code clashes with previous instructions. Would be interesting to see if it would help.


They also don't understand how exceptions work. They'll try-catch everything, print the error, and continue. If I see a big diff, I know it just added 10 try-catches in random parts of my codebase.
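The pattern described above looks roughly like this (a contrived sketch; `load_config` is a hypothetical helper, not from any real codebase):

```python
# Anti-pattern: swallow everything, print, and continue with bad state.
def load_config_swallow(path):
    try:
        with open(path) as f:
            return f.read()
    except Exception as e:
        print(f"error: {e}")  # logged, then silently ignored
        return None           # callers now have to handle None everywhere

# Usually better: catch only what you can handle, let the rest propagate.
def load_config(path, default=""):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return default  # a missing file is an expected, recoverable case
```

The second version keeps failures visible: anything unexpected still crashes loudly instead of being printed and forgotten.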

I never use xhigh because it overthinks. I find high nearly always works better.

Purely anecdotal.


This is even the official OpenAI guideline too.

No it is not, but they had a unique positioning around open-source and the parent commenter means that they are losing it.

Again, a trait they share with companies in other countries. It's the obvious business model: get known by releasing impressive open models, then pivot to closed for even more impressive models.

That's going to be the path for every new company from every country, I assume. They are not releasing open models out of the goodness of their hearts. They are for-profit companies, they don't have hearts, they just have balance sheets.


I've tried reading this and I can't. It's not that the text is AI generated, it's that the whole structure seems to be. (Hope you appreciate the irony of my LLMisms). It's not human-parseable, at least not by this human. And it's not that my attention is shot, luckily I'm still able to read copious amounts of long-form text and analysis.

Also, opening with "I'm a top performer"... That's not how writing for other humans works. It's perfectly legitimate to establish authority in the opening of a piece, but you have to show some credible proof. "I'm a top performer" is immediately off-putting.


Thank you for your feedback. These are fair points.

I get that "top performer" is off-putting. You're right that authority has to be earned in the text (and I hope I do that), not declared.

On the structure: yes, it's a novel format and I can see how that would be hard to parse. It won't work for everyone.

Both of these are artifacts of trying to blend research into the modern social-media driven world.


> Become an expert in 1 thing

I endorse this. I've been doing generalist consulting for about six years, and I love flying solo. I've been successful in landing some big customers and interesting projects, but I'm tired of the inefficiency that comes with being a generalist, so I've decided to specialize vertically.

I had a super-interesting project in executive search in the last couple of years, and I've decided to settle around that area: executive search and recruitment firms. Maybe later, as an extension, I'll target other B2B, relationship-driven professional service firms that share a common core of processes.

I've only recently pivoted but I'm already starting to see the fruits. It's commercially efficient. Many potential customers seem happy to open the door and chat. I know where to find them, online and off. And then it's operationally efficient. I'm confident I could jump on a customer project and recognize most of their processes and systems immediately and have a quick impact. I already have a base of IP (documented business procedures, code, etc.) and only intend to grow it in the coming years and even turn it into a "productized service".

I think people refuse to specialize for three main reasons. The first is for lack of a clear thesis. That's fine, you need to explore for a bit. The second is for a fear of lack of opportunities, which is often unfounded. The third is due to psychological reasons related to the image of self. On this last one I can only advise that (a) even in specialization there is way more variety than you think, (b) you can always keep growing as a generalist with side projects and self-directed learning and (c) nothing is ever fixed in stone, everything is in flow - you can always pivot out into other interesting directions.


I used to fear specialization because of a form of commercial or career FOMO. The reality is you instead get spread too thin and are (ironically?) now at risk of being displaced by "good enough" AI solutions. If you are a generalist you still need to be "T-shaped", with a few areas of deeper expertise. Funny enough, your expertise could be getting things done-done using all your generalist abilities (e.g. being able to take initial ideas all the way to an active, viable business).

How? This is what I never understood. Every domain expert I've ever known became one because they already had real experience. I can spend all the time I have reading and toying around in a subject, but until I have real, concrete experience to guide me, it's usually pretty difficult to become an "expert" in anything. I know how to become an advanced hobbyist, but that's never in my life translated to someone being willing to pay me over, say, an already established expert.

I've drifted across projects in different industries (FMCG, investment funds, ad agencies, startups of various sorts) and like I said I had a long project (over two years) for an executive search firm and got to see the ins and outs of how everything works from strategy to technology. I could be drifting to find clients in yet another vertical but I've decided to stay put for at least a few years. So to answer your question, in my particular case: I drifted, stumbled upon something by chance, and then took a conscious decision to stay.

If you're a dev, one approach to specialization is to align with the tooling associated with common "profit center" processes. Become a Salesforce/Hubspot/Odoo/Shopify developer. If you're not interested in developing, you can specialize in learning one specific ecosystem really well and then teach companies -- typically SMBs -- how to set themselves up and organize their operations around it.

This seemed all well and good 10 years ago, but how well does it survive when the actual SMEs can just use LLMs to achieve the same effect? Those are exactly the sort of platforms going all-in on that stuff.

This can help and hurt. E.g. if you run a very successful Shopify plugin, you risk Shopify implementing it natively and wiping you out in one fell swoop.

common "profit center" processes

how do i find what those are?

i see the point, but i don't find developing for one specific tool very appealing.


how?

From GP comment: “either start an open source project, or become the main collaborator in one.”


"Executive search and recruitment firms" is an industry/segment, though, right? I thought this comment was more about specializing in some particular niche tech thing wrt the "just start a open source project guysss" comment

Specialization works by vertical or by function. Or you can actually mix them if your TAM remains large enough.

Yes, the parent was referring to technical specialization. But my point is either works. Especially in the context of what OP is trying to do which is "automation" - technically very broad.


> you just limit the space to text

And even then... why can't they write a novel? Or lowering the bar, let's say a novella like Death in Venice, Candide, The Metamorphosis, Breakfast at Tiffany's...?

Every book's in the training corpus...

Is it just a matter of someone not having spent a hundred grand in tokens to do it?


I know someone who spends basically every day writing personal fan fiction using every model you can find. She doesn't want to share it, and she complains about it a lot; it seems maintaining consistency for something, say, 100 pages long is difficult.

I don’t understand - there are hundreds/thousands of AI written books available now.

I've skimmed a few, and one can immediately tell they don't meet the average writing level you'd see in a local workshop for writers, much less that of Mann or Capote.

Never mind novels, it can't even write a good Reddit-style or HN-style comment. agentalcove.ai has an archive of AI models chatting to one another in "forum" style and even though it's a good show of the models' overall knowledge the AIisms are quite glaring.

They definitely can, and do.

It's just that the ones that manage to suppress all the AI writing "tells" go unnoticed as AI. This is a type of survivorship bias, though I feel there must be a better term for it that eludes me.


Who says they can't? What's your bar that needs to be passed in order for "written a novella" to be achieved?

There's a lot of bad writing out there, I can't imagine nobody has used an LLM to write a bad novella.


> What's your bar that needs to be passed

I provide four examples in my comment...


Your qualification for if an LLM can write a novella is it has to be as good as The Metamorphosis?

Yes, those are examples of novellas. Surely you believe an LLM could write a bad novella? I'm not sure what your point is. Either you think it can't string the words together at that length, or your standard is that it can't write a foundational piece of literature that stays relevant for generations... I'm not sure which.


I don't think it can write something that's of a fraction of the quality of Kafka.

But GP's argument ("limit the space to text") could be taken to imply - and it seems to be a common implication these days - that LLMs have mastered the text medium, or that they will very soon.

> it can't write a foundational piece of literature

Why not, if this is a pure textual medium, the corpus includes all the great stories ever written, and possibly many writing workshops and great literature courses?


I don't know what to tell you. It's more than a little absurd to make the qualification of being able to do something to be that the output has to be considered a great work of art for generations.

I agree that the argument starts from a reduction to the absurd.

So at least we can agree that AI hasn't mastered the text medium, without further qualification?

And what about my argument, further qualified, which is that I don't think it could even write as well as a good professional writer - not necessarily a generational one?


>AI hasn't mastered the text medium

I don't know what this means and I don't know what would qualify as having "mastered" it at all. Seems like a no-true-Scotsman thing where, regardless, there would always be someone saying that it couldn't actually do a thing because of this and that.

>why can't they write a novel?

This is what I'm disagreeing with. I think an LLM can write a novel well enough that it's recognizably a pretty mediocre novel, no worse than the median human-written novel, which to be fair is pretty bad. You seem to have an unstated bar something needs to pass before "writing a novel" is accomplished, but it's not clear what that is. At the same time you're switching between the ability to do a thing and the ability to do it in a way that's honored as the best of the best for a century. So, I don't know, it kind of seems like you just don't like AI and hold it to a standard that adjusts so that it fails. That doesn't match how you'd judge some random Bob's ability to do the thing.


I don't dislike AI, I use it every day for coding and increasingly for non-technical tasks, and have also used it in enterprise workloads to great success. I am fairly optimistic about it - I think it will remove a lot of drudgery and make things economical which previously weren't.

I am just challenging the notions that "if you limit it to text, it's doing really well" or that the text contains in itself all the information that is needed to carry out a task to a certain level of quality. This applies in my experience not only to writing literature but also to certain human tasks which may appear mundane and easy to automate.


If the end result is that most books will be written by AI, you need the possibility of that qualification. If it's only capable of certain types of books, then we will need endless amounts of that qualification.

No - that’s my secret.

> It’s really hard

How? I just open multiple terminal panes, use git worktrees, and then it's basically good old software dev practices. What am I missing?


You're probably significantly underselling the value of your own "good old software dev practices."

I believe the point (which you seem to tacitly agree with) is that a young dev's time is much better spent reading and writing code "the old-fashioned way" vs chasing the new SOTA in AI-assisted development. A competent dev can basically master agentic development in a few months. But it takes years to become competent.

Oh yea, I agree that building good software remains roughly as challenging as ever.

I was asking if there was something about the “agentic” part in particular that was difficult.


I’ve read and heard from SemiAnalysis and other best-in-class analysts that the amount of software optimization possible up and down the stack is staggering…

How do you explain that capabilities being equal, the cost per token is going down dramatically?


Optimizations, like I said. They'll never hack away the massive memory requirements, however, or the pre-training. Imagine the memory requirements without the pre-training step... this is just part and parcel of the transformer architecture.

And a lot of these improvements are really just classic automation, or chaining together yet more transformers to fix issues the transformer architecture creates in the first place (hallucinations, limited context).
