Hacker News | arcxi's comments

> But in real life, Anarchists will still argue that a Benevolent-Dictator-For-Life governance approach is wrong, even if it applies to digital artifacts that have zero marginal cost.

no they won't; a FOSS project's governance model has no relevance to anarchist discussion. anarchists are against coercive authority, not leadership in general, and FOSS does operate under anarchist principles, which is why the anarchist community is a strict subset of the FOSS community.


Anarchist developers have argued exactly that, at length, on public FOSS mailing lists.

I strongly suspect if you asked ChatGPT to pretend to be an anarchist FOSS developer, it would argue about your FOSS governance model because that's the data it was trained on.


there's nothing wise about hoarding

it's heavy lifestyle restrictions that lead to anti-social behavior in the first place. by far the most common crime is property crime, and people usually commit it out of desperation and lack of opportunity. the degree of personal freedom in a capitalist state is defined by wealth, which creates a natural incentive to steal. when people do steal, they are put in prison, where they connect with other labeled criminals, and all of them then face significantly lower chances of being hired, ensuring that doing anything with their lives other than crime is as difficult as possible. aren't those heavy lifestyle restrictions enforced on people by government?

can you share any examples of these "new and better ways to use them"? because the only way I've used LLMs, and seen other people use them, is to literally just talk to them, which doesn't require any skills beyond basic conversational ability.

I'm talking about coding agents, not chatbots.

With coding agents you need to think very carefully about how you design the agentic loop, so that the agent has the right tools and information available to it to complete the goal.

I've been writing a lot more about that here: https://simonwillison.net/guides/agentic-engineering-pattern...
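The "agentic loop" above can be sketched in a few lines. This is a toy illustration, not any particular vendor's API: `run_agent`, `fake_model`, and the message shapes are all hypothetical names invented for the example.

```python
# Minimal sketch of an agentic loop: ask the model, execute any tool it
# requests, feed the observation back, repeat until it answers or the
# step budget runs out. All names here are illustrative, not a real API.

def run_agent(model, tools, goal, max_steps=10):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = model(history)
        if action["type"] == "final":
            return action["content"]
        # the model asked for a tool: run it and append the observation
        result = tools[action["tool"]](**action["args"])
        history.append({"role": "tool", "tool": action["tool"], "content": result})
    return None  # budget exhausted without a final answer

# toy stand-ins, just to show the control flow end to end
def fake_model(history):
    if any(m["role"] == "tool" for m in history):
        # a tool result is available: return it as the final answer
        return {"type": "final", "content": history[-1]["content"]}
    return {"type": "tool", "tool": "add", "args": {"a": 2, "b": 3}}

print(run_agent(fake_model, {"add": lambda a, b: a + b}, "add 2 and 3"))
```

The design question the comment raises lives in the `tools` dict and the `history` construction: which tools the agent can call, and what information each loop iteration feeds back to the model.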


Satoshi may also be unable to cash out simply because they are dead.


it's an exaggeration for sure, but I don't think it's a stretch to believe Anthropic spends considerably more effort on data scraping and curation than on anything else


To me the image of a world where everyone does menial work while entertaining themselves with AI-generated "art" doesn't seem fun, it seems extremely depressing and dystopian. I guess we just have different values.


Anecdotally, I'm programming for non-English business domains in Go and Python and I've literally never seen anyone use a native alphabet in identifiers - it's always either poor translations or transliterations.


it is weird, especially for Go with its semantic naming and famously opinionated compiler. it will gladly build code with a variable named 𖤐界ᥱᥲΣ੭, but God forbid it's unused.
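To make the contrast concrete: the Go spec admits any Unicode letter in an identifier, while the compiler hard-rejects unused local variables. A small sketch (the identifier 界 is chosen arbitrarily):

```go
package main

import "fmt"

// greet uses a non-ASCII identifier; the Go spec allows any Unicode
// letter in identifiers, so this compiles without complaint.
func greet() string {
	界 := "world"
	return "hello, " + 界
	// had 界 been declared but never used, the build would fail with
	// the compiler error "declared and not used: 界"
}

func main() {
	fmt.Println(greet())
}
```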


This very comment is measurably more harmful than any AI criticism that annoys you - someone will read this and assume it's appropriate to accept whatever bullshit Claude generates at face value, with terrible consequences.

In contrast, what harm do those detractors cause? They don't generate as much code per hour?


By that logic we should all live in air-filtered bubbles. Anyone denying this is causing harm. After all, people might die if you let them out of their air-filtered bubble!

The "harm" (if you can call it that) is clear: detractors slow the pace of progress with meaningless and incorrect hand-wringing. A lack of progress harms everyone (as evidenced by our amazing QoL today compared to any historical baseline.)


that’s a stretch, and taking a measured approach to change is valid


> detractors slow the pace of progress

Considering our climate, political and economic situation, I'd say not only is slowing the pace of progress not harmful, it's actually imperative for our long-term survival.


That's a pretty poor straw man - the issue is the amount of harm caused, not whether there is a potential for some minuscule amount.

Also, we need detractors, because if we race into any technological advance too quickly we may cause unnecessary harm. Not all progress is harmless, and we need to be responsible about implementing it with as little risk as possible.

