Hate to play the 'failure and discredit of institutions' card, but before all the talk about the diminishing respect for 'big/important' institutions, the school/teacher institution was already failing. And it's not just the kids: parents were the first to start dismissing teachers/schools, and they taught their kids to do the same in the process.
Everything else you mention (empathy, attention, bullying, etc.), I'm not sure we are worse off now. Let's not idealize the past: different generations just find different ways to be deranged.
It's the authenticity, but even more than that it's the saturation of inauthenticity. Even if there's oodles of authentic content, if there's enough inauthentic content to drown it out, you enter a vicious cycle where plummeting interaction and dwindling new authentic content feed each other.
I have a hypothesis that network effects kick in for social interaction before they do for monetisation, which is why the advertisers/influencers/propagandists/scammers (/trolls, though those are different) are in a constant state of hunting down and infesting whatever platform good-faith users have most recently fled to. Part of it is likely that smaller communities are more robust and have an easier time identifying and repelling small-scale incursions, but I suspect a big part is that smaller communities simply aren't worth the investment of larger incursions, especially since they'd more easily be ruined before any real payout.
Anyway, I agree with you that "quality" (as in effort and craft) is lower on the list of factors than authenticity, which makes complete sense. There was a time when a well-crafted ad was worthy of note, but ads have become so sneaky and pervasive that I think many people are desperate for a spontaneous interaction or experience that isn't trying to sell them anything.
I never owned the MacBook I used, and I still consider selling the one I do own now someday. That's the only reason I'm not ready to replicate this on my own.
The edges are indeed extremely uncomfortable, not to mention how cold it is in winter.
Luckily it's just sitting on a stand 99.9% of the time.
Maybe we should be glad that, as far as we can tell, none of the people exposed to the risks of the Artemis II mission were forced onto it against their will. I'd bet that even in The Wager you would have had some clear-headed people who knew the risks and still chose them.
A lot of advancement is multipurpose. CNCs are more accurate than machinists, computers are faster. And we have a lot of the technical knowledge written down.
Machinists never stopped working, even after advanced CNCs proliferated. Humans had records of how things were made, and yet new generations had to relearn it, and fail in the process.
This mission is not about sending stuff out to deep space. It's about sending a new generation of humans to deep space.
Even if you could guarantee that these new humans have the exact same experience as past humans, can we guarantee that past decades of simulations or theoretical knowledge, acquired while NOT actually doing the thing, will effectively reduce the chances of mortality?
On Android there are apps that let you see the history. I use NotiStar occasionally to check whether I unwittingly dismissed important notifications. And I believe there are apps/settings that help you clear the history from the device.
But this is a reminder that this centralized notification infrastructure (FCM and APNs) stores notification content (if the app is told to put content in the push; Signal, with the option enabled, wouldn't send content). Even if we clear the local history, these middlemen still hold it.
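As a rough sketch of the difference (the payload values here are hypothetical, using FCM's HTTP v1 message shape): a content-bearing push exposes the message text to the middleman, while a data-only push merely wakes the app, which then fetches the end-to-end-encrypted message itself.

```python
# Sketch of two FCM HTTP v1 message payloads (hypothetical values).
# The middleman (FCM) can see, and potentially retain, whatever is
# placed in the push itself.

# Content-bearing push: title and body travel through (and can be
# stored by) the notification infrastructure.
content_push = {
    "message": {
        "token": "DEVICE_TOKEN",
        "notification": {"title": "Alice", "body": "See you at 7"},
    }
}

# Data-only push: just a wake-up signal; the app retrieves the real
# message over its own encrypted channel, so FCM never sees content.
data_only_push = {
    "message": {
        "token": "DEVICE_TOKEN",
        "data": {"type": "wakeup"},
    }
}

print("notification" in content_push["message"])    # True
print("notification" in data_only_push["message"])  # False
```

This is roughly what the Signal option in question amounts to: opting into the second shape so the push carries no readable content.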
I bet if you studied the rate of "mind changing" over time since phones got smarter, you'd see a correlation. The same goes for the ability/willingness to commit to anything or anyone.
An example of how European "tech" reacts to threats: two European open source projects in litigation with each other, one of which engineered a license to prevent an obvious feature of open source software (forking), while the other throws shade at the first over opacity and geopolitical control.
> I was made redundant recently "due to AI" (questionable) and it feels like my works in some way contributed to my redundancy where my works contributed to the profits made by these AI megacorps while I am left a victim.
I think anyone here can understand and even share that feeling. And I agree with your "questionable": it's just the lame HR excuse du jour.
My 2c:
- AI megacorps aren't the only ones gaining; we all are. The leverage you have to build and ship today is higher than it was five years ago.
- It feels like megacorps own the keys right now, but that's temporary. In a world of autonomous agents and open-weight models, control is decentralized. As inference costs continue to drop, you don't need to be running on megacorp stacks. Millions (billions?) of agents finding and sharing among themselves: how will megacorps stop that?
- I see the advent of LLMs like the spread of literacy. Scribes once held a monopoly on the written word, which felt like a "loss" to them when reading/writing became universal. But today, language belongs to everyone. We aren't losing code; we are making the ability to code a universal human "literacy."
> AI megacorps aren't the only ones gaining, we all are.
No, no we are not.
> the leverage you have to build and ship today is higher than it was five years ago.
I don’t want more “leverage to build and ship”, I want to live in a world where people aren’t so disconnected from reality and so lonely they have romantic relationships with a chat window; where they don’t turn off their brains and accept any wrong information because it comes from a machine; where propaganda, mass manipulation, and surveillance aren’t at the ready hands of any two-bit despot; where people aren’t so myopic that they only look at their own belly button and use case for a tool that they are incapable of recognising all the societal harms around them.
> We aren't losing code; we are making the ability to code a universal human "literacy."
No, no we are not. What we are doing, however, is making increasingly bad comparisons.
Literacy implies understanding. To be able to read and write, you need to understand how to do both. LLMs just spit out text which you don't need to understand at all, and increasingly people don't even care to try to understand it. LLM-generated code in the hands of someone who doesn't read it is the opposite of literacy.
>I don’t want more “leverage to build and ship”, I want to live in a world where people aren’t so disconnected from reality and so lonely they have romantic relationships with a chat window; where they don’t turn off their brains and accept any wrong information because it comes from a machine; where propaganda, mass manipulation, and surveillance aren’t at the ready hands of any two-bit despot; where people aren’t so myopic that they only look at their own belly button and use case for a tool that they are incapable of recognising all the societal harms around them.
Preach. Every time I read people doing this weird LARP on this website of "you have so much more leverage, great time to be a founder" I want to put my head through the drywall.
Agree. Do we not understand how LLMs work? Some of us understand better than others, just like literacy is also not guaranteed just because you learned the alphabet.
Accepting the output of an LLM is really not materially different from accepting books, newspapers, opinion makers, or academics at face value. Maybe different only in speed of access?
> LLM generated code in the hands of someone who doesn’t read it is the opposite of literacy.
"A popsi article title or paper abstract/conclusion in the mind of someone who doesn't read is the opposite of literacy."
I’m not sure I understand your point. Mind clarifying? It seems you might be trying to contradict what I said but are in fact only adding to it.
> just like literacy is also not guaranteed just because you learned the alphabet.
I didn’t claim learning the alphabet equals literacy, you did. Your argument comes down to “you’re not literate if you’re not literate”. Which, yes, of course.
> Accepting the output of an LLM is really materially not different from (…)
Multiple things can be true at once. If someone says "angry stupid people with machine guns are dangerous", responding "angry stupid people with explosives are dangerous" does nothing to the original point. The angry stupid people are part of the problem, sure, but so are the tools enabling them to be dangerous. If poison is being dumped in a river and slowly killing the ecosystem, and then someone else comes along wanting to dump even more of a different poison, the correct response is to stop both, not shrug and stop neither.
What the bloody heck are you on about? That first quote is completely fabricated. I’d also like to live in a world where people don’t argue in bad faith, but since I have no pretence that will happen, at least I’m thankful when bad faith actors do such a poor job of concealing it.
But LLMs can also explain code, in fact they're fantastic at that. They can also be used to build anti-censorship, surveillance-avoidance and fact-checking tools. We are all empowered by them, it's just up to us to employ them so as to nudge society towards where we'd like it to go. Instead of giving up prematurely.
I’m not sure if the analogy is yours, but the scribe note really struck a chord with me.
I’m not a professionally trained SWE (I’m a scientist who does engineering work). LLMs have really accelerated my ability to build, ideate, and understand systems in a way that I could only loosely gain from sometimes grumpy but mostly kind senior engineers in overcrowded chat rooms.
The legality of all of this is dubious, though, per the parent. I GPL licensed my FOSS scientific software because I wanted it to help advance biomedical research. Not because I wanted it to help a big corp get rich.
But then again, maybe code like mine is what is holding these models back lol.
Sharing for advancing humanity / benefit of society, and megacorps getting rich off it, is not either-or. On the contrary, megacorps are in part how the benefit to society materializes. After all, it's megacorps that make and distribute the equipment and the software stacks I am using to write code on, that you are using to do your research on, etc.
I find the whole line of thinking, "I won't share my stuff because then a megacorp may use it without paying me the fractional picobuck I'm entitled to", to be a strong case of the Dog in the Manger mindset. And I held that view even before LLMs exploded, back when people were wringing their hands about Elasticsearch being used by Amazon, around 2021 or so.
Sharing is sharing. One can't say "oh I'm sharing this for anyone to benefit", and then upon seeing someone using it to make money, say "oh but not like that!!". Or rather, one can say, but then they're just lying about having shared the thing. "OSS but not for megacorps/aicorps" is just proprietary software. Which is perfectly fine thing to work on; what's not fine is lying about it being open.
> "OSS but not for megacorps/aicorps" is just proprietary software
Why? It's not like it's binary. It could well be open source but unusable by a company over X size. I'm not a lawyer, but why couldn't a license have that clause? I would still class that as open, for some definition of open.
LLMs are one thing, but when you bring up the ES-on-AWS example, as outlined in the article, the problem is not the software being used; it's it being _made proprietary_. It's about free and open software remaining free and open, especially to the end user.
Basically, the selling point of LLMs is that you no longer need to think about problems, you can skip directly to results. Anything that you have to think about while using them today is somewhere on the product roadmap, or will be.
> It feels like megacorps own the keys right now, but that's temporary.
Remains to be seen. Hardware prices are increasing. Manufacturers are abandoning the consumer sector to serve all-consuming AI demand. Not to mention the constant attempts to lock down computers so that we don't really own them.
What does the future hold for us? Unknown. It's not looking too good though. What good is hardware if we're priced out? What good are open models and free software if we're unable to run them?
The trend I see is older hardware being able to run models that are increasingly miniaturized.
The real (but not new) danger is us giving in to the idea that we can't do it ourselves, or that we must use the megacorps' latest shiny toy to "succeed".
Welcome to late capitalism. Please enjoy the ride while people try to tell you that LLMs are the only future (you have no future), while SOTA models can barely do anything consistently on their own outside of carefully designed benchmarks, and have to be made available at a loss because otherwise no one would use them.
On your right you can see the CEOs justifying longer hours and lower pay because AI will replace your job one day anyway, then asking why you aren't 10x more productive with Claude. On your left you can see the AI companies deciding who will be in charge of the fascist regime once they no longer need workers other than for the coal mines. They reckon they can get 120 good years before the biosphere is uninhabitable, which worries them because what if the next LLM figures out immortality for them; maybe they'll have to close the coal mines too after all.
Can't say I disagree with you. I do recognize that we seem to be heading towards a technofeudalist cyberpunk dystopia. The only way out for humanity is to automate everything to the point we transcend capitalism into a post-scarcity society where the very concept of an economy has been abolished. If we can't do that, we'll become soylent.
>But today, language belongs to everyone. We aren't losing code; we are making the ability to code a universal human "literacy."
Literacy requires training, though. It's not the same thing to be able to read a text aloud, to understand what it's about, to have a critical toolbox for analyzing texts, and to have the habit of situating what you read within a broader inferred context.
Just throwing LLMs into people's hands won't automatically make them able to use them in a relevant manner, as far as global social benefits are concerned.
The literacy issue is actually quite independent of whether the LLMs used are distributed or centralised.
> We aren't losing code; we are making the ability to code a universal human "literacy."
LLMs making the ability to code a universal human "literacy" is like saying Markov chains made the ability to write a universal human "literacy".
Cheap books took hundreds of years to become accessible. Already we have models that run on "legacy" hardware. Just as large-scale publishing never disappeared, large-scale models and infra won't either. But does that mean distributing simple pen and paper was pointless?
Look it up: victims of the Algerian war, victims of Iran's own crackdown in the 1980s, victims of Argentina's 1980s dictatorship... always a nicely rounded 30K.
I don't question the scale of the atrocities, but rather the true motivations of those who perpetuate these propagandistic numbers.