
Elon Musk and Colossus have generated 3000 jobs in Memphis, according to Tesla propaganda (I mean "propaganda" in the original, neutral sense, of course: https://en.wikipedia.org/wiki/Propaganda).

But, locals don't love that because of the environmental concerns.

https://mashable.com/article/naacp-data-centers

I, for one, would not want to live within 100 miles of these data centers. But, people that live there already are not being given the choice.

And, I imagine not many of the people that live there are being offered one of those 3000 jobs.


I find it disingenuous to talk about construction jobs as being "generated". Construction jobs are contracts. Building a datacenter might put 3000 people under contract temporarily, but it doesn't "generate" jobs. Once the contract is complete, those contractors are no longer paid by the builder.

The word "generated" is used to make it sound like the project opened up 3000 new permanent jobs for people. Those contractors were employed before the contract and will be employed on another contract at some later point. There's no net gain of jobs in the long run. The contractors won't even necessarily be local. The builder isn't going to call up Bob's AC repair from Collierville to do the specialized datacenter HVAC. They'll fly in a company specialized in that task who will fly home at the end of the contract.

The companies scrambling to build datacenters take advantage of that linguistic ambiguity and then the local politicians end up doing the same. They give these companies sweetheart tax/zoning incentives, proclaim contractors as "generated jobs", and then leave the locals with all of the negative externalities and none of the revenue.


Yep, a local Walmart probably creates more long-term (entry-level) jobs.

But cash strapped councils take what they can get.


I agree 100%.

Lina Khan is now in Mamdani's cabinet. Maybe NY state and California can team up on this.

OpenAI, glaring at Larry Ellison: "hold my beer."

What is the benefit of this over lima, for example?

Lima can do a lot of what shuru does if you set it up for it. The difference is mostly in defaults and how much you have to configure upfront. With shuru you get ephemeral VMs, no networking, and a clean rootfs on every run without touching a config file: "shuru run" and you're in. Checkpoints and branching are built into the CLI rather than being an experimental feature you have to figure out. Lima is a much bigger and more mature project, though. Shuru is something I'm building partly to learn and partly because I wanted something with saner defaults for this specific use case.

Thanks for doing this. I had basically the same experience with Lima. It is very nice but the defaults are not what I want, and I don't like having to wonder whether I turned off the stuff that I don't want enabled. Better that everything is disabled by default and I selectively turn things on (like networking) as I need them.

I'm gonna give shuru a try. My main concern is the Alpine base (seemingly the only option?): I may not be able to easily pull in the dependencies for the projects I'm working on, but I'll see how it goes.


Glad to hear it; that's exactly the thinking behind it. Alpine is the only option right now, yeah. What kind of dependencies are you running into issues with? That would help me figure out what to prioritize next.

I haven't yet - just generally I have found it a bit of a hassle to figure out which packages to install whenever I use a different distro. I'll let you know how it goes!

Disclaimer: I haven't tried this yet.

I would want the equivalent of the trixie-slim Docker image (Debian 13, no documentation). It's ~46 MB as a Docker image instead of Alpine's ~4 MB, but gives a reasonably familiar interface.

(This is largely based on some odd experiences with Elixir on Alpine, which is where I am doing most of my work these days.)


I love this.

Another way of saying it: the problem we should be focused on is not how smart the AI is getting. The problem we should be focused on is how dumb people are getting (or have been for all of eternity) and how they will facilitate and block their own chance of survival.

That seems uniquely human, but I'm not an ethnobiologist.

A corollary to that is that the only real chance for survival is that a plurality of humans need to have a baseline of understanding of these threats, or else the dumb majority will enable the entire eradication of humans.

Seems like a variation of Darwinian selection, but I always thought that applied to individual organisms. This applies to the entirety of humanity.


> The problem we should be focused on is how dumb people are getting (or have been for all of eternity)

Over the arc of time, I’m not sure that an accurate characterization is that humans have been getting dumber and dumber. If that were true, we must have been super geniuses 3000 years ago!

I think what is true is that the human condition and age old questions are still with us and we’re still on the path to trying to figure out ourselves and the cosmos.


Totally anecdotal but I think phones have made us less present, or said another way, less capable of using our brains effectively. It isn't exactly dumb but it feels very close.

I definitely think we are smarter if you go by IQ, but are we less reactive and less tribal? I'm not so sure.


There's quite a lot of research into what our increasing reliance on technology is doing to our brains.

Here is one paper: https://www.nature.com/articles/s41598-020-62877-0

"Although the longitudinal sample was small, we observed an important effect of GPS use over time, whereby greater GPS use since initial testing was associated with a steeper decline in hippocampal-dependent spatial memory. Importantly, we found that those who used GPS more did not do so because they felt they had a poor sense of direction, suggesting that extensive GPS use led to a decline in spatial memory rather than the other way around."


Modern dumb people have more ability to affect things. Modern technology, equal rights, and voting rights give them access to more control than they've ever had.

That's my theory, anyway.


The majority of us are meme-copying automatons who are easily pwned by LLMs. Few of us have learned to exercise critical thinking and to reason from first assumptions - the kind of thing we are expected to learn in school, and also the kind of thing that still separates us from machines. A charitable view is that there is a spectrum here. Now, with AI and social media, there will be an acceleration of this movement toward the stupid end of the spectrum.

> That seems uniquely human, but I'm not an ethnobiologist.

In my opinion, this is a uniquely human thing because we're smart enough to develop technologies with planet-level impact, but we aren't smart enough to use them well. Other animals are less intelligent, but for this very reason, they lack the ability to do self-harm on the same scale as we can.


Isn't defining what nobody should be allowed to do the kind of problem that laws (as in legislation) are for? Not that I expect those laws to arrive in time.

I remember seeing Kevin Kelly (founder of Wired) speak about 15 years ago when he was touring to promote "What Technology Wants."

He was talking about autonomous driving cars. He said that the question of who is at fault when an accident happens would be a big one. Would it be the owner of the car? Or, the developer of the software in the car?

Who is at fault here? Our legal system may not be prepared to handle this.

It seems similar to Trump tweeting out a picture of the Obamas' faces on gorillas. Was it his "staffer?" Is TruthSocial at fault because it doesn't have the "robust" (lol) automatic fact checking that Twitter does?

If so, why doesn't his "staffer" get credit for the covfefe meme? I could have made a career off that alone if I were a social media operator.

He also mentioned that we will probably ignore the hundreds of thousands of deaths and injuries every year from human-caused traffic accidents, and then get really upset when one self-driving car does something faulty, even though the incidence rate will likely be orders of magnitude smaller. Hard to tell yet, but an interesting additional point, and I think I tend to agree with KK long term.


Every few months I post this link again about how ALL comics (well, at least Japanese ones...) start from Hokusai Manga and this exhibit tells that story beautifully.

https://hokusai.anotherstory.world/en/

It's terrific, and touring for several years all over the world.


I got to visit the exhibit when it was in Boston's MFA in 2014 or so, and it was really awe-inspiring. They had wood blocks from period prints and it's just amazing what artistry was able to come from that. I got a print of Red Fuji while I was there, and it still hangs on my wall.

I think a better bet is to ask on reddit.

https://www.reddit.com/r/LocalLLM/

Every time I ask the same thing here, people point me there.


These models are so powerful.

It's totally possible to build entire software products in the fraction of the time it took before.

But, reading the comments here, the behaviors from one point version to another (not even major versions, mind you) seem very divergent.

It feels like we are now able to manage incredibly smart engineers for a month at the price of a good sushi dinner.

But it also feels like you have to be diligent about adopting new models (even same family and just point version updates) because they operate totally differently regardless of your prompt and agent files.

Imagine managing a team of software developers where every month it was an entirely new team with radically different personalities, career experiences and guiding principles. It would be chaos.

I suspect that older models will be deprecated quickly and unexpectedly, or, worse yet, will be swapped out with subtly different behavioral characteristics without notice. It'll be quicksand.


I had an interesting experience recently where I ran Opus 4.6 against a problem that o4-mini had previously convinced me wasn't tractable... and Opus 4.6 found me a great solution. https://github.com/simonw/sqlite-chronicle/issues/20

This inspired me to point the latest models at a bunch of my older projects, resulting in a flurry of fixes and unblocks.


From the project description here for your sqlite-chronicle project:

> Use triggers to track when rows in a SQLite table were updated or deleted

Just a note in case it's interesting to anyone: the SQLite-compatible Turso database has CDC, a changes table! https://turso.tech/blog/introducing-change-data-capture-in-t...
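As a rough sketch of what the trigger-based approach in that README line can look like in plain SQLite (the table and trigger names here are made up for illustration, not sqlite-chronicle's or Turso's actual schema):

```python
import sqlite3

# In-memory database with a hypothetical "items" table and an audit
# table populated entirely by triggers, no application code required.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE item_changes (
    item_id    INTEGER,
    action     TEXT,
    changed_at TEXT DEFAULT (datetime('now'))
);
CREATE TRIGGER items_update AFTER UPDATE ON items BEGIN
    INSERT INTO item_changes (item_id, action) VALUES (new.id, 'update');
END;
CREATE TRIGGER items_delete AFTER DELETE ON items BEGIN
    INSERT INTO item_changes (item_id, action) VALUES (old.id, 'delete');
END;
""")

conn.execute("INSERT INTO items (name) VALUES ('widget')")
conn.execute("UPDATE items SET name = 'gadget' WHERE id = 1")
conn.execute("DELETE FROM items WHERE id = 1")
print(conn.execute("SELECT item_id, action FROM item_changes").fetchall())
# → [(1, 'update'), (1, 'delete')]
```

A built-in changes table like Turso's CDC gives you this kind of log without hand-writing the triggers yourself.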


I have a codebase (personal project) and every time there is a new Claude Opus model I get it to do a full code review. Never had any breakages in the last couple of model updates. Worried one day it'll just generate a binary and delete all the code.

No version control?

I was being facetious; I mean one day models might skip the middleman of code and compilation and take your specs straight to an ultra-efficient binary.

Musk was saying that recently but I don't see it being efficient or worthwhile to do this. I could be proven brutally wrong, but code is language; executables aren't. There's also no real reason to bother with this when we have quick-compiling languages.

More realistically, I could see particular languages and frameworks proving out to be more well-designed and apt for AI code creation; for instance, I was always too lazy to use a strongly-typed language, preferring Ruby for the joy of writing in it (obsessing about types is for a particular kind of nerd that I've never wanted to be). But now with AI, everything's better with strong types in the loop, since reasoning about everything is arguably easier and the compiler provides stronger guarantees about what's happening. Similarly, we could see other linguistic constructs come to the forefront because of what they allow when the cost of implementation drops to zero.


You can map tokens to CPU instructions and train a model on that, that's what they do for input images I think.

I think the main limitation of the current models is not that CPU instructions can't be tokens (they can be, via assembly); it's that the models are causal: they would need to generate a binary entirely from start to finish, sequentially.

If we've learned anything over the last 50 years of programming, it's that that's hard, and it's why we invented programming languages. Why would it be simpler to just generate the machine code? Sure, maybe a direct LLM-to-application path can exist, but my money is on there being a whole toolchain in the middle, and it will probably be the same old toolchain we're using currently: an OS, probably Linux.

Isn't it more common that stuff builds on the existing infra instead of a super duper revolution that doesn't use the previous tech stack? It's much easier to add onto rather than start from scratch.


Those CPU instructions still need to be making calls out to things, though. Hallucinated source code will reveal its flaws through linters, compiler errors, test suites. A hallucinated binary will not reveal its flaws until it segfaults.

Programs that pass linters, compilers, and test suites can still segfault. A good test harness that tests the binary comprehensively can limit this. The model could be trained to have patterns of efficient assembly it uses rather than source code.
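For what it's worth, black-box testing of a binary along those lines is easy to sketch; the function name and the use of `cat` as a stand-in "generated binary" here are my own illustration:

```python
import subprocess

def check_binary(cmd, stdin_data, expected_stdout):
    """Run a binary as a black box: feed it stdin, compare its stdout.
    No source, linter, or compiler is involved; a crash (segfault,
    nonzero exit) simply shows up as a failing check."""
    result = subprocess.run(
        cmd, input=stdin_data, capture_output=True, text=True, timeout=10
    )
    if result.returncode != 0:
        return False
    return result.stdout == expected_stdout

# `cat` stands in for a model-generated binary that should echo its input.
print(check_binary(["cat"], "hello\n", "hello\n"))  # → True
```

The catch still holds, though: a harness like this only exercises the behaviors you thought to test for.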

I’ve thought an interesting outcome might be that there isn’t even a binary generated. It’s just user input -> machine-code LLM -> CPU. The only binary would be the LLM itself, essentially mimicking software live. The paper “Diffusion as a Model of Environment Dreams” (DIAMOND) is close to what I’m thinking: they have a diffusion model generate frames of a game, updating with user input, but there’s no actual “game” code, just the model.

https://diamond-wm.github.io/

Like you’d have a machine-code LLM that behaves like software, but instead of a static binary being executed it’s just the LLM itself “executing” on inputs and previous state. I’m horrible at communicating this idea, but hopefully the gist is there.


Exactly this; it serves little purpose.

You're going to need to spend crazy compute just compiling and obtaining training data. And until it's one-shotting absolutely everything, you're going to be asking it what it's doing, and then it'll be "uncompiling" its own code; I can't see that being more efficient than compiling in the usual direction.

I suspect the actual benefit would be more in virtualized interfaces such as Genie 3, skipping this step altogether: it's just manipulating pixels, and the pixels change based on the underlying statistical model's output rather than old-school computation.


This may seem obvious, but many people overlook it. The effect is especially clear when using an AI music model. For example, in Suno AI you can remaster an older AI-generated track with a newer model. I do this with all my songs whenever a new model is released. It makes it super easy to see the improvements that were made to the models over time.

I continue to get great value out of having claude and codex bound together in a loop: https://github.com/pjlsergeant/moarcode

They are one, the ring and the dark lord

And there was many a chuckle at the Geminicide

I keep giving the top Anthropic, Google and OpenAI models problems.

They come up with passable solutions and are good for getting juices flowing and giving you a start on a codebase, but they are far from building "entire software products" unless you really don't care about quality and attention to detail.


That is my experience too. I don't know what others are building but the more novel the task is the worse these models perform.

> I don't know what others are building

Don't ask a man about his salary, a woman about her age or an AI evangelist about results from their 1000x productivity boosted workflow.


Yeah I keep maintaining a specific app I built with gpt 5.1 codex max with that exact model because it continues to work for the requests I send it, and attempts with other models even 5.2 or 5.3 codex seemed to have odd results. If I were superstitious I would say it’s almost like the model that wrote the code likes to work on the code better. Perhaps there’s something about the structure it created though that it finds easier to understand…

> It feels like we are now able to manage incredibly smart engineers for a month at the price of a good sushi dinner.

In my experience it’s more like idiot savant engineers. Still remarkable.


It's like getting access to an amazing engineer, but you get a new individual engineer with each prompt, not one consistent mind.

Sushi dinner? What are you building with AI, a calculator?

I have long suspected that a large part of people's distaste for given models comes from their comfort with their daily driver.

Which I guess feeds back to prompting still being critical for getting the most out of a model (outside of subjective stylistic traits the models have in their outputs).


You still need a human (working at human speed) to review every generated line, if it’s not a throwaway app or some demo to impress investors.

"These models are so powerful."

Careful.

Gemini simply, as of 3.0, isn't in the same class for work.

We'll see in a week or two if it really is any good.

Bravo to those who are willing to give up their time to test for Google to see if the model is really there.

(history says it won't be. Ant and OAI really are the only two in this race ATM).


The article mentioned notarizing and stapling as problems with prior frameworks. What's the story here? If you don't use xcode as your ide (and I don't see that this project management is happening inside xcode), Apple makes that stuff really hard. And windows is easier but still hard to automate in CI. If this framework offers better solutions I'm all ears.

Most use cases are supported out of the box. You just have to set a few env vars

and then build with "notarize: true" in your config... and it pretty much just works.

I've signed and notarized things with Electrobun and it's perfectly fine. It also gives you escape hatches in case you're doing something more complicated.

EDIT: in case I can help you with anything there, feel free to DM me! Or join the Electrobun Discord; I'm very active there. (I'm not affiliated with EB, just know the struggle of Apple's notarization system.)


Thanks, that's great! Very generous.
