Hacker News | cpuguy83's comments

Homebrew would like a word.

Homebrew wouldn't support Haiku anyway.

Mostly relevant for folks on macOS, and I skip it when using a Mac anyway, preferring the UNIX and SDK tools in the box, so it's kind of debatable.


There is also https://microvm-nix.github.io/microvm.nix/ if you want increased isolation.

I can recommend MicroVM.nix, since it allows for multiple VM runtimes like QEMU, Firecracker, etc.

There's also nixos-shell for ad-hoc virtual machines: https://github.com/mic92/nixos-shell
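For context, a minimal microvm.nix guest defined on the host looks roughly like this (a sketch; attribute names follow the microvm.nix docs, and the hypervisor choice and sizes are placeholder assumptions):

```nix
# Sketch: a microvm.nix guest declared inside a NixOS host config.
microvm.vms.demo = {
  config = {
    microvm = {
      hypervisor = "qemu";  # could also be "firecracker", "cloud-hypervisor", ...
      mem = 512;            # MiB
      vcpu = 1;
    };
    services.openssh.enable = true;
  };
};
```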


Can you do those ad-hoc though? I was looking into this too. I feel like it requires a system config change and rebuild, and then container start + machinectl login to actually get a shell.

That's definitely what I want... most of the time.


Yes, NixOS containers can be run in:

* declarative mode, where your guest config is defined within your host config, or

* imperative mode, where your guest NixOS config is defined in a separate file. You can choose to reuse config between host and guest config files, of course.

It sounds like you want imperative containers. Here are the docs: https://nixos.org/manual/nixos/stable/#sec-imperative-contai...
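The imperative workflow is a few commands on the host (a sketch from the NixOS manual; the container name and inline config are placeholders):

```shell
# Create a container from an ad-hoc config, start it, and get a shell.
sudo nixos-container create demo --config 'services.openssh.enable = true;'
sudo nixos-container start demo
sudo nixos-container root-login demo
```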


Oh I totally missed that!

This attack was not mitigated by hash pinning. The setup-trivy action installs the latest version of trivy unless you specify a version.

Oh, I was referring to `aquasecurity/trivy-action`, which had its entrypoint replaced with a malicious one for the affected tags. Pinned commits were not affected.
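For illustration, pinning both the action and the tool version looks something like this (a sketch; the commit SHA and version are placeholders, and the `version` input name follows setup-trivy's README, so double-check it before relying on this):

```yaml
# Pin the action to a full commit SHA and pin the tool version explicitly.
- uses: aquasecurity/setup-trivy@0000000000000000000000000000000000000000  # placeholder SHA
  with:
    version: v0.58.0  # hypothetical; without this, the latest release is installed
```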

Give https://github.com/project-dalec/dalec a look. It is more declarative, with explicit abstractions for packages, caching, language-level integrations, hermetic builds, source packages, system packages, and minimal containers.

It's a BuildKit frontend, so you still use `docker build`.


No it doesn't. If the content of a URL changes, then the only way to have reproducibility is caching. You tell Nix the content hash is some value and it looks up that value in the Nix store. Note that it will match anything with that content hash, so it is absolutely possible to tell it the wrong hash.


Not having a required input available, say when you try to reproduce a previous build of a package, is a separate issue from an input silently changing when you go to rebuild it. No build system can ensure a link stays up, only that what's fetched hasn't changed. The latter is what the hash in Nix is for: if it tries to fetch a file from a link and the hash doesn't match, the build fails.

Flakes, then, run in a pure evaluation mode, meaning you don't have access to stuff like the system triple, the current time, or env vars, and all fetching functions require a hash.
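A fixed-output fetch illustrates this (a sketch; the URL and hash are placeholders):

```nix
# If the file served at this URL ever changes, the hash check fails
# and the build aborts instead of silently using the new content.
src = fetchurl {
  url = "https://example.com/foo-1.0.tar.gz";
  hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";  # placeholder
};
```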


BuildKit has the same caching model; that's what I'm saying. It doesn't force you to give it digests the way Nix functions often do, but you can (and should).
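For example, BuildKit lets you pin both base images and remote files by digest (a sketch; the digests are placeholders, and `ADD --checksum` requires a recent Dockerfile syntax version):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine@sha256:0000000000000000000000000000000000000000000000000000000000000000
ADD --checksum=sha256:0000000000000000000000000000000000000000000000000000000000000000 \
    https://example.com/foo.tar.gz /tmp/foo.tar.gz
```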


Producing different outputs isn't the Dockerfile's fault. Dockerfile doesn't enforce reproducibility, but reproducibility can be achieved with it.

Nix isn't some magical thing that makes builds reproducible either. Nix simply pins build inputs and relies on caches. nixpkgs is entirely git-based, so you end up pinning the entire package tree.


I don't agree that the code is cheap. It doesn't require a pipeline of people to be trained and that is huge, but it's not cheap.

Tokens are expensive. We don't know what the actual cost is yet. We have startups, who aren't turning a profit, buying up all the capacity of the supply chain. There are so many impacts here that we don't yet have data on.


Writing code is cheaper than ever. Maintaining it is exactly the same as ever and it scales with the LOC.

Code is still a liability, but it's undeniable that going from thought to running code is very cheap today.


You completely ignored the post you're replying to.

To recap, the author disagrees that writing code is cheap, because we've collectively invested trillions of dollars and redirected entire supply chains into automating code generation. The externalities will be paid for generations to come by all of humanity; it's just not reflected in your Claude subscription.


GP is not totally ignoring the post they replied to: we have open models that are basically six months behind closed SOTA models, that we can run in the cloud, and we know exactly how much these cost to run.

The cat is out of the bag: compute shall keep getting cheaper, as it has for 60 years or so.

It's always been maintenance that's been the killer and GP is totally right about that.

And if we look at a company like Cloudflare, which basically didn't have a serious outage for five years and then had five serious outages in the six months since it drank the AI kool-aid, we kind of have a first data point on how amazing AI is from a maintenance point of view.

We all know we're generating more lines of underperforming, insecure, probably buggy code than ever before.

We're in for a wild ride.


> compute shall keep getting cheaper as it's always been since 60 years or something

Past success is not a strong indicator of future success.


Maintaining it is becoming more costly. The increasing burden of review on FOSS maintainers is one example. AWS going down because an agent decided to re-write a piece of critical infrastructure is another. We are rapidly creating new kinds of liability.


This burden of review will go down as FOSS maintainers involve AI more.


Unlikely. FOSS is mostly driven by zero-cost maintenance, but AI tools need money to burn. So only a few FOSS projects will receive sponsored tools, and some will definitely reject them for ideological reasons (for example, it could be considered a poison pill from a copyright perspective).


> We don't know what the actual cost is yet.

We kind of do? Local models (though not state of the art) set a floor on this.

Even if prices are subsidized now (they are), that doesn't mean they will be more expensive later; e.g., if there's some bubble deflation, then hardware, electricity, and talent could all get cheaper.


This used to be an option exposed in settings.


This just isn't true anymore (besides the green).


QEMU has a microvm machine profile that also boots in milliseconds.

There has also been tooling on Linux to run containers as microVMs, long before Apple containers were a thing.
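For reference, booting QEMU's lightweight microvm machine type looks roughly like this (a sketch; the kernel and rootfs paths are placeholders):

```shell
# Direct-kernel boot of a minimal guest using the microvm machine type,
# which drops legacy devices in favor of virtio-mmio.
qemu-system-x86_64 -M microvm \
  -m 512 \
  -kernel ./vmlinux -append "console=ttyS0 root=/dev/vda" \
  -drive id=root,file=./rootfs.img,format=raw,if=none \
  -device virtio-blk-device,drive=root \
  -nodefaults -no-user-config -serial stdio
```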


And yet Amazon spent a ton of time and money writing Firecracker from scratch for their workloads. Why is that?


Multiple reasons:

1. Firecracker still has a smaller, more deliberate surface area.

2. QEMU didn't have a microvm machine type at the time; Firecracker was the impetus for it.

