This might end up being the forcing function (quoting myself from another reply in this discussion):
> It can't be that replacing 20 C/C++ shared objects with 20 Rust shared objects results in 20 copies of the Rust standard library and other dependencies that those Rust libraries pull in. But, today, that is what happens. For some situations, this is too much of a memory usage regression to be tolerable.
If memory was cheap, then maybe you could say, "who cares".
Can you even make the standard library dynamically linked in the C way??
In C, a function definition usually corresponds 1-to-1 to a function in object code. In Rust, plenty of things in the stdlib are generic functions that effectively get a separate implementation for each type you use them with.
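A tiny sketch of why that is (monomorphization): the generic function below exists once in source, but the compiler emits a separate machine-code copy for each concrete type it's used with, so there's no single symbol a shared object could export for "the" function.

```rust
// One generic definition in source...
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut max = items[0];
    for &item in &items[1..] {
        if item > max {
            max = item;
        }
    }
    max
}

fn main() {
    // ...but two distinct functions in the compiled binary:
    let a = largest(&[1, 5, 3]);    // instantiates largest::<i32>
    let b = largest(&[1.0, 0.5]);   // instantiates largest::<f64>
    println!("{} {}", a, b);        // prints "5 1"
}
```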
If there's a library that defines Foo but doesn't use Vec<Foo>, and there are 3 other libraries in your program that do use that type, where should the Vec functions specialized for Foo reside? How do languages like Swift (which is notoriously dynamically-linked) solve this?
You can have an intermediate dynamic object that just exports Vec<Foo> specialized functions, and the three consumers that need it just link to that object. If the common need for Vec<Foo> is foreseeable by the dynamic object that provides Foo, it can export the Vec<Foo> functions itself.
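A rough sketch of what such a shim object could look like. The crate and function names here are invented for illustration, and this glosses over Rust's unstable ABI, which in practice means every object involved would have to be built with the exact same compiler version:

```rust
// Hypothetical "vec-foo-shims" crate, built as a shared object
// (e.g. crate-type = ["dylib"] in Cargo.toml).

pub struct Foo(pub u32);

// Non-generic wrappers pin the Vec<Foo> monomorphizations into this
// one shared object; consumers call the wrappers instead of
// instantiating their own copies of the Vec machinery for Foo.
pub fn vec_foo_new() -> Vec<Foo> {
    Vec::new()
}

pub fn vec_foo_push(v: &mut Vec<Foo>, item: Foo) {
    v.push(item);
}

pub fn vec_foo_len(v: &Vec<Foo>) -> usize {
    v.len()
}
```

The obvious downside is that the shim can only export specializations someone foresaw; any Vec<Foo> method it doesn't wrap still gets monomorphized into each consumer.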
Your apt update would still be huge though. When the dependency changes (eg. a security update) you’d be downloading rebuilds of 20 apps. For the update of a key library, you’d be downloading your entire distribution again. Every time.
Oh, well yeah, statically linked binaries have that downside. I guess I don't think that's a big deal, but I could maybe imagine on some devices that are heavily constrained that it could be? IDK. Compression is insanely effective.
You are forgetting about the elephant in the room: if every bug requires a rebuild of everything downstream, it's not only a question of bandwidth constraints, it's also a question of SSD write cycles - you're effectively wearing out someone's drive faster. And btrfs actually makes this worse: instead of one copy-on-write copy of the library, you now have 2n copies of the library inside 2 copies of different apps. Reverting an update will then cost you even more writes. It's just waste for no apparent reason, when sharing would mean less memory and less disk space.
"Compression is insanely effective" - and what about energy? Compression increases CPU use, and it makes everything slower - slower than plain deduplication. Also, your justification for using technology that's worse for the user is that the user can mitigate it in other ways? This strikes me as the same logic as "we don't need to optimize our program/game, users will just buy better hardware" - just shifting the cost onto the user. That's not a valid solution, it's downplaying the argument.
If Rust and static linking were to become much more popular, Linux distros could adopt some rsync/zsync like binary diff protocol for updates instead of pulling entire packages from scratch.
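The core idea behind a zsync-style update can be sketched as follows. This uses fixed-size blocks for brevity (real zsync uses rolling checksums so matching blocks can be found at any offset), and it's only the "which blocks changed" half of the protocol:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const BLOCK: usize = 4; // toy block size; real tools use KB-sized blocks

fn block_hash(block: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    block.hash(&mut h);
    h.finish()
}

/// Indices of blocks in `new` whose hashes differ from `old` -
/// i.e. the only blocks an updater would need to download.
fn changed_blocks(old: &[u8], new: &[u8]) -> Vec<usize> {
    let old_hashes: Vec<u64> = old.chunks(BLOCK).map(block_hash).collect();
    new.chunks(BLOCK)
        .enumerate()
        .filter(|(i, b)| old_hashes.get(*i) != Some(&block_hash(b)))
        .map(|(i, _)| i)
        .collect()
}

fn main() {
    let old = b"aaaabbbbccccdddd";
    let mut new = old.to_vec();
    new[5] = b'X'; // a one-byte "security patch" landing in block 1
    // Only the changed block gets transferred, not the whole package.
    println!("{:?}", changed_blocks(old, &new)); // prints [1]
}
```

For a rebuilt statically linked binary this helps exactly to the extent that the rebuild leaves most blocks byte-identical, which compilers don't guarantee.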
Static linking used to be popular, as it was the only way of linking in most computer systems, outside expensive hardware like Xerox workstations, Lisp machines, ETHZ, or what have you.
Some of the very first consumer hardware to support dynamic linking was the Amiga, with its Libraries and DataTypes.
We moved away from building full-blown OSes with static linking, with the exception of embedded deployments and firmware, for many reasons.
Even then, they would still need to rebuild massive amounts of packages on updates. That is nice in theory, but look at the number of bugs reported in Debian because upstream projects fail to rebuild as expected. "I don't have the exact micro version of this dependency I'm expecting" is one common reason, but there are many others. It's a pretty regular occurrence, and would therefore be burdensome to distro maintainers.
NixOS "suffers" from this. It's really not that bad if you have solid bandwidth. For me it's more than worth the trade off. With a solid connection a major upgrade is still just a couple minutes.
I think you misunderstand my point. Nix basically forces dynamic linking to be more like static linking. So changing a low level library causes ~everything to redownload.
> It can't be that replacing 20 C/C++ shared objects with 20 Rust shared objects results in 20 copies of the Rust standard library and other dependencies that those Rust libraries pull in. But, today, that is what happens. For some situations, this is too much of a memory usage regression to be tolerable.
> If memory was cheap, then maybe you could say, "who cares".
Unfortunately memory isn't cheap these days