What it means that Ubuntu is using Rust (smallcultfollowing.com)
189 points by zdw 4 days ago | hide | past | favorite | 301 comments



Also, Ubuntu using a non-GPL licensed userland means they can pull all kinds of tricks to allow more TiVoization in the Linux ecosystem.

Combine this with what Amutable (systemd guys) are building, and you can have monolithic, closed source, non-user-modifiable Linux distributions or flavors.

Ubuntu and companies which embed Linux into their products will love this from a business perspective.

Consider: an end-to-end, signature-enabled, verified, attestable Linux environment with completely closed-source util-linux and userland packages, down to "ls" and "cd". Deliciously apocalyptic.

We're two stops away from this, and there is no shortage of momentum or funding to enable that future.


Sure, but util-linux and the BSDs won't suddenly cease to exist. If you don't like what Ubuntu is doing, just don't use it.

Upstream Debian has been much more stable for as long as Ubuntu has existed...


> Sure, but util-linux and the BSDs won't suddenly cease to exist. If you don't like what Ubuntu is doing, just don't use it.

And then websites and applications stop working if you're not using a verified, attested, locked-down OS and you're stuck with your nice free software system that will not do your online banking, let you chat with your friends, or access your company resources.


At that point I'll just move into the woods with a typewriter and chat with my friends via ham radio.

Edit: Also, why would some userspace components under a slightly-less-free license cause this to happen? If the powers-that-be want to shut you out of the internet, they can do it now; lots of proprietary software already exists.


> Also, why would some userspace components in a slightly-less-free license cause this to happen?

It won't, in itself, but it appears to be yet another little push down the slippery slope, which will probably end where it appears destined to end.


But again, the BSD userspace already has a permissive license. If the mustache-twirling villains want to lock down stuff, they can do it now. They don't need any push forward.

Yeah, but people don't really want to use the BSD userspace. A lot of the Linux stuff people want to build on assumes a GNU userland and it's not trivial to build a BSD/Linux that actually does relevant computer stuff.

But in places where that stuff isn't relevant, we already see a lot of locked-down devices like the Nintendo Switch and PlayStation based on BSD precisely because they can leverage free software but still lock it down. macOS with its BSD userland is also kind of like this -- the OS is getting gradually more locked down over time, but the frog boils slowly.

If you tighten the screws too hard and fast then people will scream and yell and maybe leave your business for a competitor -- even though it's technically feasible, that means you can't disallow access to banking websites for generic-browser-on-generic-OS now. But we are, brick by brick, building a foundation where that will seem inevitable.

The argument is basically that making it easier to lock down general purpose computing devices like desktop computers (by, for example, making a non-GPL drop-in replacement for GNU *utils) will eventually aid in making it happen. The powers that be will use tried-and-true arguments about security and think-of-the-kids etc to make it seem like running a mutable, untrusted OS is an unacceptable risk.


> that means you can't disallow access to banking websites for generic-browser-on-generic-OS now. But we are, brick by brick, building a foundation where that will seem inevitable.

If you have too much non-standard stuff going on in your browser or mobile device, this is already happening, to a degree. Not a hard block, but increasing difficulties.


People give away their freedoms all the time. Most people are walking around with Facebook and TikTok tracking their every move. They don't care.

Some Linux users aren't going to stop this sort of thing from happening. If Chase Bank wants to only allow macOS and Windows 11 computers to access their website, the 1% of their userbase that uses something else isn't going to move the needle, and 99% of their users won't care (or even notice).

If this was going to happen, it would have already happened. The pieces are all there already.


> People give away their freedoms all the time. Most people are walking around with Facebook and TikTok tracking their every move. They don't care.

This is absolutely true. I'm saying someone should care, because it does matter.

> Some Linux users aren't going to stop this sort of thing from happening. If Chase Bank wants to only allow macOS and Windows 11 computers to access their website, the 1% of their userbase that uses something else isn't going to move the needle, and 99% of their users won't care (or even notice).

For some businesses, losing 1% of your customers is actually a lot of customers and a lot of money, and all else being equal they would prefer to not lose them.

> If this was going to happen, it would have already happened. The pieces are all there already.

No, they really aren't. Again, it's perhaps technically feasible to flip the switch, but it doesn't make business sense yet.

How many people are doing online banking without running on a fully cryptographically verifiable/attestable OS? This means everyone not using a TPM, Secure Boot, etc. This means grandpa with an old Windows 10 machine or an old Mac that perhaps he should not still be using but he doesn't care, he just wants to pay his bills. I don't have numbers of course but I bet you this starts looking like a hell of a lot more than 1% of the userbase.

There are web APIs for this sort of thing in all major browsers but no one is really using them yet. But they exist for a reason, much like Windows 11 requires a TPM for a reason, and this tech will at some point be deployed for things like online banking. Of course it will.



> If this was going to happen, it would have already happened. The pieces are all there already.

Same things were said for:

    - Removal of DRM from music: Happened.
    - Age verification on the internet: Happening.
    - Locked-down personal devices: Happened.
    - Total surveillance in cities: Happened.
    - Not being able to buy, but only rent: Happened for many digital formats.
    - Internet activation of software: Happened.
    - Tracking individual persons in real time: Happened.
    - Browser attestation: Google is trying hard.
    - Attestation for internet banking: Reality in S. Korea.
etc. etc.

This resonates. The after-effects of age verification and the general exclusion of freedom-loving coders are going to leave me standing here in the tumbleweeds with my 90s Toyota and a solar-powered laptop, unregulated radio frequencies my only communication with the outside world.

It's like those movies coming true. I've already had casual user accounts frozen just for accessing them via VPN, or for some other inscrutable reason.


I'm with you and the only solace in this dystopia is the fact that I increasingly feel like I just don't care. I don't really like using computers anymore. I liked them when they represented freedom and creativity.

So fine, exclude me from all your platforms, there's nothing there for me. It's all bad content from bad people (or increasingly: not even people) running on bad software. I'm not giving up my freedom to partake in that, I'd rather just stop using your shit.

(But I would very much like to be able to pay my bills and buy my train tickets, so I'll play your game and have a smartphone. Fine. You win this round.)


Yeah. If it were a thing it would have happened by now. The friction to lock users down would be very bad for business.

I don't use Ubuntu anywhere, so there's no actions I need to take.

> Upstream debian has been much more stable for as long as Ubuntu has existed...

Well, I was using Debian before Ubuntu existed, and it was never unstable to begin with. I understand the value of more eyes looking at something and its advantages, but let's say Ubuntu has acted selfishly toward Debian in some cases. I personally took a side in one of those debates, even.

Yes, I follow debian-devel, and even led a Debian derivative distro for some time.


Meanwhile, the Canonical employee responsible for some aspects of APT has decided to introduce Rust code. Because of this, and just this, Debian dropped four entire architectures. https://lists.debian.org/debian-devel/2025/10/msg00285.html

>I plan to introduce hard Rust dependencies and Rust code into APT, no earlier than May 2026. This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem. ... If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port. It's important for the project as whole to be able to move forward and rely on modern tools and technologies and not be held back by trying to shoehorn modern software on retro computing devices.

If you think Canonical isn't going to lead Debian around by the nose on this you haven't been paying attention.


further down that thread

https://lists.debian.org/debian-devel/2025/10/msg00288.html

> Rust is already a hard requirement on all Debian release architectures and ports except for alpha, hppa, m68k, and sh4 (which do not provide sqv).

It seems to me that the APT change was just a nail in the coffin of these older architectures, which would have eventually been sunset anyway, due to sqv not being available. If you really want to run some kind of Linux on these very old machines, godspeed, but you can't expect them to be maintained forever by a project with its fingers in so many pies.


Yep. And nothing you've linked or pointed out changes the claim I made: that re: rust, Canonical employees are making the decisions, not Debian.

The thing with open source, and many industry standards like ISO and ECMA, is that who shows up gets to call the shots.

So when it isn't going in a direction we care about, maybe more people with a different mindset should join.

It is like complaining about who wins elections without bothering to cast a valid vote.


Well, it's not always true.

Look at how the proposal for making netplan the default network manager in Debian went. Not good, from Canonical's perspective.

Making /tmp behave the way the systemd guys want also didn't go according to plan. The behavior was modified somewhat because of the discussion.

Rust's influence doesn't come from Canonical per se, but from its promise to eradicate memory-related bugs. The initial hype was off the charts, but it's coming down, and the shortcomings are becoming obvious.

Canonical is trying to influence Debian, that's true, but its success is not a given.


The fact that Canonical has always been happy to ship software that they know full well shouldn't be shipped doesn't fill me with hope that this will work decently without causing massive issues for everyone (remember when they started using PulseAudio? In the end it was such a mess that the solution was to abandon it).

It was rough for a while, but my Debian machine still runs PulseAudio and it works pretty well. I agree that Ubuntu doesn't do enough testing before releasing stuff, but I am grateful that so many people are willing to grind themselves against the bugs before they hit more conservative distributions.

It's deprecated. I think most people have moved on to PipeWire.

Debian migrated everyone to PipeWire a while ago without people noticing, as they intended.

Pipewire is working great.


Remove it right now

> If you don't like what Ubuntu is doing, just don't use it.

They said this about systemd too, but look: LFS dropped non-systemd support.


So... Don't use LFS. Nothing is stopping you from using Linux with whatever init system or user space you want.

It used to be called GNU/Linux for a reason; Android/Linux is surely not a GPL userland, and there are others like it.

There is also a reason why all the GNU/Linux competition in the embedded space, including the Linux Foundation's own Zephyr, isn't GPL licensed.

People seem to forget Linux is only a kernel.


> People seem to forget Linux is only a kernel.

I certainly don't, and that's why I'm advocating that the userspace should stay GPL. The freedom has two pillars: kernel and userspace. If you mow either of them down, you lose everything.


There's nothing about using permissive licenses that reduces freedom. Even if someone makes a closed fork of some software down the line, the original will always be there and will still be just as free. Comparing permissive licensing to a loss of freedom is not a valid comparison.

> Even if someone makes a closed fork of some software down the line, the original will always be there and will still be just as free.

Like MinIO, Solaris, Elasticsearch, Hashicorp Suite and countless others. The versions before the license changes are healthy as a doornail. You're absolutely right.

Some of them were re-forked; some were not.

Also, sometimes that closed fork is the only viable option, making the hardware it's running on an expensive doornail. I also don't like that.

I remember using SDKs and software forked from open ones, with version numbers like "1.8.7-really1.9.0-internal-thishardwareonly-special-3.2.5-unlocked", which only run on a distro from 2006 when there's a full moon on the 29th of February and the digit sum of the date is divisible by 7 and 11 at the same time.

Can you patch this? I guess you can, but where's the source? I bet somebody deleted it by accident and it's not present anymore.

Permissive licenses don't take away the four freedoms, but add a fifth one: the ability to take the other four away. Without prior notice. This is what I don't like, personally.

In short, I don't like doornails which are not actual doornails. Permissive licenses enable that freedom.


Yet there are already distros like Chimera Linux and Alpine Linux.

That train is long gone, as folks would rather have business-friendly FOSS projects.


If we stop fighting for anything just because someone said it's long gone, we'd have nothing.

History has been changed by wars that some people said were impossible to win.

Nothing is set in stone. The world is changing more drastically than ever. Assuming that we can't change things, or that things will stay a certain way, is a funny fallacy at best.

Permanence is an illusion. The pendulum is on the move. It might be moving in a way I don't like, but it can't continue like that forever. I'm just doing what I feel right. I'd rather die trying than regretting that I didn't try.


Well, that is why I dislike Proton, but hey, games! Courtesy of Microsoft's ecosystem.

Folks would rather have folks-friendly FOSS projects :)

History probably says I'm being naive, but I feel like I don't hate this possibility (hear me out!).

Personally, I'd always choose a distribution with open-source userland packages and utils, but if closed-source alternatives exist and conform to the same specifications (i.e. we get "embrace" without the "extend and extinguish"), then I don't mind a company having closed-source tech, especially if it helps their business case, potentially boosting funding for open-source Linux projects.

Maybe that's all naive; I guess we'll find out if Ubuntu really does go for more and more closed-source options.


I understand the optimism, but after being burned by what Microsoft did to the Linux community for the last 20+ years, I'll just distance myself further from the Ubuntu ecosystem.

When you line up Snaps, Juju, uutils, etc. as a list, it all smells like a path to lockdown, not dissimilar to what Red Hat did with their "unbranded" patches recently (IBM being IBM, which was unsurprising).

Also, remembering how Canonical worked together with Microsoft on projects like WSL, which felt like a "surrender the servers to Linux, and save the Windows desktop by letting Linux run as a slave inside a VM" type of deal, I do not trust them a bit.

So, Linux is maturing, but this will also open a couple of very big cracks through the ecosystem, and it'll be noisy and painful. Personally, I've been on Debian for the last 20+ years, and I'm not planning to move anywhere for now.

I understand that there needs to be an economy, but money is not worth destroying what we're standing on, be it physical like our planet, or virtual like the free software and the culture we built around it.


> i.e. we get "embrace" without the "extend and extinguish"

This only ever happens when the party trying to EEE is fighting a losing battle. If they have the upper hand, they will always get to the extend and extinguish part. Do we think movements for user freedom have the upper hand right now?


To be fair, no one should be using Ubuntu. They are the free-CD people from the 2000s. They are the Apple of Linux: marketing wins, but low quality.

They used outdated Linux (Debian family) because it's lower cost to maintain.

All around, never use debian-family outside servers. Fedora is the future. Maybe OpenSUSE too. (Note these are not Arch or related to Arch)


> They used outdated Linux (Debian family) because it's lower cost to maintain.

Ubuntu forks Sid, and evolves from there. They don't downstream Debian Stable.

> All around, never use debian-family outside servers. Fedora is the future. Maybe OpenSUSE too. (Note these are not Arch or related to Arch)

Daily driving Debian stable on servers and Testing on desktops for more than two decades. Testing is a rolling distribution and you install it once (ever). The only time I reinstalled it was to migrate to 64 bit architecture back in the day.

Also, considering stable to stable upgrades take 5 minutes, I have no problems with Debian Stable, either.

Fedora is nice, but it's Red Hat's lab. While I have nothing against them, it's not as user-oriented as it looks. Debian Testing is much more stable than many (if not almost all) of the alternative distros, and follows upstream versions reasonably well.

If I want cutting edge, I can go the Arch or Gentoo way. Lastly, Debian is an iceberg: it looks simple from the outside, but once you start developing for it, you understand why Debian is considered one of the gold standards. The underbelly is a rich ecosystem of very well designed yet simple subsystems.


>> All around, never use debian-family outside servers. Fedora is the future.

That take, in and of itself, also feels... uncommon?

My experience matches yours more or less, I've run both Debian (and their LTS project version at one point) and Ubuntu LTS on my servers, both have been generally okay, albeit with a snag or two along the way.

https://blog.kronis.dev/blog/debian-and-grub-are-broken

https://blog.kronis.dev/blog/debian-updates-are-broken

https://blog.kronis.dev/blog/ubuntu-lts-is-broken

Aside from a few cases of not-very-serious configurations with off the shelf hardware having issues that I get to write the occasional rant about (back when I had an "Everything is broken" section in my blog), it's been surprisingly stable otherwise.

I've had far more issues with RHEL-compatible distros (hate that they killed CentOS, Oracle Linux is sometimes weird but kinda works, outside of work stuff I'd personally reach for Rocky Linux which is a nicer experience) both when it comes to running stuff like Docker (way before Podman was even stable, RHEL-compatibles didn't play nicely with Docker when it came to SELinux and networking) and also support for slightly more uncommon consumer hardware, like my netbook touchpad didn't work at all by default on Fedora, but did work on DEB distros.

The 10 year EOL is really nice, though, and if they had something as nice as Proxmox (for free), I'd probably be using RPM distros for my hypervisors right now!

That's also kind of why I think saying that either of those doesn't have much of a future would be an odd statement -- in my experience, both have their occasional issues but are still generally good for desktop and server use cases.

As an addendum, however, snaps suck, viva la Linux Mint for desktop, plus, Cinnamon is a nice desktop and it's still close enough to Ubuntu LTS I run on servers if I ever need that familiarity in regards to packages!


other than systems, that is

*systemd

Once upon a time Mandrake was great for consumer hardware, alongside SuSE, both kind of ignored nowadays, then came Ubuntu, which no one apparently should be using.

So we're kind of left out of options, because there is hardly another distro on Distrowatch that has a similar success rate being installed on random laptops that normies want to try GNU/Linux on.


So we will have a closed OS just like macOS and Windows but linux based. I don't see why it would stop all the other open source distros to exist.

Systemd is just another init system. People said the same thing about how it could exist alongside the others on a level playing field.

By virtue of having some motivated backers, not only have they pushed everyone else out of any distro which matters or acts as a root for others, they have also formed a neat little company called Amutable, which produces tech allowing anyone to lock down any installation into an immutable, untouchable state.


Yep, systemd is just another init system existing on a level playing field. They just dare to be successful by tackling problems that people have today instead of trying to deliver solutions designed in 1989.

> They just dare to be successful by tackling problems that people have today instead of trying to deliver solutions designed in 1989.

Thanks for your input. Can you please elaborate on these problems a bit more? I'm pretty new to this Linux thing: I've been using it for just 20 years or so, and managing only a few hundred servers. systemd didn't make my life drastically different or smoother.

Oh, I also used to be the tech lead of a Debian derivative, and did some country-wide rollouts of the thing we developed, but I'm sure that adds nothing to my already extremely limited knowledge of how things work.

Maybe this is because I'm a noob, or I'm not using enough machines, or I don't have enough downtime, IDK.

Any info will be greatly appreciated, thanks.


Because well-funded projects hire developers all over the place to add dependencies, and it's very difficult to push back when you have an army of salaried people doing that 40 hours a week.

If that means the massive fragmentation stops and we get an OS that 95% of Linux users install, it might not be that bad.

I run and develop on various Linux distributions and have failed to see that fragmentation for the last couple of decades, sorry.

I've only used GNU/Linux since 2012, but I do think we have to face the fact that there is a fair amount of ~~choice~~ fragmentation in the ecosystem: Deb/RPM/Flatpak/Snap/PKGBUILD/Nix; GNOME/KDE/Cosmic/Cinnamon/Xfce/LXQt/MATE/Budgie/Sway/Hyprland; AppArmor/SELinux; GTK/Qt/Electron/Tauri/wxWidgets. There are even distributions that use musl libc instead of GNU libc, or non-systemd inits. Sure, you can just pick one and focus on it, but if someone else picks something else, they may need to duplicate some effort to get things working on their preferred setup.

When you lay out your project in a sound, standards-compliant way, packaging doesn't matter much. RPM and DEB tooling automatically builds your code and packages it. DEB also has a lot of tools that let you make sure everything is done correctly. I'm sure RPM has similar tools, but I haven't used them much.

Desktop environment doesn't matter much, because GTK and Qt work on every Linux desktop. I'm using KDE, but I don't know which of the tools I use are GTK, which are Qt, etc. The Qt and GTK teams collaborate a ton on both the window-management and desktop-underpinnings side. Also, there are tons of standards, and things just work if you follow them. Even the standard libraries of programming languages and the Linux userland give you the tools to utilize these standards.

C libraries are mostly interoperable. I operate with GNU's C library, but aside from interesting behavioral differences, the API is not different.

If you're not writing daemons, you have no business with your init system in 99% of cases, unless you want to use a special feature of one of them. You can just ship the service files. The daemon() function is part of libc, not your init system.
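As a sketch of what "just ship the service files" looks like in practice, here's a minimal systemd unit file (the `myapp` name and its path are hypothetical, used only for illustration):

```ini
# myapp.service -- hypothetical example unit; a package would typically
# install it under /usr/lib/systemd/system/
[Unit]
Description=My example daemon
After=network-online.target

[Service]
# Run in the foreground; the init system handles daemonization,
# so the program never needs to call daemon() itself.
ExecStart=/usr/bin/myapp --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

On a non-systemd init the same binary works unchanged; only this packaging layer differs.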

All in all, once your code builds, you can add these layers one at a time and have a codebase that works everywhere with minimal effort.


Eroding users' rights is good if it means users have fewer choices, because choice is bad? I suppose it would mean that resources could concentrate in a smaller, more focused set of software, but I really can't see how that would justify the harm caused.

Just think about how easy it would be though - imagine - one single OS, one single version always immediately up to date, one consistent set of installed software, attestation to ensure no adversaries are attempting to modify or install unsupported software, full accurate and thorough analytics, what a dream...

Yeah, what a Fall-of-Rome type of dream. Just look at what happened when people overused a specific measure: CrowdStrike crashed around 8.5 million devices to a BSoD. Identical OS, identical apps, identical updates, identical crashes at the same time.

If you centralise, it's not a question of whether it will fail, but when.


> Just think about how easy it would be though

Endlessly painful. Right?

The defining characteristic is that everyone is using it, not that it's your personal ideal operating system. We have a few major players trying to create their version of the one single OS. They're already nobody's ideal, and they have the luxury of telling people to go elsewhere if the system isn't right for them. Imagine how much worse it would be if they had to support everything.


Isn't that what Apple purports to sell? There's also Haiku, but I don't know to what extent it matches that description.

> Also, Ubuntu using a non-GPL licensed userland means they can pull all kinds of tricks to allow more TiVoization in the Linux ecosystem.

Can we stop this conspiracy nonsense? They have explicitly stated that licensing was not a motivation, and even if you think they're lying it wouldn't make any sense anyway! No Tivoization is foiled by Coreutils being GPL. That's ridiculous for so many reasons, not least you can just use the BSD versions, as Apple does (and they still release the source code!).


Well, why not license it as GPL then? If they don't care, of course.

Because the GPL doesn't play well with static linking, the new favourite of programming languages rediscovering the pre-1990s ways of most operating systems' linkers (aka binders).
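As a rough illustration of that linking distinction (assuming a typical glibc-based Linux environment; the availability of `/bin/ls` and `ldd` is an assumption, not something from the thread):

```shell
# On a typical distro, /bin/ls is dynamically linked, so ldd lists
# its shared-library dependencies (libc among them).
ldd /bin/ls

# A fully static binary carries its libraries inside itself; for such
# a binary, ldd reports "not a dynamic executable" instead. Rust can
# produce fully static binaries, e.g. with:
#   cargo build --target x86_64-unknown-linux-musl
# which is part of why link-friendly (permissive) licensing appeals
# to these toolchains.
```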

Good question. Probably just because most Rust projects don't use GPL and they copied that. I searched but couldn't find an answer.

I wouldn't be entirely surprised if they change it to GPL just to shut people up.


> I wouldn't be entirely surprised if they change it to GPL just to shut people up.

They don't, and won't.

> I searched but couldn't find an answer.

Here's the answer: https://github.com/uutils/coreutils/issues/2757. This is a link I found a long time ago and saved to reference when the need arises.

From the (current) lead author:

    The license has been decided way before my time. I am 0 interest in starting a license debate (I care if the license is DFSG - Debian Free Software Guidelines) and spend time on it. I would rather use my limited time to make rust/coreutils ready for production.
More debate: https://github.com/uutils/coreutils/issues/834

From what I understood, they don't "believe" in GPL and don't like the idea of "having to keep it open". They believe in Developer Freedom(TM), not User Freedom(TM), so they don't care whether their code is closed by others or not.

To summarize #834: We don't like GPL. We'll do MIT, thanks.


Lol, that won't happen. The whole point of doing this is getting rid of yet another copyleft component. Companies especially hate GPLv3 stuff.

> Can we stop this conspiracy nonsense?

If they can earn my trust, why not? I'm not a pointlessly stubborn person. I have changed my views in the past, and can certainly change them in the future. This is my view resulting from my experiences, and the jury is still out from my perspective. If you want to trust Canonical and Co., you can. Don't let me stop you.

> They have explicitly stated that licensing was not a motivation, and even if you think they're lying it wouldn't make any sense anyway!

Who prevents someone from forking Ubuntu and taking that extra mile, especially now that we have a company which wants to enable exactly that lockdown?

> No Tivoization is foiled by Coreutils being GPL.

Belts have holes. They can be used to hold or to choke. Adding more holes to a belt allows more different uses.

> That's ridiculous for so many reasons,

Can you give me more reasons to believe that I'm a tinfoil-wearing crazy weirdo?

> not least you can just use the BSD versions, as Apple does (and they still release the source code!).

The same Apple that removes any GPLv3 (and possibly GPLv2) tool from its OS in every iteration. The same Apple that provides no way to verify that what's published is what's running on their hardware. The same Apple whose SIP seals their system partitions so they can't be modified without breaking tons of guarantees and seals. The same Apple that controls everything from the processor to the software, without any gaps.

Having the source has no meaning there. You can't use that source. You can't modify the machine you use; you can't install any other OS or just test something.


Ubuntu oozes over Debian like a parasitic malaise of vile chicanery. Their entire goal is, essentially, to use the work of untold millions, provided for free, to build their own cathedral.

People complain about AWS and others taking OSS projects and profiting wildly off them, but they're all children compared to the machinations of Ubuntu.

It's been this way from the start: the constant conflict of interest, where Ubuntu devs manipulate Debian's democratic processes, and the inclusion of systemd and its ridiculous Swiss-army-knife implementation of an init system and surrounding tools.

You ever see those knives, the huge ones, with a screwdriver, pliers, and 100 other tools on them? Thing is, they're all crap. They work for the odd case, but you always need to reach for a real tool.

That's systemd. Whether it's systemd-timesyncd, its DNS services, or anything else it does, it's kiddie time. Barely functional, broken in inane ways, and leaving any professional endlessly masking a myriad of barely cogent and horribly malign services.

We didn't gain anything with systemd, except an init system 1000 times larger, codebase-wise, and a collection of tools you have to replace anyhow.

And now these guys want to do Amutable. I hope they fail, for if they succeed, they will sink us further into this absurd system. Systemd for death. For despair. For dislike. For dumb. For disregard, devilment, debased, disturbed, systemd is all these things, and more, all packaged for you, all presented to you, all given to you, to all of us, dragging us down, destroying us.

If there is an apocalypse, it'll be somehow some bug in systemd that causes it. Nukes will fly due to its broken code, viral containment systems will fail due to buffer overruns in its code, systemd is the end-of-world waiting to happen, its over-complicated, poorly written code a guillotine waiting to fall upon us all.

I run thousands of sysvinit, and thousands of systemd systems.

Which ones, do you think, have the worse record of "something stupid" bringing down a service or a machine, or preventing a boot-up? You name it, it's systemd.

I swear to God that Trump exists because of systemd somehow. I place all the ills, perils, at the feet of systemd. It represents everything wrong in the tech ecosystem, its tendrils spreading dark, deep disturbing dreams of madness through all it touches.

Just learning how to use systemd, destroys the logic centres of the mind, rendering advocates incapable of productive work.

All wrong and ill that befalls this world, is at its feet.

I suspect through some incomprehensible twist of fate, the entire fabric of the universe may unravel, undoing all that is, and ever was. All lost, all gone, all because of systemd.

I am beginning to suspect I dislike systemd.

(Send $19.95 to my address, if you wish to subscribe to my newsletter, and hear my REAL, UNFILTERED opinions about systemd)


I could have gotten behind the first three paragraphs until you started mentioning systemd. After that everything you've written sounds like the rambling of a crazy person.

systemd won the init system wars because it is pretty damn good from a technical perspective. The competition didn't even try to participate.

>Barely functional, broken in inane ways, and leaving any professional in a situation of endless masking of a myriad of barely cogent and horribly malign services.

Everything you're complaining about was even more true of previous init systems and barely true for systemd at all.

Meanwhile Ubuntu is a garbage fire from a technical perspective. Snaps are garbage and forced down your throat.


systemd won because of politics, and absolutely nothing more. Debian, the root of the most used tree of Linux distros, only adopted systemd due to pressure from Redhat/Gnome, and threats that if it didn't?

Gnome would no longer work on Debian.

Understand, that many of the issues "resolved" by systemd, were redhat issues. And further, not even init issues.

For example, the most prominent being "predictable NIC names", which were already a thing with Debian. Or boot times, where Debian had excellent parallelization and boot times comparable to systemd's.

There's really nothing good, from a technical perspective, when something is enlarged 1000x the requirement. If you look at the code for sysvinit, it's maybe 10k lines. Systemd is > 1M lines of code, likely approaching 1.5M by now. So I suppose, 100x the size.

It needs to be understood that the more code you have, the more bugs. It's just the way it is. There have been more security issues in core systemd yearly than in sysvinit over its entire lifespan. That's not even systemd's fault; it's just a simple fact: more code, more bugs.

And when you say "systemd", you're likely referring to all the inane nonsense it does? How broken it is managing mounts, which really isn't an init's job anyhow? Or the absurd nature of having shutdown and startup identical, with automatic ordering, so you end up in all sorts of ridiculous edge cases?

Why would anyone presume that start and stop MUST be mirror images of each other? The very logic is broken, and shows an immense lack of comprehension of how the real world works.

And you speak of "it won" for superior this and that? At the start, it didn't even have an easy way to extend stop time. Hell, even now it just sends SIGTERM and, a nanosecond later, SIGKILL, as if shutting down a box FAST FAST NOW NOW is more important than data integrity, properly closed TCP connections, or processes doing any form of proper cleanup.

The number of mysql/database issues caused by this behaviour in the early days was insane.

Look, I get that you like systemd. But it's provided no real value, and certainly, even if there is some? The drawbacks outweigh it as the sun turns meat to leather.


> There's really nothing good, from a technical perspective, when something is enlarged 1000x the requirement. If you look at the code for sysvinit, it's maybe 10k lines. Systemd is > 1M lines of code, likely approaching 1.5M by now. So I suppose, 100x the size.

The systemd repo is a mono repo for other tools in addition to the init system.

I've heard from many sysadmins and distribution maintainers that systemd has been amazing. We went from ad hoc shell scripts to declarative plain text files. I think that's a huge win.
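For illustration, a minimal hypothetical unit file (the service name and path are made up) showing the declarative style being described: dependencies, restart policy, and sandboxing each become a single line rather than hand-rolled shell logic.

```ini
# example.service - a minimal, hypothetical unit file.
[Unit]
Description=Example daemon
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/exampled --foreground
Restart=on-failure
DynamicUser=yes

[Install]
WantedBy=multi-user.target
```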


> We went from ad hoc shell scripts to declarative plain text files. I think that's a huge win.

Current sysadmin and former distro maintainer here, who respectfully disagrees with you and your friends.

Many, if not all, software packages followed a well-defined SYS-V service file stub, esp. after so-called "Parallel SYS-V". We were able to order services, define dependencies, and deterministically boot systems at the speed of light. Nothing broke, and the systems fully delivered on the "pull the plug if you want, it won't break" promise.

While I don't hate systemd, I don't like many of its ways. It's something like X11 before auto-configuration support, for me. The less I touch it, the less grumpy I am. Technical parts aside, remembering the ugliness surrounding it (people, ecosystem, and predatory aspects) makes me really angry sometimes.

Tip: Research "Amutable" and what they are up to.


The only plus for Amutable, is that it may finally cause a sane systemd fork.

My immense, strong suspicion here, is that they believe they can use their control over the systemd project, to add immense layers of code and change, to support Amutable's needs.

When this happens, there will likely be pushback of some sort. I'm hoping a fork will happen at that time, and even better, hoping that maybe the project can go someplace saner.

Getting rid of all tcp support (eg, systemd providing inetd functionality) from an init system would be an excellent start. The absurdity of pid 1 having networking hooks is absolutely madness.

Splitting start/stop ordering would be an additional benefit.

Removing all daemons, and all support code, and forking them (for legacy support) would be next. No horribly enacted timesyncd, or resolvd.

Dropping the absurd journal and returning to a syslog solution would be next. Literal kiddie town, to have no centralized logging as a default when first created. There are now attempts to entirely re-skin the cat, with systemd-journal-gatewayd, yet every single appliance and piece of hardware supports... that's right, syslog protocol, not systemd's proprietary journalling protocol or formats.

There is so much about systemd that is just about re-writing the entire universe, not for immense gain, not for immense improvement, but instead for the tiniest, smallest shred of edge-case betterment, meanwhile creating massive, overwhelming degradation of every other aspect of that same use case.

Has the journal improved anything for anyone, anywhere, in any real, meaningful way? Absolutely not. All searching, etc is available on text files with | grep. Zero improvement.

Has the journal improved performance? No.

And the ridiculous and absurd and inane concept of the journal being removed at each reboot?

It's as if the people writing systemd, had absolutely no real-world experience with servers, maintaining them, or working with them, and simply made design decisions predicated upon rumour, with no actual understanding of edge cases, or why things are, or were, as they are.

--

An example would be some aspects of Hyundais. They are relatively new, in many ways, to much of the market they have entered. Yes, I know, decades may not seem like "new", but it is so. And until they got hold of Toyota's QA methods by hiring its engineers (who also brought all the documentation), they were of horrible quality.

That said, I sat in one of their newer electric SUVs the other day. Their dashboard, down at the bottom, ended in a sharp corner. When I sat in the car, I realised that should I be in an accident, or even brake aggressively, my kneecap would mash into this non-rounded, extremely square, sharp angle. I could literally see my kneecap being sliced/popped off.

This sort of "it's silly to have rounding everywhere, let's do something new aesthetically, and make it a sharp edge down there!", coupled with "there aren't many people 6'3" in S. Korea, so we'll never notice how dangerous this is", is a prime example of this.

The authors had no idea of edge cases, and the litany of bug reports over the last decade has shown all their supposed improvements filed away, as they have basically had to conform to logical design standards, developed by people far wiser than they, over the last half century.

No, someone-new-to-the-entire-unix-ecosystem, the phrase "but we can just" isn't a viable means to determine sensible design methodology.

Go ahead, enact change, just make sure it makes some sense.


b112 wrote a substantial comment already, but I'll give a single line summary:

yes, systemd changed how I manage my systems, but it didn't bring speed, safety or integration I didn't have before it. Moreover, they brought secure-boot related shenanigans in-house and integrated them into everything systemd touches. Before that, the line was drawn at the bootloader.


Here's the chasm I want to see Rust cross:

Dynamic linking with a safe ABI, where if you change and recompile one library then the outcome has to obey some definition of safety, and ABI stability is about as good as C or Objective-C or Swift.

Until that happens, it'll be hard to adopt Rust in a lot of C/C++ strongholds where C's ABI and dynamic linking are the thing that enables the software to get huge.


> Until that happens, it'll be hard to adopt Rust in a lot of C/C++ strongholds where C's ABI and dynamic linking are the thing that enables the software to get huge.

Wait, Rust can already communicate using the C ABI. In fact, it offers exactly the same capabilities as C++ in this regard (dynamic linking).
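As a minimal sketch of that (the function name is illustrative), a Rust function exported over the C ABI:

```rust
// A Rust function exported with the C calling convention:
// callable from C, or from any language that speaks the C ABI.
// `#[no_mangle]` keeps the symbol named `add` instead of a
// compiler-mangled name.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a.wrapping_add(b)
}

fn main() {
    // Rust code can of course call it directly as well.
    assert_eq!(add(2, 3), 5);
    println!("{}", add(2, 3));
}
```

Built with `crate-type = ["cdylib"]` in Cargo.toml, this yields a shared library exposing a plain C symbol that C, Python, etc. can load and call.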


That's an unsafe ABI.

As unsafe as C or C++. In fact, safer, because only the ABI surface is unsafe, the rust code behind it can be as safe or unsafe as you want it to be.

I was addressing this portion of your comment: "C's ABI and dynamic linking are the thing that enables the software to get huge". If the C ABI is what enables software to get huge then Rust is already there.

There is a second claim in your comment about a "safe ABI", but that is something that neither C or C++ offers right now.


Here's the problem. If you told me that you rebuilt the Linux userland with Rust but you used C ABI at all of the boundaries, then I would be pretty convinced that you did not create a meaningful improvement to security because of how many dynamic linking boundaries there are. So many of the libraries involved are small, and big or small they expose ABIs that involve pointers to buffers and manual memory management.

> There is a second claim in your comment about a "safe ABI", but that is something that neither C or C++ offers right now.

Of course C and C++ are no safer in this regard. (Well, with Fil-C they are safer, but like whatever.)

But that misses the point, which is that:

- It would be a big deal if Rust did have a safe dynamic linking ABI. Someone should do it. That's the main point I'm making. I don't think deflecting by saying "but C is no safer" is super interesting.

- So long as this problem isn't fixed, the upside of using Rust to replace a lot of the load bearing stuff in an OS is much lower than it should be to justify the effort. This point is debatable for sure, but your arguments don't address it.


> - It would be a big deal if Rust did have a safe dynamic linking ABI. Someone should do it. That's the main point I'm making. I don't think deflecting by saying "but C is no safer" is super interesting.

I think we all agree that it would be a huge deal.

> - So long as this problem isn't fixed, the upside of using Rust to replace a lot of the load bearing stuff in an OS is much lower than it should be to justify the effort. This point is debatable for sure, but your arguments don't address it.

As you point out, this is the debatable part, and I'm not sure I get your justification here.


This might end up being the forcing function (quoting myself from another reply in this discussion):

> It can't be that replacing 20 C/C++ shared objects with 20 Rust shared objects results in 20 copies of the Rust standard library and other dependencies that those Rust libraries pull in. But, today, that is what happens. For some situations, this is too much of a memory usage regression to be tolerable.

If memory was cheap, then maybe you could say, "who cares".

Unfortunately memory isn't cheap these days


Can you even make the standard library dynamically linked in the C way??

In C, a function definition usually corresponds 1-to-1 to a function in object code. In Rust, plenty of things in the stdlib are generic functions that effectively get a separate implementation for each type you use them with.

If there's a library that defines Foo but doesn't use Vec<Foo>, and there are 3 other libraries in your program that do use that type, where should the Vec functions specialized for Foo reside? How do languages like Swift (which is notoriously dynamically-linked) solve this?
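To illustrate the underlying issue (the function here is made up): each concrete use of a generic produces its own machine code, so there is no single pre-built copy a shared object could ship for every possible type.

```rust
// A generic function: the compiler emits one specialized copy
// per concrete type it is used with (monomorphization).
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut best = items[0];
    for &x in &items[1..] {
        if x > best {
            best = x;
        }
    }
    best
}

fn main() {
    // Two uses, two distinct machine-code instantiations:
    // largest::<i32> and largest::<f64>. Neither exists until
    // a caller names the type, which is why a pre-built .so
    // can't ship "largest for every possible T".
    assert_eq!(largest(&[1, 5, 3]), 5);
    assert_eq!(largest(&[1.5, 0.5]), 1.5);
}
```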


You can have an intermediate dynamic object that just exports Vec<Foo> specialized functions, and the three consumers that need it just link to that object. If the common need for Vec<Foo> is foreseeable by the dynamic object that provides Foo, it can export the Vec<Foo> functions itself.

How much overhead is that? Also, why would that have much overhead? Things deduplicate in memory.

They dedup at the page level.

This isn’t that kind of duplication.


I thought they were suggesting the stdlib be dynamically linked or something, at which point it would be. But for static linking, no.

Your apt update would still be huge though. When the dependency changes (eg. a security update) you’d be downloading rebuilds of 20 apps. For the update of a key library, you’d be downloading your entire distribution again. Every time.

Oh, well yeah, statically linked binaries have that downside. I guess I don't think that's a big deal, but I could maybe imagine on some devices that are heavily constrained that it could be? IDK. Compression is insanely effective.

You are forgetting about the elephant in the room: if every bug requires a rebuild of everything downstream, then it is not only a question of constrained devices, it is also a question of SSD write cycles; you are wearing out someone's drive faster. And btrfs actually worsens this problem, because instead of one copy-on-write reference to the library, you now have n copies of the library baked into n different apps. Reverting an update will cost you even more writes. It is just waste for no apparent reason, where deduplication would mean less memory and less disk space.

"Compression is insanely effective": and what about energy? Compression increases CPU use, and it makes everything slower, slower than plain deduplication. Also, your justification for shipping tech that is worse for the user is that the user can mitigate it in other ways? That strikes me as the same logic as "we don't need to optimize our program/game, users will just buy better hardware", plain cost-shifting onto the user; that is not a valid solution, just a downplaying of the argument.


If Rust and static linking were to become much more popular, Linux distros could adopt some rsync/zsync like binary diff protocol for updates instead of pulling entire packages from scratch.
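As a toy sketch of the chunk-level idea (fixed-size chunks rather than rsync's rolling hash; real tools like zsync are far more sophisticated): the updater would transfer only the chunks that changed.

```rust
const CHUNK: usize = 4;

// Slice out chunk i, clamped to the buffer length
// (the last chunk may be short).
fn chunk(buf: &[u8], i: usize) -> &[u8] {
    let start = (i * CHUNK).min(buf.len());
    let end = ((i + 1) * CHUNK).min(buf.len());
    &buf[start..end]
}

// Indices of fixed-size chunks that differ between old and new;
// an updater would download only those chunks plus their offsets.
fn changed_chunks(old: &[u8], new: &[u8]) -> Vec<usize> {
    let n = (old.len().max(new.len()) + CHUNK - 1) / CHUNK;
    (0..n).filter(|&i| chunk(old, i) != chunk(new, i)).collect()
}

fn main() {
    let old = b"aaaabbbbccccdddd";
    let new = b"aaaaBBBBccccdddd"; // only the second chunk changed
    assert_eq!(changed_chunks(old, new), vec![1]);
}
```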

Static linking used to be popular, as it was the only way of linking in most computer systems, outside expensive hardware like Xerox workstations, Lisp machines, ETHZ, or what have you.

One of the very first consumer hardware to support dynamic linking was the Amiga, with its Libraries and DataTypes.

We moved away from having a full blown OS done with static linking, with exception of embedded deployments and firmware, for many reasons.


Even then, they would still need to rebuild massive amounts on updates. That is nice in theory, but see the number of bugs reported in Debian because upstream projects fail to rebuild as expected. "I don't have the exact micro version of this dependency I'm expecting" is one common reason, but there are many others. It's a pretty regular thing, and therefore would be burdensome to distro maintainers.

Yeah I'm not really convinced that this matters at all tbh

NixOS "suffers" from this. It's really not that bad if you have solid bandwidth. For me it's more than worth the trade off. With a solid connection a major upgrade is still just a couple minutes.

A couple of minutes at the moment that is, with dynamic linking everywhere. What will it become when everything is statically linked?

I think you misunderstand my point. Nix basically forces dynamic linking to be more like static linking. So changing a low level library causes ~everything to redownload.

What you are asking for is a replacement for .h-file library definitions that contains sufficient information to preserve Rust's safety guarantees. That is a big, big step, and would be fantastic not only for Rust but for any other language trying to break out of the C tar pit.

So you're calling for dynamic linking for rust native code? Because rust's safety doesn't come from runtime, it comes from the compiler and the generated code. An object file generated from a bit of rust source isn't some "safe" object file, it's just generated in a safe set of patterns. That safety can cross the C ABI perfectly fine if both things on either side came from rust to begin with. Which means rust dynamic linking.

Would a safe ABI work by sandboxing the C code? I'm a bit unsure how one would construct a safe C ABI from Rust's side.

The argument for unsafe ABI not being that big of a deal is that ABI boundaries often reflect organizational boundaries as well.

E.g. the kernel wouldn't really benefit from a "safe ABI" because users calling into the kernel need to be considered malicious by default.


How could a safe dynamic linking API ever work?

I think you're moving the goalposts significantly here.


I don’t think GP is moving the goalposts at all, rather I think a lot of people are willfully misrepresenting GP’s point.

Rust-to-rust code should be able to be dynamically linked with an ABI that has better safety guarantees than the C ABI. That’s the point. You can’t even express an Option<T> via the C ABI, let alone the myriad of other things rust has that are put together to make it a safe language.

You can look to Swift for prior art on how this can be done: https://faultlore.com/blah/swift-abi/

It would be very hard to accomplish. Apple was extremely motivated to make Swift have a resilient/stable ABI, because they wanted to author system frameworks in swift and have third parties use them in swift code (including globally updating said frameworks without any apps needing to recompile.) They wanted these frameworks to feel like idiomatic swift code too, not just be a bunch of pointers and manual allocation. There’s a good argument that (1) Rust doesn’t consider this an important enough feature and (2) they don’t have enough resources to accomplish it even if they did. But if you could wave a magic wand and make it “done”, it would be huge for rust adoption.


> You can look to Swift for prior art on how this can be done: https://faultlore.com/blah/swift-abi/

> It would be very hard to accomplish.

Since Rust cares very much about zero-overhead abstractions and performance, I would guess if something like this were to be implemented, it would have to be via some optional (crate/module/function?) attributes, and the default would remain the existing monomorphization style of code generation.


Swift’s approach still monomorphizes within a binary, and only has runtime costs when calling code across a dylib boundary. I think rust could do something like this as well.

> You can’t even express an Option<T> via the C ABI

But you can express Option<Foo> for a concrete Foo. Do you really need any more than that?


> But you can express Option<Foo> for a concrete Foo

I don’t think that’s true?

https://users.rust-lang.org/t/option-is-ffi-safe-or-not/2982...

You could maybe say that a pointer can be transmuted to an Option<&T>, because Rust guarantees the null-pointer optimization there: Option<&T> uses null as the None value. But that guarantee only covers a handful of pointer-like types, and it doesn't apply to other payloads; Option<bool>, for instance, is encoded with a niche value that is an unstable implementation detail, so C code has no reliable way to produce or consume it. You could get lucky if you launder your Option<T> through repr(C) and the compiler versions match and don't mangle the internal representation, but there are no guarantees here, since the ABI isn't stable. (You even get a warning if you try to put a struct in your function signatures that doesn't have a stable repr(C).)
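The layout facts can be checked directly; note that the niche encodings described in the comments are current rustc behavior, not a stable ABI promise:

```rust
use std::mem::size_of;

fn main() {
    // Guaranteed: Option<&T> is pointer-sized, with None
    // represented as the null pointer (the "null pointer
    // optimization").
    assert_eq!(size_of::<Option<&u64>>(), size_of::<&u64>());

    // Option<bool> also fits in one byte: bool only uses the
    // values 0 and 1, so the compiler can pick a spare bit
    // pattern (a "niche") for None. The exact encoding is an
    // implementation detail, not something C code can rely on.
    assert_eq!(size_of::<Option<bool>>(), 1);

    // No niche available in a plain integer, so a discriminant
    // (plus padding) is added.
    assert!(size_of::<Option<u64>>() > size_of::<u64>());
}
```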


You're right that there isn't a single standard convention for representing e.g. Option<bool>, but that's just as true of C. You'd just define a repr(C) compatible object that can be converted to or from Option<Foo>, and pass that through the ABI interface, while the conversion step would happen internally and transparently on both sides. That kind of marshaling is ubiquitous when using FFI.
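A minimal sketch of that marshaling pattern (the type names are invented): a repr(C) mirror type crosses the boundary, and each side converts to or from the idiomatic type internally.

```rust
// The "real" Rust type used internally.
struct Foo {
    x: i32,
}

// A C-compatible mirror with a fixed, declared layout. This is
// what crosses the ABI boundary; the tag says whether x is valid.
#[repr(C)]
struct COptionFoo {
    is_some: u8, // 0 = None, 1 = Some
    x: i32,
}

// Marshal out: Option<Foo> -> C-friendly struct.
fn to_c(v: Option<Foo>) -> COptionFoo {
    match v {
        Some(f) => COptionFoo { is_some: 1, x: f.x },
        None => COptionFoo { is_some: 0, x: 0 },
    }
}

// Marshal in: C-friendly struct -> Option<Foo>.
fn from_c(v: COptionFoo) -> Option<Foo> {
    (v.is_some != 0).then(|| Foo { x: v.x })
}

fn main() {
    // Round-trip: the conversion is transparent on both sides.
    assert_eq!(from_c(to_c(Some(Foo { x: 7 }))).map(|f| f.x), Some(7));
    assert!(from_c(to_c(None)).is_none());
}
```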

> but that's just as true of C

Right, that's the whole point of this thread. The only stable ABI rust has is one where you can only use C's features at the boundaries. It would be really nice if that wasn't the case (ie. if you could express "real" rust types at a stable ABI boundary.)

As OP said, "I don't think deflecting by saying "but C is no safer" is super interesting". People seem intent on steering that conversation that way anyway, I guess.


> I don’t think GP is moving the goalposts at all

Thank you :-)

> It would be very hard to accomplish.

Yeah it's a super hard problem especially when you provide safety using the type system!

The work the Swift team did here is hella impressive.

> But if you could wave a magic wand and make it “done”, it would be huge for rust adoption.

Yeah!


> How could a safe dynamic linking API ever work?

Fil-C solves it. I think Swift solves it, too.

So it's solvable.

No fundamental reason, that I know of, why Rust or any other safe language can't also have some kind of story here.

> I think you're moving the goalposts significantly here.

No. I'm describing a problem worth solving.

Also, I think a major chasm for Rust to cross is how defensive the community gets. It's important to talk about problems so that the problems can be solved. That's how stuff gets better.


Swift and Fil-C are only pseudo-safe. Once you deal with the actual world and need to pass around data from memory, things are always unsafe, since there is no safe way of sharing memory. At least not in our current operating systems. Swift and Fil-C can at least guard the API to some extent.

A safe ABI would be cool, for sure, but in the market (specifically addressing your prediction) I don't know if it's really that big a priority for adoption. The market is obviously fine with an unsafe ABI, seeing how C/C++ is already dominant. Rust with an unsafe ABI might then not be as big an improvement as we would like, but it's still an improvement, and I feel like you're underestimating the benefits of safe Rust code as an application-level frontline of security, even linked to unsafe C code.

What is a safe ABI? An ABI can't control whether one or both parties either end of the interface are honest.

You can't have safe dynamic linking, dynamic linking requires you to trust the library you load with no ability to verify.


> An ABI can't control whether one or both parties either end of the interface are honest.

You are aware that Rust already fails that without dynamic linking? The wrapper around the C getenv functionality was originally considered safe, despite every bit of documentation on getenv calling out thread safety issues.


Yes? That's called a bug? The standard library incorrectly labelled something as safe, and then changed it. The root was an unsafe FFI call which was incorrectly marked as safe.

It's no different than a bug in an unsafe pure Rust function.

I'm choosing to ignore that libc is typically dynamically linked, but linking in foreign code and marking it safe is a choice to trust the code. Under dynamic linking anything could get linked in, unlike static linking. At least a static link only includes the code you (theoretically) audited and decided is safe.


A "safe" ABI is just a C ABI plus a "safe" Rust crate (the moral equivalent to a C/C++ header file) that wraps it to provide safety guarantees. All bare-metal "safe" FFI's are ultimately implemented on top of completely "unsafe" assembly, and Rust is not really any different.

C++ ABI stability is the main reason improvements to the language get rejected.

You cannot change anything that would affect the class layout of something in the STL. For templated functions where the implementation is in the header, ODR means you can't add optimizations later on.

Maybe this was OK in the 90s when companies deleted the source code and laid off the programmers once the software was done, but it's not a feature Rust should ever support or guarantee.

The "stable ABI" is C functions and nothing else for a very good reason.


I think if Rust wants to evolve even more aggressively than C++ evolves, then that is a chasm that needs to be crossed.

In lots of domains, having a language that doesn't change very much, or that only changes very carefully with backcompat being taken super seriously, is more important than the memory safety guarantees Rust offers.


In my view, this is a good thing.

As a C++ developer, I regularly deal with people that think creating a compiled object file and throwing away the source code is acceptable, or decide to hide source code for "security" while distributing object files. This makes my life hell.

Rust preventing this makes my life so much better.


Rust does not prevent you from creating a library that exports a C/C++ interface. It's indistinguishable from a C or C++ library, except that it's written in Rust. cbindgen will even generate proper C header files out of the box, that Rust can then consume via bindgen.

> As a C++ developer, I regularly deal with people that think creating a compiled object file and throwing away the source code is acceptable, or decide to hide source code for "security" while distributing object files. This makes my life hell.

I mean yeah that's bad.

> Rust preventing this makes my life so much better.

I'm talking about a different issue, which is: how do you create software that's in the billions of lines of code in scale. That's the scale of desktop OSes. Probably also the scale of some other things too.

At that scale, you can't just give everyone the source and tell them to do a world compile. Stable ABIs fix that. Also, you can't coordinate between all of the people involved other than via stable ABIs. So stable ABIs save both individual build time and reduce cognitive load.

This is true even and especially if everyone has access to everyone else's source code


> At that scale, you can't just give everyone the source and tell them to do a world compile. Stable ABIs fix that. Also, you can't coordinate between all of the people involved other than via stable ABIs. So stable ABIs save both individual build time and reduce cognitive load.

Rust supports ABI compatibility if everyone is on the same compiler version.

That means you can have a distributed caching architecture for your billion line monorepo where everyone can compile world at all times because they share artifacts. Google pioneered this for C++ and doesn't need to care about ABI as a result.

What Rust does not support is a team deciding they don't want to upgrade their toolchains and still interoperate with those that do. Or random copy and pasting of `.so` files you don't know the provenance of. Everyone must be in sync.

In my opinion, this is a reasonable constraint. It allows Rust to swap out HashMap implementations. In contrast, C++ map types are terrible for performance because they cannot be updated for stability reasons.


My understanding: Even if everyone uses the same toolchain, but someone changes the code for a module and recompiles, then you're in UB land unless everyone who depends on that recompiles

Am I wrong?


If your key is a hash of the code and its dependencies, for a given toolchain and target, then any change to the code, its dependencies, the toolchain or target will result in a new key unique to that configuration. Though I am not familiar with these distributed caching systems so I could be overlooking something.
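A toy sketch of such a key (the hasher choice and field set are illustrative; real systems like Bazel digest every input file):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive a cache key from everything that can affect the output:
// source text, the keys of the dependencies, toolchain version,
// and target triple. Any change to any input yields a new key.
fn cache_key(src: &str, dep_keys: &[u64], toolchain: &str, target: &str) -> u64 {
    let mut h = DefaultHasher::new();
    src.hash(&mut h);
    dep_keys.hash(&mut h);
    toolchain.hash(&mut h);
    target.hash(&mut h);
    h.finish()
}

fn main() {
    let a = cache_key("fn main(){}", &[], "rustc 1.80", "x86_64-linux");
    let b = cache_key("fn main(){}", &[], "rustc 1.81", "x86_64-linux");
    // Same source, different toolchain: different key, so a stale
    // artifact is never reused across compiler versions.
    assert_ne!(a, b);
    // Identical inputs always map to the same key.
    assert_eq!(a, cache_key("fn main(){}", &[], "rustc 1.80", "x86_64-linux"));
}
```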

That's not the issue I'm worried about

> At that scale, you can't just give everyone the source and tell them to do a world compile.

Firstly, of course you could.

Secondly, you don't even need to, as NixOS shows.


C++ is still changing quite a lot though, just not in ways that fix the existing issues (often because doing so would break ABI stability).

That is a reason why a lot of folks stick with C.

In some sense, the chasm I'm describing hasn't been crossed by C++ yet


I'm not sure I'm following, are you claiming that C++ is still not widely used enough? That doesn't seem to be the case.

Except as you well know, C might not change as fast, but it does change, including the OS ABI.

Those folks think it doesn't.


> Except as you well know, C might not change as fast, but it does change, including the OS ABI.

I don't know that.

Here's what I know: the most successful OSes have stable OS ABIs. And their market share is positively correlated with the stability of their ABIs.

Most widely used: Windows, which has a famously stable OS ABI. (If you wanted to be contrarian you could say that it doesn't because the kernel ABI is not stable, but that misses the point - on Windows you program against userland ABIs provided by DLLs, which are remarkably stable.)

Second place: macOS, which maintains ABI stability with some sunsetting of old CPU targets. But release to release the ABI provides solid stability at the framework level, and used to also provide stability at the kernel ABI level (not sure if that's still true - but see above, the important thing is userland framework ABI stability at the end of the day).

Third place: Linux, which maintains excellent kernel ABI stability. Linux has the stablest kernel ABI right now AFAIK. And in userland, glibc has been investing heavily in ABI stability; it's stable enough now that in practice you could ship a binary that dynlinks to glibc and expect it to work on many different Linuxes today and in the future.

So it would seem that OS ABIs are stable in those OSes that are successful.


Speaking of Windows alone, there are the various calling conventions (pascal, stdcall, cdecl), 16, 32, and 64 bits, x86, ARM, ARM64EC, DLLs, COM in-proc and out-of-proc, WinRT within Win32 and UWP.

Leaving aside the platforms it no longer supports.

So there are some changes to account for depending on the deployment scenario.


The most stable would be FreeBSD, with compatNx libraries/modules for old binaries, where N = FreeBSD version number.

I think it's the domains that need to evolve, because the effects of that approach have been very bad for a very long time already

Isn’t this problem solved by just compiling your libraries together with your main app code? Computers are fast enough that this shouldn’t be a huge issue.

This assumes a lot:

- the same entity has access to the source of both the library and the main app

- library and main app share the same build tooling

And even if that’s the case, you have the problem of end users accidentally using different versions of the main app and the library and getting unexpected UB.


In what way rust needs to evolve? It seems pretty evolved to me already but I’m no language expert

What's the state of the single-compiler-version ABI? I mean, if the compiler guaranteed that the ABI works between builds from the same compiler version, we could potentially use dynamic linking for a lot of things (speeding up iterative development) without committing to any long-term stable ABI or going through the C ABI for everything.

I think the way to fix this is:

1. Have the stable ABI be opt-in similarly to how the C ABI is opt-in in Rust (`#[repr(stable)]` or similar)

2. Have the stable ABI be versioned. So it would actually be `#[repr(stable_2026)]` or whatever


The big question is whether Rust wants to be adopted by those vendors, or whether it will leave them to languages that embrace native libraries.

> Here's the chasm I want to see Rust cross:

That's not important. What I want to see is the Rewrite-it-in-Rust movement move towards GPL.

GPL is pro-user. MIT is pro-business.

In their zeal to convert, they are happily replacing pro-user software with pro-business software. Their primary goal is to convert, not to safeguard.

If they shifted their goal from spreading Rust to protecting users, I'd be a lot happier about the community.


> In their zeal to convert, they are happily replacing pro-user software with pro-business software.

This is one of the two main reasons I'm not using Rust. Second reason is being addressed by gccrs team, so I have no big gripes there, since they are progressing well.


By this same metric, do you refuse to use C because the vast majority of OSS C codebases are permissively licensed? Surely you see that this makes no sense, yes? Neither Rust-the-language nor Rust-the-ecosystem are any more hostile to GPL than any other language and ecosystem.

> By this same metric, do you refuse to use C because the vast majority of OSS C codebases are permissively licensed?

It's not comparable - the Rewrite-it-in-Rust community is aiming to replace the existing pro-user products, with new pro-business products.

The last significant online C community was the one that gave us the pro-user products in the first place.

> Surely you see that this makes no sense, yes? Neither Rust-the-language nor Rust-the-ecosystem are any more hostile to GPL than any other language and ecosystem.

I don't care whether or not they are hostile, that is not relevant. What is relevant to the complaints you are reading is that their primary goal is the spread of Rust, not the interests of the users.

It is totally reasonable to be against a community who are working very hard to replace pro-user software with pro-business software.


> The last significant online C community was the one that gave us the pro-user products in the first place.

You mean the OSI, headed by famous C hacker Eric S. Raymond, the permissive-license rebellion against the GPL? Pretending that the MIT/BSD licenses aren't a legacy of the C ecosystem is revisionist history.

> It's not comparable - the Rewrite-it-in-Rust community is aiming to replace the existing pro-user products, with new pro-business products.

It's clear that you have no idea what you're talking about. There is no "rewrite-it-in-Rust community", there are just people using Rust and writing what they want. That copyleft licenses have lost mindshare to permissive licenses in the decades since the rise of the OSI is a broader movement in OSS that long predates Rust, and has nothing to do with Rust itself.


> You mean the OSI, headed by famous C hacker Eric S. Raymond, the permissive-license rebellion against the GPL? Pretending that the MIT/BSD licenses aren't a legacy of the C ecosystem is revisionist history.

Sure, C played a great part there too, but you are ignoring the present.

What we are seeing now is a concerted effort to replace pro-user products with pro-business products.

Even if you're right that what I said about the start of copyleft, with gcc, is revisionist history, that has no relevance to what is happening now, which is a large effort by a specific community to replace pro-user products with pro-business products.



> Neither Rust-the-language nor Rust-the-ecosystem are any more hostile to GPL than any other language and ecosystem.

Acta, non verba.


Couching a non-sequitur in Latin does not an argument make. By all means, have the courage to make an actual statement.

> have the courage to make an actual statement.

Well, that's funny. Considering all the comments I have written for this submission.

First of all, most of the arguments I'd make are already addressed by lelanthran. Do I need to write the same things over and over? It's bad etiquette to repeat the things said by someone else. This is why we have the voting mechanism here.

So, since you insist, let me reiterate the same thing.

No I don't refuse to use C, because most of the GPL software which is enabling everything we do today is written in C or a C-descendant language. However, as I write everywhere, I refuse to use Rust because of two reasons:

1. LLVM-only for now (I don't use any language which doesn't have a compiler in GCC).

2. Rust's apparent "rewrite it in Rust, under MIT, replace the thing and beat it with a club if it refuses to die" attitude.

For reference, uutils and sister projects use "drop-in replacement" and "completely replace" liberally, signaling their clear intention to forcefully replace GPL code with more permissive, business-friendly bits.

I tend to reluctantly accept Rust in the Kernel since gccrs is in the works and progressing steadily, and Rust guys are somewhat forced to write a proper reference for their language and back it with proper PLT, since it's a hard requirement if you want your programming language to be a long-living, dependable one.

Similarly, you use words like "courage" and "non-sequitur" liberally. I'm not sure they're fitting in this instance.


I think this implicitly makes you a part of this trend, because even less pro-user software exists in Rust as a result of your decision.

Seriously, that's a good point. I'll seriously consider my position when gccrs becomes a bit more mature.

Thanks for your reply.


There is absolutely nothing "pro-business" about permissive licenses. People choose permissive licenses for all kinds of reasons. For example, I personally use them because I believe they are more free and thus more in line with my values. You shouldn't project unsubstantiated statements onto people's motives like this.

With permissive licenses you often run into the following situation:

You buy something physical from a company, say a humanoid Unitree robot, a robot actuator, or an Arm SBC. These pieces of hardware come with their own proprietary SDK that they sell for a significant fee, or a proprietary GPU driver without any hope of updates. The SDK heavily uses MIT-licensed code and there is no possibility of modifying or inspecting the code for debugging.

From the perspective of the user, the system might as well be 100% proprietary and his freedoms are maximally restricted. You could say that this is fine since it doesn't detract from the original open source project, but you have to remember that these companies would ordinarily have to pay significant development fees to build the same level of functionality and they have no obligation to help or support your project financially. You as the open source developer will then have to beg them to hire you, so you can do paid work that is unrelated to the original project to finally work on your project in your spare time, purely because it is possible to charge for hardware but not the software that the hardware depends on.

What I'm trying to get at here is that this means full vertical integration is the only way. The problem is that most hardware companies are hardware companies first and they don't care about software. They concentrate on making hardware, because each sale brings in money. They don't spend money on software, because it appears to be optional. You can just tell the customer or an open source community to bring their own software. The money that is needed to pay for open source projects flows through the very companies that refuse to spend money on software.

If you want to write open source software, you must be a hardware company so you are customer facing and have access to customer money that can be diverted to the development of the software.


> You shouldn't project unsubstantiated statements onto people's motives like this.

I am not criticising their motives, I am criticising the result!

Also, definitions are hard. It's why we have pro-choice/pro-life and not anti-choice/anti-life - using the positive spin is a good faith characterisation of a position.

In much the same way, I am using pro-user/pro-business; if my intention was to vilify one of those positions I would have used pro-user/anti-user or pro-business/anti-business to label those positions.

No reasonable interpretation of pro-user/pro-business can make the audience think that I am unfairly characterising either of two positions.

I say this to address the use of the word "unsubstantiated" in your assertion about my characterisations.


Yup. Work hand in hand with the FSF. Use GPLv3. No, it is about fat binaries that are just blobs without any introspection or ownership.

That would be great, but Rust relies on compile-time monomorphization for efficiency (very much like C++, if you consider templates polymorphic functions/classes).

This means that any Rust ABI would have to cater for link-time specialization. I think this should be doable, but it would require a solution that's better than just to move the code generation into the linker. Instead, one would need to carefully consider the usage of the "shape" of all parameters of a function.


I wonder if we look at it from a too narrow perspective. We use the C ABI because it's the only game in town. We should be aiming for a safe cross language ABI. I'd love to make Rust, C, PHP, Swift, Java and Python easily talk to each other inside 1 process.

It should extend the C ABI with things like strings, arrays, objects with a way to destruct them, and provide some safety guarantees.

As an example, the Windows world has COM, which is at the core pretty reasonable for its design constraints, even if gnarly sometimes.


> It should extend the C ABI with things like strings, arrays, objects with a way to destruct them, and provide some safety guarantees.

> As an example, the windows world has COM, which is at the core pretty reasonable for its design constraints, even if gnarly sometimes.

Yeah, and we had CORBA. Gnome was originally not a DE - the acronym stood for GNU Network Object Model Environment or similar.

I programmed in CORBA in the 90s. Other than being slower than a snail on weed, I liked it just fine. Maybe it's time for a resurgence of something similar, but without requiring that calls work across networks.


That is why platforms like Common Language Runtime exist, not only COM.

The CLR was originally going to be the COM Runtime+, and the idea was reborn when the Windows team, with their anti-.NET bias, decided to redo Longhorn in C++, as WinRT.

"Turning to the past to power Windows’ future: An in-depth look at WinRT"

https://arstechnica.com/features/2012/10/windows-8-and-winrt...

It is also how Android IPC and Apple's XPC kind of get into the picture.

The elephant in the room is that FOSS OSes hardly embrace such solutions.


You'll find that all of these languages ultimately build FFI on top of C ABI conventions, though Swift's own internally stable ABI uses a lot of alloca() to place dynamically sized objects on the stack, in a way that's somewhat unidiomatic (the Rust folks are trying to back out of their alloca() equivalent). You can even interface to COM from pure C.

Just in case someone gets funny ideas: GObject is pretty bad. Don't use it for FFI.

> We should be aiming for a safe cross language ABI.

Now to simply get everyone to stop what they're doing so they can rewrite their C code into this new language; shouldn't be too hard, I imagine.


Dynamic linking is also great for compile time of debug builds. If a large library or application is split up into smaller shared libraries, ones unaffected by changes don't need to be touched at all. Runtime dynamic linking has a small overhead, but it's several orders of magnitude faster than compile-time linking, so not a problem in debug builds.

for developer turnaround time, it is huge. we explicitly do not statically link Ardour because as developers we are in the edit-compile-debug cycle all day every day, and speeding up the link step (which dynamic linking does dramatically, especially with parallel linkers like lld) is a gigantic improvement to our quality of life and productivity.

A common pattern is dynamic linking for development and static linking for production-ready releases.

We considered doing both, but it turned out that the GUI toolkit we use was really, really not designed to be statically linked, so we stopped trying.

Yes, that's a good way to do it.

The C ABI can already be used; it comes with all the safety guarantees that C provides. Isn't this as good as C?

It is as good as C.

It's also as bad as C.

I'm saying that the chasm to cross is a safe ABI.


There is no existing safe ABI, so this cannot be an adoption barrier.

Lots of reasons why it is. I'll give you two.

1) It can't be that replacing 20 C/C++ shared objects with 20 Rust shared objects results in 20 copies of the Rust standard library and other dependencies that those Rust libraries pull in. But, today, that is what happens. For some situations, this is too much of a memory usage regression to be tolerable.

2) If you really have 20 libraries calling into one another using C ABI, then you end up with manual memory management and manual buffer offset management everywhere even if you rewrite the innards in Rust. So long as Rust doesn't have a safe ABI, the upside of a Rust rewrite might be too low in terms of safety/security gained to be worth doing


Many Rust core/standard library functions are trivial and inlining them is not really a concern. For those that do involve significant amount of code, C ABI-compatible code could be exported from some .so dynamic object, with only a small safe wrapper being statically linked.

I found the C ABI a bit too difficult in Rust compared to C or Zig, mainly because of destructors. I am guessing C++ would be difficult in a similar way.

Also, unsafe Rust has always-on strict aliasing rules, which makes writing code difficult unless you do it in certain ways.

Having glue libraries like PyO3 makes it good in Rust. But that introduces bloat and other issues. This has been the biggest issue I've had with Rust: it is too hard to write something yourself, so you use a dependency. And before you know it, you are bloating out of control.


Not really. The foreign ABI requires a foreign API, which adds friction that you don't have with C exporting a C API / ABI. I've never tried, but I would guess that it adds a lot of friction.

Indeed, Victor Ciura from Microsoft DevDiv has several talks on how this is currently an adoption problem at Microsoft.

They have been working around it with DLLs, and COM/WinRT, but still the tooling isn't ideal.


COM is interesting as it implements interfaces using the C++ vtable layout, which can be done in C. Dynamic COM (DCOM) is used to provide interoperability with Visual Basic.

You can also access .NET/C# objects/interfaces via COM. It has an interface to allow you to get the type metadata but that isn't necessary. This makes it possible to e.g. get the C#/.NET exception stack trace from a C/C++ application.


>Dynamic COM (DCOM) is used to provide interoperability with Visual Basic.

DCOM is Distributed COM not Dynamic COM[1].

COM does have an interface for dynamic dispatch called IDispatch[2] which is used for scripting languages like VBScript or JScript. It isn't required for Visual Basic though. VB is compiled and supports early binding to interfaces.

[1] https://en.wikipedia.org/wiki/Distributed_Component_Object_M...

[2] https://en.wikipedia.org/wiki/IDispatch


Ah yes, that's what I was thinking of. It's been a while since I've worked with COM.

Eh, some people can work on moving to Rust, while others work on adding dynamic linking to Rust.

Or maybe we can somehow get used to living with static linking. (I don't think so, but many seem to think so in spite of my advice to the contrary!)

Another possibility is to use IPC as the dynamic linking boundary of sorts, but this will consume lots more memory, and as is stated elsewhere in this thread, memory ain't cheap no more.


One particular chasm to keep an eye on, possibly even more relevant than Ubuntu using Rust: When it comes to building important stuff, Ubuntu sticks to curl|YOLO|bash instead of trusting trust in their own distributions.

https://github.com/canonical/firefox-snap/blob/90fa83e60ffef...


When people say "curl|bash", this usually means secondary fetches, random system config changes, and likely adding stuff to the user's .bashrc.

But it's not quite that bad in this particular case - they are fetching a pre-built static toolchain and running an old-school install script, just like in the 1990s. The social conventions for those are quite a bit safer.

(Although I agree, it is pretty ironic that they prefer this to using ppa or binary packaged into deb...)


I don't get it. What's the chasm here?

The "issue" isn't that these new tools from Ubuntu are in Rust, that's almost irrelevant. The issue is that they are not the "standard" tools.

If Ubuntu's Rust replacements aren't adopted in other distributions, or only in some of them, we get an even more fragmented Linux ecosystem. We've already seen this with sudo-rs (which really should be called something else). It's a sudo replacement, ideally a one-to-one replacement, but it's not 100%, and for how long? You can also think of the `curl` provided by Microsoft PowerShell, which isn't actually curl and only partially provides curl's functionality, but squats the command name.

Ubuntu might accidentally, or deliberately, create a semi-incompatible parallel Linux environment, like Alpine, but worse.


Aren't the versions of Rust in stable Linux distributions like, a century old? Or at least they were last I checked what Debian and Ubuntu LTS were distributing. I think it's because they don't like static linking.

Hasn't the right way to install Rust always been rustup? I am an Ubuntu user and never once tried apt for Rust.

I believe Rust is typically only used through `apt` as a dependency for system packages written in Rust, or for building system packages that are written in Rust, so that they can link against a single shared instance of the Rust Standard Library.

Debian had a new stable release 45 days ago. For now I would imagine things aren't too old there. Although a friend of mine recently ran into some ancient packages on Mint, so maybe Mint/Ubuntu are oddly behind Debian Stable right now for some things.

[flagged]


should we trust someone whose HN account is just as shiny?

“Done software”?

You can curl stuff and run it; you just gotta have hashes in place.

In theory, yes.

In practice, very rarely. Lots of 'curl | sh' scripts do secondary fetches, and those don't come with hash checks. And even if they come with hash checks _today_, there is no guarantee next version won't quietly remove them.


> And even if they come with hash checks _today_, there is no guarantee next version won't quietly remove them.

...But you could say this about literally every security measure in literally every codebase. At any point, anyone could quietly remove anything that enhances security, or quietly add anything that reduces security. So what's your point?


Yes, technically it's all Turing-complete, but conventions matter, a lot. And Rust, being a mature project, is very likely to follow the conventions.

"static toolchain .tar.gz" means a bunch of files you download and manually extract. There may be an install.sh script, but it'll just copy files around, not download extra files. And sometimes install.sh is optional, and the tools can be run directly from the extraction location.

"curl | bash" means "do whatever developers think gives best experience with minimal prompts", which absolutely means download extra files, but also install system packages, update ~/.bashrc, change system settings and so on.

".run installer" means an interactive installer, Windows-style, often with an actual GUI. Often goes into /opt.

"deb file" means "all installed files are managed by apt, and can be examined. /etc conflicts are managed by apt. pre/post install scripts are minimal, and there is a clean uninstall command you can trust to actually work".

You can have deviations - like curl|bash used to pull a deb file or something - but no one likes surprises, so people usually stick to their lanes. If you have .deb files, it might get an officially-specified dependency, more files and maybe a post-inst script, but it won't suddenly start rewriting your .bashrc. Having static toolchain suddenly download files will make many people unhappy, so it likely won't happen either.

(One exception to this rule is enterprise software being packaged into .deb files - Google Chrome surprised everyone when they started to install an apt source in their postinst, but many enterprise packages (cough NoMachine cough) do much worse things, like only using apt to unpack their installer file, and then running their proprietary install script in postinst.)


Unrelated to the language debate, but it seems a lot of people here missed the fact that Rust Coreutils project is licensed under MIT, and I am not sure if I feel that it is the appropriate license for such project. As much as FSF's philosophy has bad PR at times with Stallman, the GPL licenses really do protect open source. Who knows what Canonical would do when all parts of Ubuntu become MIT...

> the GPL licenses really do protect open source.

They did, until the automatic copyright laundering machine was invented. Pretty much every piece of GPL code ever written is now being magically transmuted into MIT/BSD or proprietary code, and the FSF has no solution.


Well, this could be handled with a new anti-AI license and, I guess, crowd-funding a massive lawsuit to set precedent against AI companies.

A discussion on licenses will go sideways very quickly. GPL does limit the adoption of software in certain environments. So it really depends on your goals. Do you want an OSS project that will be useable by everyone (including corporations) or do you want to guarantee that the software will always be OSS and guarantee that Corporations can’t benefit from it without contributing back (potentially requiring them to open their own proprietary code).

There’s a lot of moral perspective that people apply to this decision, but not all developers have the same goals for their software. MIT is more flexible in its use than GPL, but doesn’t help ensure that software remains open.


> MIT is more flexible in its use than GPL, but doesn’t help ensure that software remains open.

Sure it does. The original software will always remain open. It isn't like people can somehow take that away.


GPL is copy left, it has a stated goal of encouraging more software to be OSS, including new contributions. That’s what I meant by software remains open. MIT on the other hand can be used in closed source situations. While the original code will remain open, future changes are not required to be open source.

They can use it on locked devices where you cannot replace it though. And then what do you do with the source? Print it and appreciate its beauty?

What evil deeds are you worried about in particular? What are you afraid people will do now that coreutils is MIT?

Does it even need to be explicitly stated? Closed linux userlands.

What does “userland” mean in this context? Closed source linux distributions? Closed source apps? What?

The gpl is generally considered to stop at the process boundary. I don’t really understand what you could do with a bsd licensed coreutils you couldn’t do with a gpled coreutils. You could make closed source Linux software which called coreutils in a child process. But by most readings of the gpl, you can do that today.

I suppose a company could fork coreutils and make it closed source, keeping their special extra command line options for themselves. But … I guess I just don’t see the value in that. You can already do that with FreeBSD. Apple does exactly that today. And the sky hasn’t fallen. If anything, it would be a relief to many if Apple’s coreutils equivalents were more compatible with gnu coreutils, because then scripts would be more freely portable between Linux and macOS.


Just today I found that rust-coreutils makes installing the CUDA toolkit impossible, related to its use of `dd`: https://forums.developer.nvidia.com/t/cuda-runfile-wont-extr...

Do you have more details? The thread you linked was about gzip, not dd.

The .run file is a shell script with a compressed archive appended:

    MS_dd "$0" $offset $s | eval "gzip -cd" | UnTAR t                                                                        
Where

    MS_dd()
    {
        blocks=`expr $3 / 1024`
        bytes=`expr $3 % 1024`
        dd if="$1" ibs=$2 skip=1 obs=1024 conv=sync 2> /dev/null | \
        { test $blocks -gt 0 && dd ibs=1024 obs=1024 count=$blocks ; \
          test $bytes  -gt 0 && dd ibs=1 obs=1024 count=$bytes ; } 2> /dev/null
    }
Edit: this is apparently packaged with Makeself, and various sources report issues with rust-coreutils. For example https://bugs.launchpad.net/ubuntu/+source/rust-coreutils/+bu...

There is no need for Rust coreutils to exist, so one of the first things to do with a fresh Ubuntu (like peeling the protective film off a new device) is to install the real coreutils, sudo, and the rest.

> Jon made the provocative comment that we needed to revisit our policy around having a small standard library. He’s not the first to say something like that, it’s something we’ve been hearing for years and years

It sounds to me like you "cross the chasm" a little too early. As a user I don't care about your "chasms"; I care about high-quality, durable systems. This isn't the first time I've heard the "we'll change the std lib later" logic. I've yet to see it actually work.


> This isn't the first time I've heard the "we'll change the std lib later" logic.

I'm not sure what this is referring to, but surely it's not referring to Rust. Adding things to the stdlib is way easier than "changing" the stdlib. And Rust adds stuff to the stdlib all the time, like, go read any blog post for a new release and see that there are usually 10+ new additions to the stdlib, which adds up to hundreds of new additions per year due to the six-week release schedule.


Wake me up when it can do what python or java can do with their.

What does this mean? What is it? Their what?

Really good references to "crossing the chasm" between early adopter needs and mainstream needs. In addition to the Ubuntu coreutils use case, I wonder what other chasms Rust is attempting to cross. I know Rust for Linux (though I think that's still relegated to drivers?) and automotive (not sure where that is).

There are big pushes in pretty much every direction. The projects that really stand out to me are PyO3 (replace C++ Python modules with Rust), Dioxus (a React-like web framework), and the Ferrocene qualified compiler (automotive).

I think right now the ecosystem is pretty ripe and with DARPA TRACTOR there are only more and more reasons every day to put rust on your toolbelt.

I am secretly hoping that eventually we break free from the cycle of "hire a senior dev and he likes Rust so the company switches" over to "hey, let's hire some good mid-level and junior Rust developers".


Are mid level and junior developers being hired anywhere for any reason right now? I don't mean specifically rust developers. I mean software developers.

Sure. There was an article a week or two ago about IBM aggressively hiring juniors. Of course, the fact that that is noteworthy probably means something in itself...

IBM is also not known for holding on to bodies -- IBM layoff stories abound.

a glut of junior hires now does not a pretty picture make in the long-term sense


Not really. I have been an avid Rust programmer for 6 years, and I don't think I have ever seen a good junior Rust position.

If you want to take a look at some of the "big drivers", the Project Goals[1] is the right place. These are goals proposed by the community and the language developers put together, they are not explicit milestones or must-haves, but they do serve as a guideline to what the project tries to put its time and effort on.

[1]: https://rust-lang.github.io/rust-project-goals/


> what other chasms Rust is attempting to cross

Rust is undoubtedly excellent. What tarnishes the picture is a small group of people who rewrite solid pieces of code into Rust, hijacking the original brand names (e.g. "sudo") for the sole purpose of virtue signaling. And the latter is why they come after the most stable pieces of software that warrant no rewrite at all, like coreutils.

It seems to me that the right approach would be to ignore those and still love Rust for what nice of a language it is.

Unfortunately, Ubuntu is all in on virtue signaling.


I think an issue hindering Rust adoption is ecosystem immaturity. So many crates are pre-1.0, or just basic wrappers around a C library. There are good crates for core things like cryptography, but finding something production-ready for something like SAML is tough.

> There are good crates for core things like cryptography

Speaking of cryptography, I've given up on setting up a Termux Python dev environment on an old Arm tablet, because some package has a dependency on the 'cryptography' module, which apparently requires the whole Rust toolchain to build. In a Python project. On a platform with limited storage. The documentation suggests pre-built binaries are an option, but I couldn't figure it out the last few times. It left me with a bad taste in my mouth.


Sometimes it works if you install an old version of the cryptography package from before they introduced Rust.

I wouldn't read too much into pre-1.0 versions. Folks take SemVer pretty seriously, and that makes some folks reluctant to declare v1.0 even when a crate has been in use and "mostly stable" for years. There can also be compatibility issues with a 1.0 bump if a crate's types are common in public APIs, e.g. the `libc` crate. I'm a big fan of the curated list of crates at blessed.rs, or honestly just looking at download numbers. (Obviously not a perfect system.)

IME, a 1.0 version is usually when a project starts taking backwards compatibility seriously. A pre-1.0 library may be plenty stable enough in terms of bugs, but being pre-1.0 means they’re likely going to change their mind on the API contract at some point.

That is the major problem for me… I don’t actually mind that much if a library has bugs… those can always be fixed. But when a library does a total 180 on the API contract, or removes things, or just changes their mind on what the abstraction should be (often it feels like they’re just feng shui’ing things), that’s a major problem. And it’s what people mean when they say “immaturity”: if I build on top of this, is it all going to break horribly at some point in the future when the author changes their mind?

People often say “just don’t update then”, but that’s (a) a sure fire way to accumulate tech debt in your codebase (because some day may come when you must update), and (b) you’re no longer getting what could be critical updates to the library.


gettext took over 30 years to get to 1.0 last month.

They are vibe-translating C++ into Rust to change the licence.

Replacing solid code with vibe code basically, in the name of safety.

sudo-rs, for example, which is specifically mentioned, has a drastically worse safety record than the C sudo.


But rust!! and AI!! and safety!!! /s

This noise around Rust is aggravating. I loved Rust. I learned and used it from the first book, but this garbage propaganda and the MIT license thing annoy me so much.


.NET has a _huge_ platform library and you know what? It’s a pleasure. So many things are just the standard way of doing things. When things are done weirdly, you can usually get a majority in favour of standardising it.

Yes, there’s always a couple of people who really push the boat out…


Yeah, IMO the small standard library in Rust is a big mistake, one of the few the language has made. When push comes to shove the stdlib is the only thing you can count on always being there. It's incredibly valuable to have more tools in the stdlib even if they aren't the best versions out there (for example, even if I normally use requests in Python urllib2 has saved my bacon before), and it doesn't hurt anything to have them there.

I don't think the situation is that comparable to python, since in python the library has to be present at runtime. And with the dysfunctional python packaging there's potentially a lot of grey hairs saved by not requiring anything beyond the stdlib.

With Rust, it's an issue at compile-time only. You can then copy the binary around without having to worry about which crates were needed to build it.

Of course, there is the question of trust and discoverability. Maybe Rust would be served by a larger stdlib, or some other mechanism of saying this is a collection of high-quality well maintained libraries, prefer these if applicable. Perhaps the thing the blog post author hints at would be a solution without having to bundle everything into the stdlib, we'll see.

But I'd be somewhat wary of shoveling a lot of stuff into the stdlib; it's very hard to get rid of deprecated functionality. E.g. how many command-line argument parsers are there in the python stdlib? 3?
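For what it's worth, the "how many parsers" jab checks out. A quick sketch showing the three stdlib argument-parsing modules coexisting (optparse sticks around precisely because removing it would break existing code):

```python
# The stdlib really does ship three argument parsers; the older two
# stay importable because removal would break existing code.
import argparse
import getopt      # the original, C-getopt-style parser
import optparse    # deprecated since Python 3.2, still present

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true")
args = parser.parse_args(["--verbose"])
assert args.verbose

# getopt still works too:
opts, rest = getopt.getopt(["-v", "file.txt"], "v")
assert opts == [("-v", "")] and rest == ["file.txt"]
```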


On the other hand, a worse implementation in the stdlib can make it harder for the community to crystalize the best third-party option since the stdlib solution doesn't have to "compete in the arena".

Go has some of these.

Maybe a good middle-ground is something like Rust's regex crate where the best third-party solution gets blessed into a first-party package, but it is still versioned separately from the language.


Non-system programmers like to trivialize the choices of system programmers yet again. .NET is a GC platform running on a virtual machine. Bytecode compatibility and absolute performance are not that big of a deal on such platforms. You cannot / shouldn't run .NET on deeply embedded systems and bare metal, where you want to strip as much of the standard library as possible and want as little magic in the standard library as possible. In a language with a big hosted-system assumption, this forces the runtime to be split and forces developers to define big API boundaries.

The use case of languages like Rust and C++ is that you can use the same compiler to write both bare-metal unhosted code (non-std for bootloaders, microcontrollers and kernels) and hosted code (which uses std structures). As a system programmer who crosses the edge between the two environments, I would like to share as much code as possible. Having a big standard library with a hosted-system assumption is a huge issue. In those cases you want the language to work 99% the same and to be able to use the same structs / libraries. Sometimes you also want to write non-std code in hosted environments, for things like linkers.

Rust isn't even at the level of maturity of C yet in this regard. Rust's std / core is too big for really memory limited microcontrollers (<64 K space) and requires nasty hacks with weak ABI symbols to make things sane.

Having a huge baggage of std both causes issues like this for the users and also increases maintenance burden of the maintainers. Rust really wants to break its APIs as little as possible and small standard library is a great way to achieve that. C++ suffered a lot from this and it hampered its adoption for C codebases.


Some of these non-system programmers are ex-system programmers, coding since the mid-80's, who fondly remember the days when C and C++ compilers had rich frameworks that would compete in features with what .NET and Java later came to be.

Unfortunately, too many modern system programmers never lived in that era, and are completely off on how nice the whole development experience could be.


I think those two things are orthogonal. I'm not against somebody bundling up nice Rust libraries and providing a pre-install package or providing nice GUIs around (like Borland used to do and Qt still kind of does). Or an OS providing a nice set of libraries.

However, the standard library of a systems language has a special relationship with the compiler. This is the case for C and C++, where the compiler and the standard library also have a special relationship with the platform, like GNU or musl with Linux, or MSVC with Windows. It makes changing APIs or modernizing infrastructure almost impossible without creating an entire new OS and porting all compilers and standard libraries to it. Moreover, the newer C++ standards actually force you to define such a relationship (with std::initializer_list and the threading stuff). It is basically impossible to make an OS-agnostic C++ compiler that doesn't leak its own and the platform's internals to the user.

Luckily Rust mostly abstracts around the platform-compiler boundary and its standard library so the platform dependencies are implementation details. Unlike C and C++, one can write Rust without caring about how the underlying OS does ABI. However, Rust compiler and Rust std has a special relationship. `Box` can only be defined as part of Rust standard library that's compiled together with the Rust compiler itself. Its special relationship is kind of a blocker for -Zbuild-std and std-aware Cargo which prevents size-optimizing std for embedded systems. Without that magic (i.e. compiling the compiler itself, or worse bootstrapping it) you cannot independently create a `Box`.

I want this kind of library to contain as little as possible since it is convenient to define these kinds of relationships and rely on magic. Modern C++ has too much such magic. Rust is mostly on a correct path with std, core, alloc etc. separations. These kinds of boundaries make it possible to share as much code as possible with many libraries without finding hacky ways around std (which you have to do with C++).

This doesn't mean that I wouldn't appreciate more actual functional libraries maintained by Rust Foundation-funded people and be part of the project or even easily installed. However those libraries should be effortlessly exchangeable. I think current Cargo ecosystem achieves this mostly. However I would appreciate a more curated Cargo repository that contains only a limited set of really well maintained packages (similar to Maven's repos in Java world).


Why didn't those artifacts/relics survive into the modern era?

There's also something about the early 00s that made software developers go crazy in Java land: they decided to over-engineer software for no real benefit and come up with overly complex architectures that didn't really address the core issues, but rather imagined issues that turned out not to be that important in practice.


They did survive: Qt, VCL, FireMonkey, POCO. But the dark energy of the Electron force is too mighty.

Also in the 2010's we had the rise of scripting languages, thus we have a whole generation that never used compiled languages and are now re-discovering systems programming via Rust, Zig and co.

A history lesson, before OOP, there was Yourdon Structured Method, and plenty of C enterprise architects jumped into it.

The GoF book used Smalltalk and C++, predating Java by a couple of years.

The Booch Method used C++, and predates Java for a decade.

Ah and there was that whole operating system written in an OOP C dialect, including its drivers, NeXTSTEP, which also survives to this day, with more consumer deployments than the Year of Desktop Linux.


> After all, they said, why not just add dependencies to your Cargo.toml? It’s easy enough. And to be honest, they were right – at least at the time.

They weren't? Isn't it obvious that it's not easy, because the challenge isn't literally adding dependencies to a file but, before that, finding and evaluating which of the alternative dependencies to add?

But anyway, that's a very shallow and wrong summary of what happened, the link itself has plenty of grounded non-hateful objections to the proposal, which would be as valid now as they were then.


I've been a fan of all rust-based utilities that I've used. I am worried that 20+ (??) years of bug fixes and edge-case improvements can't be accounted for by simply using a newer/better code-base.

A lot of bug fixes/exploits are _CAUSED_ by the C core, but still... tried & true vs new hotness?


Don't hate me for this, but... is 20 years of Rust really new?

https://en.wikipedia.org/wiki/Rust_(programming_language)

I do get what you mean, but Rust had been baking for a decade, finally took off after those 10 years of baking, and now that it's been repeatedly tried and tested it is eating the world, as some developers suggested it eventually could. I however do think this shows a different problem:

If nobody writes unit tests, how do you write them when you port over projects to ensure your new language doesn't introduce regressions. All rewrites should be preceded by strong useful unit tests.


Ideally, but if a project wasn't written with tests at the time then finding a working time machine can be a challenge. If you try to add them later you won't capture all the nuance that went into the original program. After all, if the implementation code was expressive enough to capture that nuance, you'd already have your test suite, so to speak. Tests are written to fill in the details that the rest of the code isn't able to express.

Tests are written with various goals: integration testing, preventing regressions, and, in the same effort, protecting mission-critical / business-logic code. If all those nuances are captured by good tests, you arguably have "100%" test coverage; you don't need to test every single line of code ever written to have 100% coverage in my eyes. Then, when you go to translate your project to a new language, you port the tests first and run the port against them.
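The "port the tests first" idea can be sketched as a characterization test that pins down the legacy behavior before the rewrite ships (`legacy_slug` / `new_slug` are hypothetical stand-ins, not anything from coreutils):

```python
# Characterization test: capture the legacy implementation's observable
# behavior first, then hold the rewrite to the same contract.

def legacy_slug(title):   # stand-in for the old code being replaced
    return title.lower().replace(" ", "-")

def new_slug(title):      # the port under test
    return "-".join(title.lower().split(" "))

# Every behavior you care about becomes a pinned case; growing this
# list is how subtle regressions in the port get caught.
cases = ["Hello World", "one", "A B C"]
for c in cases:
    assert new_slug(c) == legacy_slug(c), c
```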

This is my personal belief on this anyway.


he has no answer for this

But the 90s were only 20 years ago!

lol, you got me. Stupid old brain not calculating time correctly.


I was born in 1990 so I get it! I still say 21 when people ask me how old I am... Aka how old do I need to say I am to be able to drink alcohol LOL I don't drink that often mind you. I just don't really think about my age a whole lot...



Rust has editions for strong stability guarantees, and has had them for nearly a decade, I believe. Besides, tech backing has grown way past the risky point.

FWIW, the GP comment's claim that you're lucky if you can compile 2-year-old code is exaggerated, but so is yours. Rust does not offer "strong stability guarantees". Adding a new method to a standard type or trait can break method inference, and the Rust standard library does that all the time.

In C or C++, this isn't supposed to happen: a conformant implementation claiming to support e.g. C++17 would use ifdefs to gate off new C++20 library functions when compiling in C++17 mode.


> and the Rust standard library does that all the time.

I don't doubt this is true, but do you have an example? I think I haven't run into a build breaking like this in std in like maybe seven/eight years. In my experience breaking changes/experimental apis are typically ensconced in features or gated by editions.

Granted, it'd be nice to be able to enforce abi stability at the crate level, but managing that is its own can of worms.

I did find that the breakage rfc allows for breaking inference, which tbh seems quite reasonable... inference is opt-in.


Almost every major release of rust stabilizes new library methods. For example, the latest major release (1.93) stabilized Vec::into_raw_parts. This isn’t gated by an edition. So if you had a trait with a method “into_raw_parts” which you had defined on Vec, after updating to 1.93 or later your code will either fail to compile, or start running different code when that method is called.
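A toy illustration of the shadowing mechanism (a local type standing in for the Vec case, since reproducing the real one depends on which compiler version you run):

```rust
// Toy illustration: Rust's method resolution prefers inherent methods
// over trait methods of the same name.
struct Wrapper(i32);

trait Describe {
    fn describe(&self) -> &'static str;
}

impl Describe for Wrapper {
    fn describe(&self) -> &'static str {
        "trait"
    }
}

// Imagine this impl appearing in a later library version, as happens
// when std stabilizes a new method on Vec: existing `w.describe()`
// call sites silently switch from the trait impl to this one.
impl Wrapper {
    fn describe(&self) -> &'static str {
        "inherent"
    }
}

fn main() {
    let w = Wrapper(0);
    assert_eq!(w.describe(), "inherent"); // the trait impl is shadowed
    assert_eq!(Describe::describe(&w), "trait"); // still reachable via UFCS
}
```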

Sorry, I meant to write “method resolution”, not inference. This isn’t the same issue as type inference (though indeed, stdlib changes can break that too)


Adding a new method can change the behavior of C++ code as well due to templates. Does the standard library never add new methods because of that?

> Adding a new method can change the behavior of C++ code as well due to templates.

Yes, but the code can be gated off with ifdefs to only be present when compiling for a particular version of the standard.


Yes. All the time. Subscribe to the std-proposals mailing list and you'll see so many obvious improvements get rejected due to ABI compat guarantees.

> Rust does not even have a specification

Neither do most programming languages.

> You are lucky if current version, compiles two years old code!

That's not true.


> Neither do most programming languages.

Rust is trying to replace C++ and C in particular. Those languages have specifications.


> Neither do most programming languages.

My favorite nemesis and friend JavaScript does, which always gives me a laugh. Such a mess of a wonderful language.


You and me both; never change you beautiful bastard of a language <3

> years of bug fixes and edge-case improvements can't be accounted for by simply using a newer/better code-base.

It partially is true: because Rust uses a better type system (in the ML tradition) plus a better resource model (the borrow checker), if you are decently good, you eliminate tons of problems, forever!

It can't solve things that arise from complex interactions, or from simply failing to port subtle details, like parsing poor inputs (like HTML), but it is true that changing the language does in fact solve tons of things.
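A tiny example of the kind of bug class meant here: the borrow checker turns use-after-move (and by extension use-after-free) into a compile-time error rather than a runtime one.

```rust
// The borrow checker eliminates a whole class of runtime bugs
// (use after move/free) at compile time.
fn main() {
    let s = String::from("data");
    let moved = s; // ownership transfers here
    // println!("{}", s); // rejected at compile time: value used after move
    assert_eq!(moved, "data");
}
```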


> but it is true that changing the language does in fact solve tons of things.

*so long as you dont use unsafe rust

**unsafe rust required for systems programming

***rust is a systems language

****does changing a default actually solve anything?


Is Rust still considered "new hotness"? I feel like the industry has long-since moved past that perceived "blocker".

It seems like Rust is now just the default in all manner of critical systems.


No, it's Node.js. I kid you not. I keep coming across Node in places where I really would not expect it.

Rust, no. sudo-rs isn't hotness, but it is relatively new.

Apart from safety-critical systems... what's that all about? You'd think with all the safety guarantees it'd be good for something like that.

I think it's worth trying!

It absolutely is worth trying. I look forward to it being battle tested and proven. I just don't want to be the one doing the testing.

rg, fzf, and several others that I can't think have proven to me that rust is the direction going forward.


Rg/grep is kind of like make/ninja imo.

It’s not so much about the language as it is about the hindsight/developer/project


If your issue is the license used then your issue isn't the language itself. Someone who wrote a coreutils replacement in D or something and licensed it as MIT you would still have an issue.

Putting aside Ubuntu's/Canonical's failed custom projects (e.g. upstart), they have a history of shipping software that isn't ready and turning the community against it, with pulseaudio being the headline example. I'm concerned that the upcoming Ubuntu LTS (which is only 2 months away) will add rust to that list.

Ubuntu used to be the distro to go to. Used to.

- SNAP which is only managed and supported by them

- Tried to reinvent the wheel with sudo-rs

- They are heavily focused into cloud, servers and business

- Following the Rust hype train

I used Ubuntu for 13y or so; it is the Windows of the Linux world. Bloated, kernel panics, heavy, privacy issues.

Debian is still the king for servers; Mint Cinnamon is the king for desktop, gaming, video editing, 3D design, coding. It just works.


> - SNAP which is only managed and supported by them

> - Tried to reinvent the wheel with sudo-rs

Upstart, Mir, Launchpad, Bazaar, Unity, Juju...


Nowadays I only use Windows, Android and WebOS privately, macOS at work when assigned an Apple device, and cloud specific Linux distros.

Also Solaris and Aix are my favourite UNIX flavours.

The time to write M$ on my email signature during the 1990's is long gone.


> - Tried to reinvent the wheel with sudo-rs

reinvent how? sudo-rs and a bunch of others are maintained by: https://trifectatech.org/ a non profit registered in Netherlands


> Mint Cinnamon is the king for desktop

Mint LMDE great too. Builds upon a Debian base, instead of Ubuntu.


The author refers to a few things that he thinks will appeal to the "early majority," but I feel like that's a weakness of the article. Is the author part of the "early majority?" (doesn't seem like it). Does he have the same problems that they have? How does he know?

He is the Rust project lead, and the Rust project has been doing quite a bit of user, adopter, and non-adopter interviews over the past few years.

Sudo no longer supporting path inheritance kinda sucks

> They are “looking to minimize discontinuity with the old ways”

Perhaps one of the best ways to achieve that goal is to not introduce any discontinuity? Like, take coreutils. It's one of the most stable pieces of Linux infrastructure. It's as solid as it gets. No one asked for a rewrite of those tools in any language. No one wanted a rewrite. No one needed a rewrite. The rewrite serves no purpose[1].

[1] Credit where it's due: this Rust slop prompted the creation of a test suite for coreutils, which is truly a great achievement, hands down.


It means no more sudo -E; it means no more stdbuf piping.

So far, mostly that they pushed untested tools, which even their authors didn't consider production-ready, onto unsuspecting users.

Ubuntu was always "let's just fuck up what just works in Debian" but this is another level, I have no idea why they are rushing it

Distros using Ubuntu as base should reconsider.


Debian stable just works now. I am not sure why a layer smeared over the top of these is a win for anyone. It's all bloat.

it means it is using rust

I don't care that the non-gnu coreutils are using rust. I care that they aren't GPL licensed.

This means Canonical can offer proprietary patches on top of these packages and sell them as part of their "enterprise" offerings and this gives me the ick.


agreed!

a few weeks ago it was all about Zig, now it's all about Rust, Clojure or Elixir next?

Rust was first

Why am I hearing about Rust a lot these days? Did anything significant happen?

What do you mean by “these days”? To me, it seems like rust is a pretty constant factor on HN for at least two years now.

It feels to me like Rust has been pretty big on HN ever since the 1.0 release in 2015...

Most of the platforms were successfully petitioned to have the Rust SDK added as a mandatory component, so that Rust code can be added to the platforms. The previous situation was that Rust was not allowed, because the external dependency on the Rust SDK was blocked.

Note that Rust having no stable ABI is still not fixed, so I think there's a bunch of internal systems on each platform to hard-lock the Rust dependencies across multiple Rust users.

There's also some friction between platform packagers and authors who want their code shipped exactly as it was written.


There have been a few adopters of Rust... Linux formally choosing it for some of its subsystems being the most notable recently (maybe a few months ago).

AI has made it exceptionally easy to program with.

I've switched to using Rust from Python simply because of AI development


From my experience at our startup, AI is still pretty shit at Rust. It largely fails to understand lifetimes, Pin, async, etc.; basically anything moderately complex. It hallucinates a lot more in general than with JS for a comparable codebase size (in the 250k-lines range).

Are hallucinations in code generation still a problem? I thought with linters, type checkers, and compilers especially as strict as Rust, LLM agents easily catch their own mistakes. At least that's my experience: the agent writes code, runs linters and compilers, fixes whatever it hallucinated, and I probably get a working solution. I tell it to write unit tests and integration tests and it catches even more of its own mistakes. Not saying that it will always produce code free of bugs, but hallucinations haven't been an issue for me anymore.

Indeed. With AI, lifting legacy code bases into Rust got a whole lot easier, and purging the blight of C from the world, excepting the most deeply embedded of applications, got a whole lot closer.

> With AI, lifting legacy code bases into Rust got a whole lot easier, and purging the blight of C from the world, excepting the most deeply embedded of applications, got a whole lot closer.

You probably don't realise this, but AI written Rust (or any language) will have much more undefined behaviour than human-written C.

AI coding brings non-determinism to every language; with AI, now every language can have Undefined Behaviour.


Really? You think AI writes better Rust than Python? Can you give me some examples? I strictly code Django, and Claude Code is really good at following my lead with it.

Rust has a very strict type system and an ecosystem that often utilizes the type system well.

Many things that would only be caught at runtime in other languages are caught at compile time in Rust, making coding agents iterate until things compile and work well.

Rust also has great error messages, which help the agents in fixing compilation errors.


The mandatory error handling of Rust is also an amazing feature for catching bugs at compile time. In Python you never know which exceptions might occur at any given time. That's something I completely underestimated in its usefulness, especially now that I have a programming buddy with infinite stamina handling all these errors for me.

The compile errors are great. I can change one function signature and have my output fill up with compile errors (that would all be runtime errors in python). Then I just let claude cook on fixing them. Any time you have to run your program and tell claude what’s wrong with it you’re wasting time, but because claude can run the compiler itself and iterate it’s much more able to complete a task without intervention.
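The mandatory-error-handling point can be sketched in a few lines (`parse_port` is a hypothetical helper, not a real API): the failure mode lives in the signature, so unlike a Python function that can raise ValueError invisibly, the caller cannot ignore it.

```rust
use std::num::ParseIntError;

// Hypothetical helper: the error path is part of the signature, so the
// compiler forces every caller to acknowledge that parsing may fail.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("not-a-port").is_err());
    assert!(parse_port("99999").is_err()); // out of u16 range surfaces as Err too
    // let n = parse_port("8080") + 1; // would not compile: handle the Result first
}
```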

Actually, I noticed it with C compiling. Compiler errors made it faster to debug the code.

I think AI likely does worse relative to typical Rust code than it does relative to typical Python code. But due to the compiler, it's possible you might get more correctness out of AI-generated Rust code on average.

I can't give you examples, but my experience is that AI does very well with Rust except for cases where a library has a constantly changing API/ has had recent breaking changes. I find that AI does extremely well at "picking up" a Rust codebase, I suspect due to the type information providing context but I couldn't say.

I think the argument is more that working Rust code is better than working Python, and AI assistance makes it more tenable for average developers to successfully produce working Rust code, and in particular is helpful for navigating the gap between "code written" and "code compiling" (e.g. why is the borrow checker mad at me).

Even if it writes the same or even somewhat worse rust than python, assuming the output is the same you are likely to get a speedup + a better distribution story.


