One challenge is that they have a ton of usage under their free tier, especially from free and open source projects with near-zero budgets. It's an artificial economy of projects that cannot pay for their own usage.
Another challenge is that the GitHub Actions paid tier is already very expensive, the quality of service is poor, and they have major security challenges. They could shed load by raising prices, driving customers to other platforms, but they already charge 10x what others charge (https://runs-on.com/pricing/#runner-pricing, https://www.ubicloud.com/docs/about/pricing). Anyone using GitHub Actions at scale would be somewhat price-insensitive already.
Interesting. So one way to interpret the current situation is that GitHub is "trapped" by its open source offering. This will likely have implications soon for what they do, or for the direction of open source...
I recently saw a submission here that does that, by essentially implementing GC in Rust. It is not beginner material though. https://kyju.org/blog/tokioconf-2026/
Edit: also, the simplest way to do cyclic structures is to heap-allocate via Box and leak the memory (Box::leak).
This is also mentioned in the linked article.
Very interesting! I'm curious how this works: does it binary-patch the glibc allocator? AFAIK custom allocators are only available on nightly and require generics in the form Vec<T, A>.
Mine too! Rust is my favourite language right now.
The complications begin with async. Outside of async it’s a beautiful world.
With async, you tend to get locked in at the library-ecosystem level, with the dominant approach being the work-stealing Tokio runtime, whose fundamental design I disagree with after doing a lot of research. The gravity field of Tokio is strong, though. To escape it, I had to copy popular crates and dig in with LLMs to rewrite them to be free of work-stealing.
For now, my benchmark is a simple wrk test with varying connection counts (100, 256, 1000). Web server components like the parser, I/O, and the server itself have their own independent benchmarks (Rust's built-in), along with some e2e tests (mostly written in Bash).
We might be interested in the technology (not as a VC), if it shows meaningful improvement (>2x) over what we currently do.
Could you show how it compares against a simple, replicable baseline, maybe something like a C program that just accepts a connection? Lots of these things are hardware-dependent.
Right now we use monoio and have a draft benchmark with speed numbers. Happy to continue talking over e-mail. Should I write to you?
Hi, nice motivation! I've built an async runtime driven by clocks on top of monoio. You can drive each thread at a different speed, to simulate a distributed system faster than real time. Our motivation is outlined here: https://minfx.ai/reliability.html
It's not published yet, as it's a bit tied to our internal systems at the moment. But happy to chat :)