My guess is that Zig's design choices are hitting "a sweeter sweet spot" for systems programming that resonates with many engineers reading HN.
At least for TigerBeetle [1], a distributed database, the story was strong enough even last year that we were prepared to paddle out and wait for the swell to break, rather than be saddled for years with undefined behavior and broken/slow tooling, or else a steep learning curve for the rest of the project's lifetime. We realized that as a new project, our stability roadmaps would probably coincide, and that Zig makes a huge amount of sense for greenfield projects starting out.
The simplicity and readability of Zig are remarkable, and this comes down to the emphasis on orthogonality, which matters when it comes to writing distributed systems.
Appropriating Conway's Law a little loosely, I think it's more difficult (though certainly possible) to arrive at a super simple design for a distributed consensus protocol like Viewstamped Replication, Paxos or Raft if the language's design is not itself encouraging simplicity, readability and explicitness in the first place, not to mention a minimum of necessary and excellent abstractions. Because every abstraction carries some probability of leaking and undermining the foundations of the system, I feel that whether we make abstractions zero-cost or not is almost beside the point compared to getting the number of abstractions, and the composition of the system, just right.
For example, Zig's comptime encouraged a distributed consensus design where we didn't leak networking/locking/threading throughout the consensus [2], as is commonly the case in many implementations I've read, even in high-level languages like Go. It made deterministic fuzzing [3] the natural solution. People who've worked on some major distributed systems in C++ have commented on how refreshing it is to read consensus written in Zig!
Zig also has a different, more balanced and all-encompassing approach to safety, one that resonates with how I feel about writing safe systems overall: every axis of safety treated as a spectrum rather than an extreme (this helps to prevent pursuing one axis of safety at the expense of others). Safety here also includes things like NASA's "The Power of 10: Rules for Developing Safety-Critical Code" [4], assertions, checked arithmetic (which should be enabled by default in safe builds, as it is in Zig), static memory allocation, and compiler-checked syscall error handling. That last one is, by far, the number one thing that makes distributed databases unsafe, according to the findings in "An Analysis of Production Failures in Distributed Data-Intensive Systems" [5].
While we could certainly benefit from the muscle of Rust's borrow checker in places, it makes less sense since TigerBeetle's design actively avoids the cost of multi-threading, with a single-threaded control plane for more efficient use of io_uring (zero-copy when moving memory in the hot path), plus static memory allocation and never freeing anything in the lifetime of the system. New I/O APIs like io_uring also encourage a future of single-threaded control planes (outsourcing to the kernel thread pool, where threads are cheaper), since context switches are rapidly becoming relatively more expensive. Multi-threading for the sake of I/O is less of a necessary evil these days than it was, say, five years ago.
At some point we had to weigh this up, and for us the benefits didn't outweigh the costs. In the end, it came down to simplicity, readability and state-of-the-art tooling.
[1] https://www.tigerbeetle.com
[2] https://github.com/coilhq/tigerbeetle/blob/main/src/vsr/repl...
[3] https://github.com/coilhq/tigerbeetle#simulation-tests
[4] https://web.cecs.pdx.edu/~kimchris/cs201/handouts/The%20Powe...
[5] https://www.usenix.org/system/files/conference/osdi14/osdi14...