I remember back when ALSA was a hot new thing on Linux, and it kinda sorta solved the "only one app can play sound at a time" problem that existed with the traditional OSS.
> every couple of years when somebody releases a yet another Linux audio daemon.
You seem to be stuck in, oh, maybe 2008.
There hasn't been "yet another Linux audio daemon" in more than a decade. JACK came along in the early 2000s (I wrote it), and PulseAudio in 2004 or thereabouts. All that nonsense with esd, artsd etc. was over because the aughts were over.
The GP comment is about PipeWire, and was complaining that it's "just another Linux audio daemon". I didn't think it was necessary to mention it in the context of my comment.
My problem is that all of these are incompatible. Woe be to whoever has an OSS app, an ALSA app, a JACK app, and a PulseAudio app, all of which want to run at the same time.
I don't get why this was never fixed at the kernel level. You have layers upon layers, like an onion.
PulseAudio never intended to replace Jack. And in fact, that's one of its biggest failings.
FWIW, I have recently switched from PulseAudio to PipeWire. I remembered how rough the switch from plain ALSA to PulseAudio was, and thus was utterly baffled by how smooth the transition to PipeWire is. I just uninstalled PulseAudio, installed PipeWire, rebooted, and everything just worked, down to the Bluetooth headset that I never quite got to work well with PulseAudio.
Yeah, I had the same experience. With PipeWire it (mostly) just works, and reliably. The only thing I'm missing is having the microphone always presented as a usable device, with the codec/profile switch happening only when the microphone is actually in use.
Pipewire is not finished at this point. It is being deployed "early" by some distros, apparently in the belief that this will help find bugs and get them fixed.
Pipewire implements the ALSA pseudo-device interface, the JACK API, and the PulseAudio API. You only need a single daemon to address all possible audio I/O needs, something that PulseAudio could not do.
What I don't like is that audio systems on Linux are too complicated. For example, all that ALSA needs to do is to provide exclusive access to a device (only one program at a time) and send samples into it unmodified (no bit depth change, no resampling, no mixing, no volume adjustment etc). But open source developers cannot do such a simple thing. They added thousands of unnecessary features and hundreds of config options, and turned ALSA into a monster, when it should be just a dumb backend for PulseAudio.
It didn't, though, initially. On hardware that supported multiple streams, it worked (assuming the driver for that hardware knew how to do it), but for hardware that didn't, you had to use the ALSA dmix plugin, which for the longest time wasn't enabled by default because it was the cause of some audio artifacts.
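For anyone who never had to set this up: enabling dmix by hand meant a stanza in `~/.asoundrc` along these lines (the device name, IPC key, and buffer sizes here are illustrative, not canonical values):

```
# Route the default PCM through the dmix software mixer
pcm.!default {
    type plug
    slave.pcm "dmixer"
}

pcm.dmixer {
    type dmix
    ipc_key 1024          # any unique key for the shared-memory segment
    slave {
        pcm "hw:0,0"      # the real hardware device
        period_size 1024
        buffer_size 4096
        rate 44100
    }
}
```

The `plug` wrapper on top handles format/rate conversion so apps with mismatched sample formats can still share the one dmix instance.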
OSS did not have that ability when it was removed from the kernel and replaced by ALSA, which already did.
There were (and are) many problems with the OSS API, a number of which continue to exist in the ALSA API too.
All serious audio APIs (plugins, or native audio I/O) use a pull model at the bottom of the stack, so that I/O is driven by the hardware. You can layer a push model (where software does blocking read/write calls whenever it wants) on top of that, but not the other way around.
And yes, OSS and ALSA both implement select/poll etc. for audio devices, but do not enforce the use of this model, resulting in the 2000s being filled with Linux (and *BSD) apps that couldn't function correctly without oodles of buffering.
Meanwhile, OSS on FreeBSD just worked.