Most of the things that go wrong don't happen in the kernel. Kernel development is pretty awesome, and although there is always room for improvement, I can't see anything revolutionizing the work done there.
You might say: let's rewrite it in Rust to ensure memory safety - well, Rust support is on the way, and while I think it is a good thing, it will not fix everything.
In my opinion a completely new kernel would not benefit from the huge effort and experience that has gone into the current kernel - so it would be inferior in many ways for a long time.
If you ask where Linux needs the most "polish", in my opinion that would mostly be the desktop and the community's habit of hating each other's work. Many projects are fighting each other instead of working together to fix things, but this also might be a good thing in terms of "competition". The things I would work on are:
UI
Security
Apps
And all of this is also on the way. The current version of Fedora, for example, shows many improvements for the daily desktop user. Wayland / GNOME is getting close to being genuinely acceptable, and Flatpak does the same for apps. Only security is a bit of a todo... but it always will be.
I remember the "Linux Touchpad like Mac" article that made me believe again that people really care... and nowadays with libinput, GNOME and libinput-config, the touchpad really works... it is still hack-ish, nowhere near macOS, but pretty good, and I'm a nitpicker on this.
Linux as a desktop has come so far, KDE is great for people who prefer the windows-like workflow, and Gnome for people who prefer a mac-like workflow.
(Keyword there is like, for everyone who is about to nitpick)
The dumbest thing is, as you said, the infighting over the same things.. that and the "this is better because it's harder, and thus I'm better than you because I'm using the harder option". Those need to go away. Like, Arch could offer the user a choice at the start.. do a manual install or use archinstall (the guided option).. that would go a long way.. also put the AUR helpers in the main repo... Why I have to manually install Yay on every new install... It's beyond me.
Flatpak is amazing, Gnome is amazing, Linux app support now is amazing. Proton is amazing.
Seriously, it's just about removing the two barriers to entry:
1. The "Linux is hard" rhetoric which turns users away
2. The difficulty of installing applications (easier installs are happening with Flatpak and other things)
I personally went to Fedora. I'm not a fan of Manjaro's very fast package testing; there are apparently issues there. I'd stay with mainline Arch.
That said, my point wasn't that Arch is hard or something, but that Arch could do some small things to make it much easier to get wider adoption... It's an example. There are things that other distros could do to solve this too.
The amount of "you use Ubuntu? That's a baby's distro, I use Gentoo because I'm smarter!"
120% agreed. Much like you I want distros to be a bit more ergonomic and easy to start with, and that's why I chose an Arch derivative because I wanted Arch plus some convenience on top.
RE: short / shallow testing, that's likely true but I interpreted it as: "we will monitor if a breaking bug is posted somewhere and if that doesn't happen in 2-4 weeks then we deem the package good enough".
While many might find that not professional, I don't care as much. I need a fast-moving distro for my dev machines. I can't afford to be several versions behind on a number of packages.
For my servers it's still 50/50 because kernel updates have legitimately improved the performance of one of them and I really wouldn't want to wait for that update for some 2-3 years like Debian does. But then again I'm not protected against sudden breakage either so maybe there's a distro out there that sits in between the two extremes.
Hey, if you're ok with the packages in Manjaro that's fine... Kind of my whole point of this thread is, you do you... If you like it, that's all that matters.
As for servers.. that's where I kind of shift from bleeding edge to stability. I have a homelab, and it's a mix of Debian and FreeBSD VMs. I love both of those options for servers.. Debian because it supports cloud-init a little better and is stable, and FreeBSD when I don't need all the cloud-init support..
As for Fedora as an OS:
Desktop, yes. Server... probably not. There are better choices out there. It's good, yeah, but it just seems a bit bloated for my needs.
Fedora Desktop is really, really good. It's not Arch levels of bleeding edge (it can be with Fedora Rawhide), but it's close enough. Especially when you enable the repos that have the latest packages.
I'd encourage you to check out EndeavorOS. While they did have a breaking update of their own, I have much higher confidence in them delivering a consistent experience than Manjaro.
Can you summarize the differences and why they are favoring Endeavor?
I only heard some hand-wavy claims about Manjaro maintainers being non-constructive or some such, and I openly admit that I find those claims unconvincing.
I can, though my info may be slightly out of date - but I was in the space.
Manjaro's head is clueless, and that's putting it nicely. More than once their SSL cert expired, and rather than fix it, they just told everyone to set their system clock back.
He effectively stole a bunch of code and ran s/myname/manjaro/ over the entire codebase. To the point that I had to add server-side code blocking their users from hitting my server.
They did not have (and may still not have) anyone who knew anything about the Linux codebase, but still decided to hold packages in the name of 'security.' So zero-days took much longer to fix, and introduced bugs still got introduced, just later. It's not like Red Hat or Ubuntu, who have real professionals checking things... it's just a hold.
Endeavor guys were super nice. They contacted us asking to work together, telling us their goals, etc. By that point we were mostly moved onto other things so couldn't offer much help. Endeavor's system is much smarter and safer. It puts you on Arch repos, and doesn't hold packages. That's ideally what an Arch installer should do, no matter what the neckbeards on arch forums squeal about.
Libinput has genuinely been such a subpar experience for me that I'm currently still running the synaptics driver. One thing that bothers me is that not all applications support kinetic scrolling with libinput, but they do with synaptics. Configuration is fairly limited with the new driver, and I've never got it to feel right. It's been a few years since I last tried however, so I don't know if things have changed, but from what I gather it's not changed a lot over the last decade.
Try libinput-config[1] - it's the missing config option for libinput. It works via hook and is far from perfect, but now the kinetic scrolling is really acceptable...
What can be improved:
Having a hook daemon is kind of a crazy way to work around the missing config
Stopping kinetic scrolling with a two-finger tap (right-click) does not work; a one-finger tap (left-click) does
Touchpad accuracy could be better
I think the Linux dev community and FOSS developers in general could be much more aggressive about going after government grant money. Open source software is used in everything from biomedical research to fundamental science, to industry, to defense. Somehow, these key shared resources are still run on a budget that's barely a rounding error by comparison.
I know writing grant proposals sucks and everyone wants to be having fun developing software, but if you pull in enough money initially you can pay people to write more grants and make things self-sustaining in terms of funding. For how essential Linux is to modern society, there should be more paid people working on it full time.
Once it becomes "about money", that controls the enterprise, even when it was originally about software. Sure, there are some projects that avoided that, but it's a tar pit and I can't fault anyone for avoiding it.
Is it, though? I find a lot of security in the knowledge that Ubuntu has a self-sustaining financial model. It’s still close enough to the original love of software that Linus strongly approves. I really enjoyed your parent comment’s take on grant-writing. I do, however, expect that this is already taking place in the appropriate areas of Linux development more than is commonly realized.
You're not wrong about defense; however, there is a huge caveat. From what I'm told, when a company in defense (in the US) uses FOSS, they must sign a waiver saying that they will be responsible for all updates and vulnerabilities discovered in said library. This also means that if the library stops being maintained, the company is responsible for maintaining the library.
I know that the reason why RHEL is preferred over Ubuntu or other distros is the fact RHEL has a paid license and provides regular updates to even "dead" or EoL libraries and packages.
Having said that, I wish more companies would take that dive. I've been in situations where the company would not sign a waiver so we had to essentially re-create a functionality that a FOSS library already provides.
Thanks for the link, was not aware of that opportunity. It looks like this might also be a possible source of funding for theorists who write software tools.
Disk I/O. It's basically all blocking except the fairly recent io_uring which is much more than disk I/O and perhaps a bit daunting for "I want to write some data without starving my main thread." I wish you could use select() and poll() on regular files on disk, like you can for almost all other file descriptors.
Sure, every application does that, but the kernel could do it better. What if I'm writing to two filesystems and one of them might have high latency sometimes? Do I make two threads for disk I/O to avoid starving one write when another is slow? The reasons for multiplexing network I/O apply to disk as well.
Not the kernel, but the OS. Backwards compatibility and having GUI libraries LGPLed.
The Linux kernel has a rule not to break users, and for the most part, it is followed. This is not followed in userland, where glibc and the GNU GCC C++ library do break users when they accidentally do not follow the spec. GUI libraries like GTK and Qt break binary and source compatibility every decade or so. Windows tries its best to keep compatibility, and that makes people want to invest in it.
Now, LGPL libraries: it is very difficult to write a self-contained proprietary GUI binary on Linux. This is desirable because of the above - GUI libraries like to break compatibility every decade, so you want to bundle your own so it can run in the future. Sublime had to write their own GUI frontend. Most proprietary developers use Electron.
A solution to both is Iced, but it hasn't been out that long. I will likely try writing a small app in it later. MIT licensed. Wayland and Vulkan support. https://github.com/iced-rs/iced
> The Linux kernel has a rule not to break users, and for the most part, it is followed. This is not followed in userland, where glibc and the GNU GCC C++ library do break users when they accidentally do not follow the spec.
I do think it's a bit harder when the spec is an external standard rather than just a de facto standard of your existing behavior. The kernel doesn't claim or aim to be fully POSIX compliant, so they "just" need to guarantee that they don't break their existing functionality. This isn't a small task by any means, but C and C++ compilers need to do this on top of ensuring that their behavior also conforms to the standards; it's essentially another guarantee they have to uphold beyond what Linux has to deal with. I don't have anywhere close to enough expertise to make any assertions of whether they've done a good job at upholding even the base guarantee of backwards compatibility though, so this point might be moot.
Windows definitely stalls less in those kinds of situations. I remember running both Windows and Linux on my old 2 core 4GB RAM laptop. On Windows I could have easily swapped out almost ten GB of memory and it'd still chug along, whereas Linux would be completely unresponsive after a couple of GB of swapped out memory and require a reboot. I wonder how much the pending MGLRU patch for Linux 6.1 would help with this.
The old Windows Task Manager has code in it that killed other programs so it could start, so that the user could then kill more programs.
I don't know if this feature still exists in modern versions since memory has increased so much but I always found it to be neat that some developers thought about this.
Sounds like a good idea for systems that are absolutely hammered. `htop` could add a `--kill` flag, or maybe `--kill-ram` / `--kill-cpu` which would first try to kill the most ram/cpu intensive application, and then load the UI.
I've ended up in a couple of situations where servers were so hammered it took five minutes just to get an ssh session up and running, because some process took up all the CPU/RAM and I was barely able to do anything on the server itself. Something like that could maybe help.
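Something in that direction can be faked today with a one-liner, which is roughly what such a flag would have to do under the hood (just a sketch - it kills the single largest RSS consumer with no safety checks whatsoever):

    # find the PID with the largest resident set and kill it before even loading a UI
    kill -9 $(ps -eo pid= --sort=-rss | head -n1)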
David Plummer wrote the Windows Task Manager, and he made a 3-part series about it on his YouTube channel [1]. It's a fascinating couple of videos; I highly recommend watching them.
Right? It's getting worse and worse with each update.
I remember praising macOS memory management back in the day, when you could just go to the "Applications" folder and open everything at once without a hiccup.
Today, even rebooting with the "reopen apps" option is a nightmare.
If that were the case, macOS and Windows would own the server market, because who would want a server that "freezes" under "heavy memory load"? macOS is virtually non-existent there, and Windows has a small share of the market and performs poorly in comparison.
I linked threads on LKML that are directly relevant to this subject, about how this (system becomes nonresponsive under memory pressure) is a real problem that people really experience, with a lot of people agreeing that it is a problem and suggesting various mitigations. Most of those ideas presuppose that memory pressure leads to stalls and are about figuring out how to detect when the system is stalled and making no progress and killing processes after it has gone on for a while (often with userspace daemons). I think MGLRU is maybe getting merged in linux 6.x and will hopefully help avoid the stalls.
You linked a list of all CVEs in macOS. I don't see how this is relevant at all.
The first link is a report that the system becomes nonresponsive under memory pressure and a discussion of that report. In that thread, there was not really any agreement about whether this was a bug, let alone what the bug was, or what the fix would be.
The second wasn't a bug at all; it was a patch proposing a set of knobs to allow users to avoid the problem.
I vaguely remember Solaris had a way to prioritise focused X Window application (IA) while keeping it within time-sharing (TS) scheduling class.
Kernel thread for interactive applications in focused window would then have higher priority than background applications improving responsiveness. I am not sure what manipulated the priority, but I assume Xsun (X Window server) did it.
Ok. I got it. Literally speaking there are two (maybe more?) known bugs. Do we know how many there are for Windows and macOS? I get freezes every now and then under macOS, so there _might_ be memory management issues there as well.
My point still stands because I believe the bugs you posted are largely inconsequential. They could very well have been patched by now (or not). My guess is the fix won't uptick the % of Linux in the desktop or server market.
IMHO "the truth is in the pudding": if the largest share of mobile OS's is based on linux (android) and largest % in servers in again based on linux, memory management is overall good enough - otherwise the market share would have dropped considerably.
UPDATE: Revisiting the thread question... I'm barking up the wrong tree. The question wasn't about relevance and market %. Your comment is to the point.
The concept of device drivers being within the kernel is a terrible idea. It should instead offer a stable ABI for drivers to be external and (within reason) be independent of the kernel version.
Disagree but not because of philosophical reasons like “I want to fix my driver and know what it is doing”, but actually practical reasons — tons of companies make a driver plus hardware and end up semi abandoning it.. but some people in the community with open source drivers will often fix it.. then after some time people end up realizing the way they fixed it is better than the original driver and now the companies actually benefit from that new driver. So you end up with a win win for people and companies. So in general closed source drivers end up sucking compared to open source drivers except for very rare situations.. and even after massive battles with say nvidia they are starting to realize they will end up benefiting from working with the open source community rather than against them..
This, a hundred times. Windows drivers are fine until the day the piece of iron they manage becomes obsolete, which often happens to still-modern and perfectly functional hardware. It's not uncommon to see hardware that hasn't been supported by Windows for years working perfectly under Linux. I would have had to ditch most of my audio cards ages ago if I didn't use them with Linux.
On the topic of open-source vs closed-source drivers:
Maintaining an open-source kernel driver is extremely complex and not user-friendly for the individual, so the complexity level is in practice similar to doing black-box reverse-engineering on a closed-source Windows driver, in both cases you need significant resources.
If you need to have a team of experienced kernel developers to maintain a driver continuously, that same team will be able to reverse-engineer & reimplement the closed-source driver just as well should it become necessary.
On the topic of lack of stable ABI under Linux, I find that closed-source Windows drivers have a much longer shelf-life than their Linux equivalents, so in practice I feel like the downside of drivers being closed-source in Windows is less of a problem because in practice you're less likely to have to modify it.
Distributing drivers is also a massive difference between the two models. With Windows, if someone builds a driver, that binary can be installed by any Windows user on a modern kernel (depending on what "level" of the ABI they're building against, the same driver can work all the way back to Windows 7). Once built, any Windows user can just install the binary, whereas with Linux the built binary will only work for that very specific kernel version. The user experience is definitely much better.
> I find that closed-source Windows drivers have a much longer shelf-life than their Linux equivalents.
Just to be clear, if you mean kernel drivers I find this -very- difficult to believe.
There are linux kernel drivers I'd love to see die that will never die. If you're talking about 'downloads from some guys website', you may have a viable example but this is not the common case.
Code lasts -way- too long. For example, the ISA bus is still supported; there ain't no Win10 drivers for ISA sound cards.
> On the topic of lack of stable ABI under Linux, I find that closed-source Windows drivers have a much longer shelf-life than their Linux equivalents, so in practice I feel like the downside of drivers being closed-source in Windows is less of a problem because in practice you're less likely to have to modify it.
I recently helped a friend with her Mac and her "old" Wacom tablet. She now has to download an old version of the drivers for the tablet to work, with some manual configuration to do related to security. It took us a long time to figure out, and she actually wasn't going to figure it out on her own. Plus, how long will she be able to use her tablet on Mac, since the driver is not maintained anymore?
I plugged the tablet on my Linux computer and it worked out of the box without doing anything. I trust this tablet will still work on Linux for a very long time.
I trust the stability of the Windows ABI more than the Mac one, but on Windows you would still have to install some old unmaintained binary that could have security issues to make this tablet work.
Do you have an actual example of an open source Linux driver that stops working when the closed Windows one still works? Are there many of these examples? How many against the opposite situation?
> Distributing drivers is also a massive difference between the two models. With Windows, if someone builds a driver, that binary can be installed by any Windows user on a modern kernel (depending on what "level" of the ABI they're building against, the same driver can work all the way back to Windows 7). Once built, any Windows user can just install the binary, where as with Linux the built binary will only work for that very specific kernel version. The user-experience is definitely much better.
What actually happens on Linux most of the time is that the driver comes with the kernel and the user does not need to install anything. This is unbeatable.
> Maintaining an open-source kernel driver is extremely complex and not user-friendly for the individual, so the complexity level is in practice similar to doing black-box reverse-engineering on a closed-source Windows driver, in both cases you need significant resources.
I'm very surprised. Fixing API breakage to make an open source driver work again seems way easier than reverse engineering the whole thing.
Not ironic - kind of the point. If we cared all that much in the Linux world about a stable driver ABI, we could literally just implement the Windows one, like ReactOS did.
You see, my thought is that a lot of proprietary drivers end up being half-assed, and kernel code tends to be pretty good. Your mileage may vary though.
Making any changes to a driver's code without the device it drives is a recipe for disaster. You have no way to verify that the change you made did not affect the functionality of the device.
Whenever I try to install software on Linux I inevitably end up googling errors to find out what dependencies I'm missing. Install failed because I'm missing the "python.h" header for some C dependency, let me make sure python-dev is there, okay how about libffi-dev, apt install gcc-arm* fixed it for this rando on Stack Overflow? Okay, now the installation fails in a different way, wait a minute, does this Python library expect Rust to already be installed? wtf?
People complain about NPM, but at least the dependency resolution works.
There used to be something like apt-build, which downloaded the sources of whatever you need using your distro's source repositories. I don't know if I remember the name right. I don't think this is a Linux-only problem, but more of a C++ OSS dev problem.
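For what it's worth, plain apt can still cover a lot of the missing-header cases, assuming deb-src entries are enabled in sources.list (the package names here are just examples):

    sudo apt build-dep python3-cffi      # install every build dependency the distro itself used
    sudo apt-get source --compile hello  # fetch a source package and build it locally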
I don't think Linux did wrong, but were we to fund new kernels, distributed capabilities (likely via object capabilities) would be great. That's a huge subject, with many avenues, but a kernel as an isolated thing seems like a less ambitious core than what we could be doing.
Controversial, but I'd probably take QUIC or http3 & try to build distributed capabilities on that. Having fast QUIC in kernel, & available to userspace would be very much "heading towards where the puck is going".
It's notable how bad Linux async I/O & friends had been. Uring (it's much more than io_uring these days) has totally rewritten the rules; it has been a giant leap into modernity. But it's so new! Leaning into uring-style completable syscalls would be essential.
eBPF has shown that programmability of the kernel space opens hella doors. Trying to amplify this win would be great.
The past 10 years have really seen Linux grow all kinds of fast, excellent capabilities, grow into itself. The above uring & eBPF examples are prime examples of this. DMA-BUF has gone from up-and-comer to the core win for much of the kernel, for how desktops & media processing are fast & good & flexible. It'll be interesting to see, but I have a hard time imagining more big-boom events like these continuing to push us way forward.
The Linux kernel is as good as it gets, in my opinion.
Desktop Linux is another story though. On one end, you have GNOME devs who try to imitate macOS without the UX design knowledge and taste, and KDE that is buggy and crashes a lot. If these teams merged they could create something that will rival Windows. I guess fragmentation and lack of good desktops it is.
I don't like that GNOME seems to force itself into situations as something more than a desktop environment. I mean all the extra libraries that somehow end up being dependencies in my non-GNOME system. And gconf, i.e. the Linux Registry, or whatever it has been rebranded to.
Similarly, GNU Info pages when well-written man pages suffice. Split by topic - there's already a mechanism for this.
I don't want to compile a basic utility like `curl` or something and then discover that somebody decided it has hard dependencies on graphical libraries and applications (even if once- or twice-removed).
KDE is superb these days, after the slump that was KDE 4. However, even KDE developers seem to think that a tic-tac-toe game needs to have its board abstracted as libtiktaktoe in case anybody else ever needs to use it.
I had thought that Wayland might be an improvement. It seems to be a regression. I recently learned that the trade-off for avoiding tearing in Wayland is increased latency compared to X11/Xorg. I don't want to care about my graphical display manager. X is not perfect, but it is tried-and-true, and it does not leave me struggling to share my screen in Zoom, or have legible fonts in my creative applications.
Systemd: Binary log files. I should be able to configure my system with `sed` and `awk` scripts that easily modify text configuration files.
On this note, applications should use plain text files for their data more often. For example, web browsers could use a hierarchy of text files, each representing a web page in history, or a bookmarked site. Sqlite is good, but a database is simply unnecessary in many cases, because filesystems are very optimized and perfectly useable for many cases. This encourages hacking and interoperation. I could comfortably browse through my bookmarks in a mature native file system explorer of my preference, instead of struggling with a tricky scrolling menu within a browser. If I want to change browsers or computers, there is no more export/import of links and then they get messed up anyway.
DBus: I'm not sure why the same functions could not be implemented robustly as a standard structure of named pipes.
SELinux: A tremendous gift so often squandered by being disabled with the global flag.
> Sqlite is good, but a database is simply unnecessary in many cases, because filesystems are very optimized and perfectly useable for many cases. This encourages hacking and interoperation. I could comfortably browse through my bookmarks in a mature native file system explorer of my preference, instead of struggling with a tricky scrolling menu within a browser
SQLite is very good indeed. It's fast and offers a nice querying API. You can use SQLitebrowser if you want to browse this data outside the browser UI, and we could always have a FUSE filesystem for sqlite database if we really wanted to be able to browse this data with a file manager. Since SQLite is used everywhere, the effort would be shared. One of the many good things about SQLite is that it is (de-facto) standard, way more than regular configuration files.
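For example, browsing bookmarks outside the browser is already a one-liner against the sqlite3 CLI - a sketch, assuming Firefox's places.sqlite schema and working on a copy of the file, since the running browser keeps it locked:

    sqlite3 places-copy.sqlite \
      "SELECT b.title, p.url FROM moz_bookmarks b JOIN moz_places p ON p.id = b.fk LIMIT 20;"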
> Systemd: Binary log files. I should be able to configure my system with `sed` and `awk` scripts that easily modify text configuration files.
Configuration files are still regular text files, and are way easier / simpler than before systemd / upstart. You are also a command away from having a text representation of the logs, and that's journalctl, which can be piped into the regular unix tools, so I have not found an actual situation where it matters.
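For instance (the unit name is just an example):

    journalctl -u nginx --since today | grep -i error   # plain text, greppable as always
    journalctl -b -o json | jq -r .MESSAGE | tail       # or structured output for jq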
Android took the same kernel and built a massive eco system by solving the app deployment problems.
Not the kernel but deployment of binary apps is almost impossible because of a fragmented platform and distros not providing a universal set of guaranteed, ABI compatible libraries.
A project like what, exactly? Linux wasn't some high-minded academic project that would pick off a menu of contemporary systems research or software engineering goals.
The early Linux project was a punk concept of its era. Linus very clearly started out as wanting to make a Unix-like kernel on commodity x86 PCs of the time. It was a personal, workstation hack for motivated tinkerers. Things like server and supercomputer usage were not imagined yet.
I'm not even sure what a moral equivalent project idea might look like today. It was a typical hacker blend of wanting to emulate something that was scarce/expensive using less expensive resources. Today, the smartphone might be the equivalent commodity platform. Both iOS and Android might be equivalent to the MS/DOS+Windows hegemony. But there isn't any equivalent of the balkanized and expensive Unix vendor market for phones, which technical students might aspire to. Linux didn't set out to rewrite the popular commercial OSs of the day from Apple and Microsoft.
I think one of the problems was to provide only the kernel and leave implementing the userland to people building different distributions. This has led to an incredible amount of replication of work, inconsistencies and incompatibilities and IMHO was one of the reasons Linux on the desktop never caught on.
They messed up the desktop experience. The level of the main desktops (KDE, GNOME) compared to the macOS / Windows experience is poor, to the extent that I'm not sure why we have the level of fragmentation we do.
On the other hand, window managers under linux are freaking awesome (no pun intended).
A little more philosophical, with fewer practical examples, and a bit ranty - comment incoming:
Linux did not proactively foster standards on anything beyond the kernel.
Sure many would shout "not their job!" but here we are in 2022, every disagreement in an obscure chat channel or a forum only 500 people on the planet know exists leads to a new distro, or a fork of a high-profile library, or yet another pre-built KDE/GNOME configuration claiming usability / accessibility and whatnot. Fragmentation abounds, egos are flying around. Feels like a kids club arguing with another kids club about which Transformer toys are the coolest.
Like, come on already. Agree on a common goal and start pursuing it. Moderate aggressively -- if somebody is being a dickhead just boot them out. Done. (Would have saved at least 17 people no less than 10 hours each, judging by several random encounters I witnessed some years ago.)
---
Non-exhaustive list:
- We need strong schemas for cooperation between tools. A tool like `jc`, which strives to parse various UNIX tools' output and convert it to JSON, shouldn't even need to exist, yet I find it indispensable for most of my throwaway scripting or homegrown data-science needs (see the sketch after this list).
- We need less copying between user space and kernel space.
- We need to start moving away from "everything is a file" and have some standardized SQL-like interface to the OS (or any other query language that makes sense for the task; people will adapt).
- We need fewer Python scripts being responsible for, well, almost everything. It's collectively embarrassing and makes a ton of distros very brittle (upgrade from 2.7.X to 2.7.Y and hilarity ensues).
- We need to start bundling more things into the OS by default. People love standards and that's the (apparently inconvenient) truth. Endlessly parroting "freedom!" has gotten us nowhere as it's very clearly visible by the state of anything non-kernel related in Linux land.
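To make the `jc` point above concrete, this is the kind of glue that currently has to live in a third-party tool (a sketch assuming `jc` and `jq` are installed; both parsers shown do ship with `jc`):

    dig example.com | jc --dig | jq .   # DNS response as structured JSON instead of columns
    df | jc --df | jq .                 # same idea for df output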
---
A lot of things from the 70s and 80s still make sense today. But some no longer do and we should recognize that and start moving forward because that old way of doing things is actively hampering progress of the entire IT area.
And no, I truly don't care if I get automated out of my job as a result. At some point the area has to progress further. We are in an evolutionary cul-de-sac.
---
It really does seem that non-kernel Linux work is mostly picked up by very immature people who don't seem to be professional programmers at all.
I love and use fish, but it doesn't solve the problem of everything being text. It's not even a shell problem in my view; it's really a problem of what kind of data is piped from one command to the next. That's quite shell-agnostic.
Linux did everything right, if you want a kernel that looks and behaves like Linux. If you want a kernel that does something differently, see the other replies in this thread. All of these things (such as a stable driver ABI) would radically change Linux, and are probably very bad ideas.
> Or, if you have enough funding to write a new kernel, what would you do differently?
I'd make an open-source kernel inspired by the best parts of Windows NT. Not a direct clone of 30+ years of legacy APIs (ReactOS), nor an emulator trying to fit the square peg of Linux into the round hole of Windows (WINE).
The desktop experience is still a mess. They've become complacent with 'it works out of the box', but that's not the bar. Apple sets the bar and there's no prize for second place. You're on Apple, or Linux, or brain damaged and using Windows, but nobody's using two as their primary driver, and so one ecosystem dominates. I'm glad it's Apple, because the GNU crowd are virtue signalers first and software developers second. For all their yapping, the Linux desktop speaks for itself.
I use MATE as my daily driver and macOS for work. MATE multimonitor support is so much better that I'd rather stick with that and deal with the other stuff where it's not as good as macOS.
I felt the same way and was a macOS user on the desktop for close to a decade. Having used Plasma Desktop via a rolling release distribution as a daily driver dispelled that feeling, though.
It looks good, it's fast and feels professional. The jankiness I used to associate with desktop Linux just isn't there. It's actually stable and reliable, whereas on macOS, I'd run into issues with Bluetooth, or with WindowServer or some other opaque macOS service hogging 100% of CPU time for no apparent reason. Linux tends to have better hardware support, as well, especially for older hardware that might have older drivers that don't support modern macOS.
I'm of the opinion that if someone's needs, even a layman's needs, can be satisfied with ChromeOS, then Plasma Desktop with a browser would satisfy them, too.
The closest I have seen to creating a good experience was Ubuntu Unity.
I noticed it being used in universities, and in the office I used to work at.
I'm disappointed that the Unity project wasn't taken forward, and it seems to me that Gnome just reimplemented it, but missed many of the best parts of the original.
Could you explain more about the "messy desktop experience"?
Many years ago, around 2006-2008, my PC exclusively ran Slackware after saying goodbye to XP.
It wasn't really beefy, so I installed FVWM. I could do all the university assignments on it. Internet browsing, multimedia, USB plug n play, printing etc etc worked fine. Guess my use cases were kinda minimalistic.
These days I mostly work on Mac, but still have a headless Linux PC running for personal tinkering. Probably I need to have another taste of Linux desktop experience :D
> These days I mostly work on Mac, but still have a headless Linux PC running for personal tinkering. Probably I need to have another taste of Linux desktop experience :D
Try out modern Plasma/KDE, it's actually really nice. I say this as someone who has disliked KDE for a very long time before using Plasma.
Linux' memory model is overengineered and confusing. If I were to design a new kernel neither overcommitting nor virtual memory would be enabled by default. That would make it much easier for userland processes to manage their memory.
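For what it's worth, current Linux does let you turn overcommit off system-wide, it's just not the default - a sketch of the knobs (root required), not a recommendation:

    sysctl -w vm.overcommit_memory=2   # 2 = strict accounting, no overcommit
    sysctl -w vm.overcommit_ratio=100  # % of physical RAM counted toward the commit limit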
Linux's fundamental, unsolvable problem is that it's not an operating system. Instead, it's a kernel, with a bunch of other systems stapled to it. This creates headaches both for new members ("Which distro should I use?") to users with a bit of experience ("Should I use X11 or Wayland?") to the admirable people down in the trenches ("alsa or pulseaudio?"). Linux prides itself about its flexibility, but all this does is cause long annoying conversations at every level. Vi, Vim, or NeoVim? These kinds of conversations don't happen with other OSes because other OSes are _fundamentally centralized_ in a way that the Linux community isn't and probably never will be. Right now in Windows there are essentially 2 ways to run Linux programs - VMs and WSL. If Windows used the Linux model, I can very easily imagine a dozen different ways to run Linux programs on Windows, all with their own upsides and downsides and rabid fanbases, with the consumer stuck in the middle of the intersection wondering which way to go.
This fragmentation has real downsides, too. I have an odd multimonitor setup with one vertical monitor and two horizontal monitors. I can configure this in KDE's settings, but the login shell is a separate application and needs to be fixed by googling the solution and changing some config file. I've never had a Linux distro that, when I configure the default monitor when logged in, also uses that setting when logged out. In all my years of using Windows this has never happened!
In some ways, this is one of the ways in which a company controlled project has more of a chance to succeed than a community-built open source project. They have a financial incentive to stick to one plan and do it well, rather than develop a dozen competing products and brute-force a good solution. The user-facing areas of Linux that do the best are projects like Android, Ubuntu, and Pop OS, all of which have a company that keeps them in line.
This is to say nothing of the Google-centric UI of Linux. Don't know how to do something? Google the answer, sifting through multiple possibly out-of-date forum posts, or excavate through man pages to find the answer. In other operating systems, you can confidently rely on the OS helping you out, but not Linux. This behavior is often dismissed as "hand-holding". Well, I suppose I want my hand to be held then! I don't want to waste time finding the answer online, I want the OS to lead me to the answer so that I can find it for myself. I can never remember the correct syntax for systemctl commands; I don't use them often enough to memorize them. So I have to look them up every time. Windows, meanwhile, has provided a GUI (that I know will exist on every Windows machine) that I can use to stop and start these things. In fact, Windows has provided a tool that I can easily use to troubleshoot the system, without having to google for commands first, that is easily accessible. Meanwhile the Linux answer is "here, learn how these dozens of commands and these dozens of config files work; if you encounter enough problems you'll start to memorize stuff." The stench of Stockholm Syndrome is strong.
People complain about the difference between Program Files and Program Files (x86) in Windows. Well, let me introduce you to the dozen different places to install an executable in Linux! Also, half of them are symlinked to the other half in an effort to decrease confusion (they seem to have failed there).
Yes there are problems with Windows. The registry should have been an actual database, dark mode isn't os-wide, and most of the settings still haven't been corralled away from winforms into the new settings app (though you can't seriously be suggesting that Linux's "put a config file anywhere in whatever file format you like" system is better). I want to like Linux but every time I try it I end up having frustrating problems that take way longer to solve than they would on Windows.
Edit: Holy moly, sorry for the accidental essay. Hadn't realized how long this had become
So Linux is worse because you are more familiar with Windows?!
Also, I am running into all kinds of problems every day with Windows and end up on Google all the time with way worse answers, no way to deeply debug most things and, worst of all, no way to fix them myself. I am way too often depending on some big capitalistic company to fix my problems and hoping they feel like doing it. Linux is DIY-capable. If I have the abilities and time to fix something, I can most of the time do it myself.
Also to answer some of your questions:
- you want to use PulseAudio; ALSA has no use case on a desktop
- and use vim or neovim if you are into plugins. vi is just an old editor which got replaced by vim.
To your first point, yes this is a downside of Windows. And if I was in IT I'm sure there are many things about Windows that make it a pain for deeply debugging these problems. But the dark days of Vista are long over - my PC rarely crashes, and when it does it's pretty obvious what program crashed it. If not, Task Manager is hard coded into the OS to try its best to open. Linux has the shell, but the WM can freeze to the point where I cannot get to a shell to start troubleshooting. If I can, hopefully I have my phone on me to google the commands I need, because (perhaps luckily) my computer doesn't crash often enough for me to need to know the commands.
"DIY capable" to me is code for "we didn't finish designing the system, so you have a bunch of choices to make that you wouldn't need to make on another OS." Yes Linux is great for those that love to tinker. But I don't, I just want a PC that works the way I expect it to.
> you want to use Pulseaudio, alsa has no usecase on a desktop
You'd be surprised how many people still swear by alsa-only. Other people prefer jack, and now you can add pipewire to the mix... plus the few people who like OSS4. At least old school OSS is dead for good.
PipeWire is a solution for people who want it all though. It's replacing PulseAudio and Jack. It should simplify the stack. OSS4 seems anecdotal in the Linux world. People can still use ALSA directly, but people who don't care will have something that works well out of the box without having to care.
Thanks to PipeWire, I can use my midi keyboard with no delay, still use my browser and music player at the same time, all this without having to tweak any configuration, messing with Jack and/or PulseAudio.
With Windows, even if I don't know the exact steps to do something, I can figure it out because some smart people have done a lot of UI design work to make that so. Off the top of my head, I can't tell you the exact steps to change the mouse speed on either Linux or Windows - I don't do that often enough to memorize the process. But I know in Windows it will involve navigating to the Settings app, so I go there. The mouse is a device, so I click on "Bluetooth and Devices" in the left taskbar. I then click on "Mouse", which takes me to the mouse speed. The process in Linux depends on your DE. If you use something like KDE, the process is similar to Windows; but if I'm using i3, I'll probably need to go into the X11 settings. But I can't google "how to change mouse speed in Linux", because again, Linux isn't an OS even though it gets treated as one.
So the end result is the same, I can change the mouse speed in both OSes. But Windows definitely required no prior research, whereas Linux may or may not have required research depending on how the user set up their machine. And if I was new to Windows, I could google "how to change mouse speed on Windows" and get a straightforward answer. You could argue that on linux, I should have googled "how to change mouse speed in Ubuntu/Pop OS/KDE Neon/etc". The problem with that is that sometimes I can use generic Linux instructions, and sometimes I use distro-specific instructions. People actually point non-Arch users to the ArchLinux wiki!
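To make the i3/X11 case concrete, the research usually ends at something like this (the device name is a placeholder you have to look up first):

    xinput list                                                # find the pointer device's name
    xinput set-prop "My Touchpad" "libinput Accel Speed" 0.5   # accepted range is -1.0 to 1.0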
Incidentally, I looked at the general recommendations page on the ArchLinux wiki [1], and found it seems to recommend alsa as a default and pulseaudio if you want some more complex features. Maybe this is wrong, maybe there are differing opinions in the community. Again, there's no way in Windows that you would run into two competing ways for sound to be sent from programs to your headphones/speakers because of the unifying force of the company. (Personally, I've always wondered why audio isn't just part of X11 or Wayland, since X11 and Wayland seem to control everything else the user interacts with.)
I also know from personal experience that Vim vs NeoVim is a complicated discussion and full of a lot of drama over the maintainers of Vim being hard to work with - yet another problem that wouldn't happen at Microsoft because you have a management structure that can deal with these sorts of problems without defaulting to the lazy open-source solution of "just give the customer more options."
Seems just like my experience with Windows; I am not sure if it's a driver or software issue though. My printer is often reported as having an error on both my Windows laptops and does not allow printing (it gets stuck in the spool queue), but prints fine from iPad, Android or Linux. Even a clean install of Windows did not help.
Realtime audio. Audio interface makers don't even release Linux drivers. I wish the situation could improve so that I can use Linux for both work and music production.
yes, plug and play, but if you want the top performance from a top manufacturer (Universal Audio or RME), as well as a ton of utility software written by the vendors you need the official drivers and support. And I do need top performance, the community drivers are unsatisfactory. BTW I agree that there's some progress, albeit still years behind Windows and Mac.
I don't know about the kernel, but from my experience I generally struggle with the philosophy of "from nerds, to nerds." Most problems require sudo this and sudo that, which leads to multiple new problems, and overall it is not user friendly. It gets better with development and also with user experience, but it is damn hard to get into Linux as a noob. Also, IMHO the default values for settings are often not wisely chosen.
As a professional artist I still cannot use Linux for work, unfortunately (no 10-bit support, no professional SW compatibility, GPU driver mess, crazy difficult setup of VM dGPU passthrough, etc.).
it's hard b/c linux ended up crushing every commercial competitor outright. here we are asking where it went wrong. while the point of free software (presumably) is not to crush commercial competitors, i'm not mad at it.
Linux development I think could benefit from a more coordinated and staged approach to development.
A more competitive operating system would be based on Gentoo. After all, one of the most popular OSes for laptops, if not the most popular, is ChromeOS, or was at some point, and it was based on Gentoo if I read that correctly. Correct me if I'm wrong.
To make it more friendly to users of Debian (and all its children), I think it would be great if apt commands ran transparently to compile, optimize, install, and remove software without breaking things.
Because of the time involved, system vendors like Dell could make a repo for each system they offer, if the build process is coordinated from top to bottom.
In between hardware and basic OS design, a fully fledged desktop is desirable for new machines, and I think KDE's desktop is sane enough to work well even if it does consume more memory. But everything would be perfectly optimized. A new system could boot in just 7 seconds or less. There would be no Snap, or weird repos you have to trust with your life.
AppImages for apps would be fine if the system would always know where they are and how to update them.
I know basically nothing about Gentoo and how to install and use it. I just know it is a major pain in the ass. But the thing is, this is the only way to make a professional operating system for the general public. Google proved this with Chrome OS.
Making everyone become a master at using Portage and understanding what all these options do is not needed if the blanks are filled out in advance and you can do something like:
Being able to do that, boot quickly, stable systems tailored by the manufacturer - it's really full-stack control. The Ubuntu stack is looking really nasty recently. I hate having to rely on it.
Making a good operating system should be as easy as going to a vending machine, pressing buttons on a screen, and out pops a usb stick which is brain dead simple to install without destroying everything in your system. Perhaps a packet comes out from the machine that explains how to avoid data loss.
The vending machine would be like the menus at a store. You select what kind of CPU you use, and you can be as nerdy about it as you like regarding your make and model of computer. It could be like something you would find at MicroCenter: a little kiosk that Dell would make to cook up a custom system and put it on a fast disk, where someone at the store can help the elderly preinstall it.
The musician's store kiosk spits out USB drives that are preloaded with vetted software. Paid-for software can be selected, and the music store gets a cut of the royalty. Whatever goes on that disk just works. Can we just do that?
The Game store sells USB drives that contain only one game. It has just what it needs to play a game on the system type you select.
Staging development could help improve software quality. It is always frustrating on Ubuntu trying to find the latest and greatest. Wouldn't it be nice if that software was also byte-code optimized? Maybe it's only 10% better. But we have limited electricity and every little bit of savings helps.
Quality is ensured so if your system isn't listed then too bad for you. However you could try your hand at compiling your own from a middleware distro that had a similar kiosk interface but more options and warnings of what may or may not happen with your selections. What the kiosk pops out is always 100%.
A kind of royalty system can circulate the kiosk service for different stores. System makers can get involved on the demands of stores that wish to serve up desktop systems.
There is always a demand for newer better software and if the integration is always simple, people are willing to pay for usb sticks.
Linux as an ideology is like all ideologies: flawed. We just want software that doesn't disrespect us. We want a clear relationship with our vendors. We do not wish to be warped into a mind-stealing model of business. People would use kiosk software not because the software is all organic free-range GNU software. They would use it because they trust it and it does what they want it to do.
Security. They have so much ass-backwards, and it is the hill everyone who matters will die on. Even Android, with all the turd polishing they do, is still a wasteland. Strawman arguments and whataboutisms are a hard drug.
I can tell you as an end-user, linux is very difficult. It does not have to be.
I am saying this as a person with a computer science degree who worked in the field for a while, although I no longer do. My point is that I have more experience than the average user, but I still think it is very, very difficult.
I could spend a lot of time learning it, and do it fine, but I just don't have the time. I have other larger issues to deal with.
I started off with MS-DOS, so I am VERY used to command lines. But I hate them now. I just cannot bear them. And for a lot of stuff, you have to string a bunch of fucking command lines together. It wouldn't be bad if I was a sysadmin and doing the same thing over and over and just automate it. However, I only usually need to do things one time, so it is a real pain to have to type these long lines of command lines, where it is easy to make a mistake when typing it in, when in windows, all I have to do is click and move an icon for one-time operations.
There's so much on linux that sure, I know how to look stuff up and do it, but it's just a pain, when these things I KNOW can be automated by someone else and put into a GUI or somewhere. Some kind of apps included in the distro.
One thing that seems to have gotten better is downloading actual Linux. It used to be:
and it would go on for about 20 different versions. That sucked ass big time.
Just give me a button that says "Download"
I totally understand the purpose of putting all the versions there. But it is stupid. Just put "Download" and another button that says "Choose which version you want to download / expert".
And most do this now, so it is great. But as recently as a few years ago, none of the distros were clear as to which one should be downloaded.
.
When it was the very last moment, the very last day to use Windows 7, I decided to switch to Linux permanently. But as I did, I found four apps that I HAD to use that did not work on Linux. I tried to do emulation and stuff, but it just became too big of a nosebleed, because there was no button that would automatically load an emulator, that I found, and that, with a bunch of other stuff, was just taking up too much of my time. I could LEARN, I know I could, but I just had no time to fuck with it anymore.
On the other hand, installing Windows 10 was "Download" and it downloaded and it worked. Nothing else had to be done. I didn't want to go to Windows 10, but I just didn't have time to mess with linux.
Linux is very difficult. I know this because I have a degree in CS, worked in tech, and was in charge of acquiring new tech, learning the new tech, and teaching the new tech to others. Learning was my job. So I know I could learn Linux, but I have no time and it is too difficult; if it is difficult for me, then it is a real pain in the ass for someone who has no experience at all in computer systems.
I think that linux distros should just only have complete newbies on the team to test everything and let the distro team members know what is unclear.
Before, if you knew the standard unix-ey tools you could get by. For example, want to list services? Just use `ls /etc/init.d/`. Want to monitor a log file? Use something like `tail -f /var/log/whatever`. Want to see what filesystems are supposed to be mounted? Just open /etc/fstab with your favorite text editor. Sure, it wasn't super consistent, but you used the same tools as you always used, whether you're editing source code or whatever. You had one very flexible toolbox that you had to remember the options for.
So as an example, how do I see what services are available under systemd? How can I reliably see what filesystems are supposed to be mounted at boot? The answer to both of those tends to be "systemd spreads its files and configuration all around the filesystem, so you need to learn some bespoke systemd command that you'll only ever use for systemd".
And while those previous init systems were inconsistent, projects like OpenRC demonstrate that it's possible to accomplish systemd's goals and greatly simplify that init-system mess without breaking principles like locality of behavior or "everything is a file", and to keep it compatible with things like grep and tail and all that.
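To make the comparison concrete, the systemd counterparts to those examples look roughly like this - the point being that they're bespoke subcommands rather than plain files:

    systemctl list-unit-files --type=service   # roughly the old `ls /etc/init.d/`
    journalctl -u whatever -f                  # roughly `tail -f /var/log/whatever`
    systemctl list-units --type=mount          # mount units systemd currently knows about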
>> How can I reliable see what file systems are supposed to be mounted at boot?
I still use fstab for this.
>> Sure, it wasn't super consistent but you used the same tools as you always used to.
You can still use most of the same tools. I use tail, grep, and awk like I used to, but I like the fact that with systemd I get a system configuration layer (which includes the init system) rather than having different tools in each distro.
Do you prefer the old init system because it is something you are used to (and do not want to change), or do you have specific instances where systemd is broken for you?
> With systemd, You can use journalctl to tail the log file by journalctl -u <unit> -f , so something like journalctl -u postfix -f
Last time I seriously used systemd I ran into occasional log corruption. I've never not been able to just tail or cat log files, even if they became corrupt (somehow?), but for whatever reason systemd can (or could?) get your logs into a state where they're unreadable with its purpose-built journalctl tool. For example:
It was a long while back so I hope it's fixed by now, but I'm skeptical of the quality of the whole system given how much worse it was at reading files. (Compared to grep, which has never failed me.)
Sure, but how is that an improvement over locality-of-behavior and having unit files in easily known locations? Here's a rough list of places you can find unit files:
#Places you can find systemd unit files
/etc/systemd/system/*
/run/systemd/system/*
/lib/systemd/system/*
...
$XDG_CONFIG_HOME/systemd/user/*
$HOME/.config/systemd/user/*
/etc/systemd/user/*
$XDG_RUNTIME_DIR/systemd/user/*
/run/systemd/user/*
$XDG_DATA_HOME/systemd/user/*
$HOME/.local/share/systemd/user/*
/usr/lib/systemd/user/*
Now if you simplified that list you wouldn't need to learn a new tool and either memorize or look up a bunch of arguments. You could just have a sane directory structure and use some path globbing.
With a bit more work (or possibly just a different balancing of priorities, with more care taken to locality of behavior) I think you wouldn't need some other tool.
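To make it concrete, with a flatter layout the kind of thing I have in mind is just ordinary shell globbing. This is only a sketch using the two main system directories; the grep pattern and the nginx name are purely illustrative:
# list every installed service unit the old-fashioned way
ls /etc/systemd/system/*.service /usr/lib/systemd/system/*.service
# find which units launch a given binary
grep -l 'ExecStart=.*nginx' /etc/systemd/system/*.service /usr/lib/systemd/system/*.service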
I think you're really missing the point here: unix (the philosophy, not the implementation) is a programming environment. Yes, systemd has created tools to let you do most of the things you used to be able to do using standard tools, but there's a big difference between having a toolbox and having a program feature.
A more complicated example: inotifyd is very useful. How could I go about triggering a script every time a log file is written to under systemd? When you start looking at these tools as part of a cohesive programming environment, then you start to see systemd as full of edge cases you have to account for. Let's say I want to count how many times a particular event appears in a log (quickly and easily, this isn't production) and send it to my cellphone. In unix-philosophy land I can add something like this to my inotifyd-tab: `grep "some-event" | wc -l | pushbullet --push my stream`. How could I do something like that in systemd-land?
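For the record, as far as I can tell the closest systemd-native answer is a path unit plus a oneshot service. Here's a rough sketch (the unit names, log path, and script path are made up, and it reuses the hypothetical pushbullet command from above):
# ~/.config/systemd/user/logwatch.path
[Path]
PathModified=/var/log/whatever
Unit=logwatch.service
[Install]
WantedBy=default.target
# ~/.config/systemd/user/logwatch.service
[Service]
Type=oneshot
ExecStart=/home/me/bin/count-events.sh
# /home/me/bin/count-events.sh
#!/bin/sh
grep "some-event" /var/log/whatever | wc -l | pushbullet --push my stream
# then activate it:
systemctl --user enable --now logwatch.path
That works, but it's two unit files plus a script instead of one line in an inotifyd-tab, which is kind of my point.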
>Do you prefer the old init system because it is something you are used to (and do not want to change) or do you have specific instances where systemd is broken for you ?
I can see you're already dismissing the "unix philosophy" complaint, and really that is where I'd like to focus. Systemd will keep getting better in most cases, so the fact that I've had frustrations with it in the past isn't really a big deal. I'd be willing to deal with that if it was actually, you know, better than something like openrc.
But since you've asked, here's a non-exhaustive list of points that I've been frustrated enough to actually document them. I imagine a lot of them have been fixed by now.
Years ago I wanted to run debian on a Kobo ereader. Unfortunately the built-in OS image was not running systemd, and the kernel was several revisions out of date. While I had no problem getting a debian chroot running, all of the services were designed to run under systemd, which made the whole project much more of a pain in the ass than it should have been.
By default systemd will kill long-running processes when I log out. Processes like screen or tmux. This has since been resolved, presumably by my distro somewhere, but it took a solid while to figure out what was going on when that behavior suddenly changed. (The logind setting that controls it is sketched after this list, as far as I can tell.)
When troubleshooting a raid array using a mipsel processor, I had persistent network issues. I took the boot media out for troubleshooting, but when I ~~chrooted~~ systemd-nspawned into the host to try to address the problem, I discovered that journalctl would segfault. Thankfully /var/log still had all the entries I needed to fix the problem and binary logging wasn't enabled, or that would have been a much bigger problem. This appeared to be a general issue with running journalctl under qemu-static and binfmt.
For some reason my mother's computer could no longer resolve DNS. Ripping out systemd-resolved seemed to fix it, but not before I lost a few more hours.
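Re: the kill-on-logout point above, for anyone hitting the same thing: as far as I know the knob that controls it lives in logind.conf, plus lingering if you want user services to survive logout entirely (the username below is just a placeholder):
# /etc/systemd/logind.conf
[Login]
KillUserProcesses=no
# or, per user:
loginctl enable-linger myuser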
I don't care where the files are. You're fixating on an implementation detail.
I want commands that achieve my goals. Where they get their info from I couldn't care less.
It seems you're more interested in reverse-engineering your system than actually doing something with it. Which is your right of course, but surely you can see how that can't be the priority of everyone.
It is true, across a bunch of different metrics I listed. Also an embarrassing number of privilege escalation bugs.
I mean I don't know what you'd accept as evidence that it wasn't working well, but if you get an idea of what could actually change your mind on this let me know. I feel like I've more than documented the issues I've personally had with it.
I'm not calling your experience into question; I'm saying that I've never replicated it.
I have 5 Linux machines and systemd made my life a little better on all of them due to using the same commands and not having to pay attention to distro differences. I periodically back up logs and configs, I scan for errors and post them to private channels, I can monitor whether a service fell on its face, I have a small naive resource meter, and a few other things.
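To give an idea, the sort of one-liners I mean (the unit name and time ranges are just examples):
systemctl --failed                      # did anything fall on its face?
journalctl -p 3 -b                      # errors since the current boot
journalctl -u myservice --since "1 week ago" > myservice.log   # dump for backup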
I'm not an advanced sysadmin by any meaning of the word, but I know enough to make my life easier. Maybe you're using your machines in a very different manner that makes systemd crap the bed? I'm merely saying that it doesn't do that for me.
> if you knew the standard unix-ey tools you could get by
And if you know the standard systemd tools, you can get by with systemd. It's far from enigmatic, the docs are freely available. Yes, systemd does things differently from what you might be used to, but I don't see how that's an issue beyond having to surf the usual learning curve of a new tool.
> how do I see what services are available under systemd?
systemctl list-units --type=service
> How can I reliably see what file systems are supposed to be mounted at boot?
You can still use /etc/fstab, systemd will parse it at boot time. For mount units, you can use:
systemctl list-unit-files --type=mount
> The answer to both of those tends to be "systemd spreads its files and configuration all around the filesystem, so you need to learn some bespoke systemd command that you'll only ever use for systemd"
/etc/systemd or /usr/lib/systemd for global stuff
$HOME/.config/systemd for per-user stuff
> but I don't see how that's an issue beyond having to surf the usual learning curve of a new tool.
I responded to that in another post, but basically unix is a programming environment and when you start composing tools together you get a powerful solution for quickly throwing things together.
That's kind of the point: it often prioritizes large-enterprise needs over the convenience of lone sysadmins. Binary logs are great if you're a large enterprise that pays to have those logs ingested into an Elasticsearch cluster, but less useful if you're not at that scale, as one example. Systemd has great support for complicated log-collection daemons, but makes it more difficult to just rsync your logs to some other server.
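You can still get the logs out, it's just an export step plus a sync instead of pointing rsync straight at /var/log. Roughly, with a placeholder unit name and host:
journalctl -u nginx --since yesterday > /tmp/nginx-$(date +%F).log
rsync -az /tmp/nginx-*.log backup-host:/srv/logs/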
Systemd does not make people angry; they choose to be angry, mostly for religious and nostalgic reasons. Having a well-integrated system is really valuable and saves you a ton of work and time. If you want to replace most of systemd's features, you need something like 25 different pieces of software which all work differently, are configured differently, and are not that well integrated with each other, which means you have to do more things and be much more careful that everything works together.
Devuan has one and only one selling point: no systemd, which makes it an unattractive choice for most people.