You're being too dismissive IMO. Do you really want the Linux kernel to focus mainly on 4MB picture frames at the expense of everybody else? As an embedded dev I very much welcomed the device tree; having to create a billion variations of the same BSP to support various configurations was not fun. "Conditional code" is more prone to bitrot than anything else, since you have to test a bunch of different configurations if you want to make sure it even still compiles correctly.
Besides you're still free to make your own single purpose micro-optimized kernel with only the drivers you want if you feel like it.
I can believe that today's kernel is harder to get to work decently on a 4MB picture frame than in the past. On the other hand it's a lot easier to work with modern embedded SoCs. I think it's a worthy trade-off and a very good pragmatic decision.
I will take, 100% of the time, a kernel that /doesn't compile/ because a bitrotted option was set over a kernel where the same code sits in an "if (my device tree flag)" block: it compiles, it's equally broken, but you'll never know until you've spent days or weeks of debugging to figure out that bit was rotten and nobody had tested it for years before you.
And in the meantime, that rotten code shipped in every phone and server on the planet for absolutely no reason.
At least, when you decide that CONFIG_ARCH_TOASTER is no longer supported, you CAN trim the code out easily. You can't do that with code that is living in the shadows.
Code that's dynamically disabled is at least checked by the compiler, you can detect a whole bunch of errors that way (function prototypes changing, using deprecated APIs, naming clashes etc...). Of course bitrot will always happen if code is never tested but code that's never actually built will rot much, much faster in my experience.
Then some day one poor sod actually wants to use the option, gets four pages of compilation errors, and decides maybe he didn't need it after all. Code that compiles doesn't necessarily work, but it's a start...
>At least, when you decide that CONFIG_ARCH_TOASTER is no longer supported, you CAN trim the code out easily. You can't do that with code that is living in shadows.
I suppose you're right, but again, I feel like you're optimizing for the wrong thing. For one thing, the Linux kernel is not too keen on removing stuff in my experience: in general, older APIs coexist with newer ones and board support is not dropped willy-nilly.
Beyond that I don't know if I should trust my judgement over yours but I'm sure I'll trust the kernel maintainer's judgement over both of ours.
In my codebase we put our configs behind if (CONFIG_TOASTER) { ... } statements, and then #define CONFIG_TOASTER to true or false.
I think this is a pretty good middle ground: we still get compile errors to help against code rot, but code needed for special configs stays disabled and isolated.
> Code that's dynamically disabled is at least checked by the compiler
You can still trivially ensure that a kernel with CONFIG_ARCH_TOASTER compiles fine by setting up a build server which automatically checks each and every option.
You don't just need to check each single option; you also need to check every combination of options that might reasonably be used together to make sure it's handled properly. Given the size of the kernel and its thousands of configuration options, testing all viable combinations is not reasonable.
Read the rest of my comment: the Linux kernel is pretty damn modular as it is. Nobody forces anybody to build in every driver ever written just to be able to load them from the device tree. But if you want to do it, you can, and that's great.
If you don't you can still toggle your individual drivers and modules and compile a lightweight kernel. I think what the parent complains about is that the kernel is too dynamically flexible and that adds a certain footprint, in particular in terms of code size. That might be true but again, I think it's a good compromise.
The reason execute-in-place is not super well maintained in the modern kernel isn't because the Linux devs are JavaScript developers who want to break everything and reinvent the wheel every month; it's because it's getting rarer and rarer to develop Linux-enabled embedded hardware using memory-mapped NOR as main storage, so the focus moved elsewhere. You might as well complain about MMU-less systems not being properly supported.
I agree with the 'make your own little kernel' approach for embedded. Supporting a POSIX-workalike subset for just the parts you need and including lwIP is really much easier than trying to surgically remove the bulk of Linux.
Would it be a good idea to have major kernel forks targeting different sizes? Say, for example, {kernel_tiny, kernel_small, kernel} targeting {<4MB, <100MB, <inf} respectively. Although, as you mention, I'm not sure it is really a wise choice to use Linux for really small and simple systems.