What you say works 100% for trivial CRUD applications. That is also where SCRUM is still a bad framework, but at least SCRUM works at this trivial level of software development.
When a project is non-trivial, people, especially developers, cannot be easily replaced, because no one can replace a specialist with several years of experience on the spot. What I can achieve in one day and what somebody else can achieve in a day highly depends on what it is. Write a field to the database? Fine. Write a rule engine? I'll probably be a factor of 10x more productive than an average developer. Write some GUI/CSS? A frontend developer is 10x more productive than me.
There are only relatively few startups or non-startups which need complex infrastructure from a technical point of view...
In reality, there is a strong bias in favor of complex cloud infrastructure:
"We are a modern, native cloud company"
"More people means the (startup/manager/...) is more important"
"Needing an architect for the cloud first CRUD app means higher bills for customers"
"Resume driven development"
"Hype driven development"
... in a real sense, nearly everyone involved benefits from complex cloud infrastructure, even where, from a technical POV, MySQL and PHP/Python/Ruby/Java would be the correct choice.
One of the many reasons more senior developers who care about their craft burn out in this field.
Sorry, you lost me there. I have a CS background, and literally every single colleague in IT with an electrical engineering background was subpar compared to CS students. (Of course I don't claim that this is true in general, or that there are no electrical engineers who can be great developers.)
One background I do respect is mathematics: it is always a joy to learn from mathematicians, and they can model very nice abstractions. Again, this is my personal experience; no claim about how general it is or that there are no exceptions.
This is a different field. Many software engineers have no clue how to set up networks or administer things like routers or firewalls. At the same time IT people usually don't know how to write things like backends for web apps.
I only started using KDE with KDE 4, then quit and came back for KDE 5. Now I'm running KDE 6 on Wayland, and it is the most stable it has ever been. There is a significant push for bug fixing and polish. The port to Qt 6 started a major release cycle, and I expect it to mature a lot in the next few releases.
I used MermaidJS for diagrams in one of my projects.
Initially it was fun, although MermaidJS lacked support for the diagram types I used at that time.
What stopped being fun is that MermaidJS needs a whole Chrome browser engine to render the diagrams, and it is built on Node.js.
Trying to generate my diagrams in another environment again was a total PITA and not worth the time.
I would love it if the MermaidJS devs would port everything to another tech stack, stop using a whole browser engine to render the diagrams, and make diagram creation easy and reproducible for build pipelines.
> make diagram creation easy and reproducible for build pipelines.
Up to you, but PlantUML is one .jar file and thus renders very easily in a pipeline. So, between waiting for MermaidJS(!) to port everything to another tech stack, or using a tool that has many more diagram types than Mermaid and has an offline render tool ... up to you.
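To make that concrete, a pipeline job for this might look something like the following sketch (the GitLab CI format, JRE image name, and file paths are my assumptions, not from the original comment; PlantUML itself only needs `java -jar plantuml.jar` plus an output format flag):

```yaml
# Illustrative CI job: render all .puml files to SVG with the standalone jar.
render-diagrams:
  image: eclipse-temurin:21-jre   # any JRE works; image name is an assumption
  script:
    # -tsvg selects SVG output, -o sets the output directory
    - java -jar plantuml.jar -tsvg -o rendered/ docs/*.puml
  artifacts:
    paths:
      - docs/rendered/
```

No browser engine, no Node.js: the whole renderer travels as one jar file, which is exactly what makes it reproducible in headless CI.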
That's the complaint. Have fun trying to get attractive and reproducible headless CI builds. I still have some issues with font rendering using the latest version of their CLI.
Hi Jeremy, congratulations on the launch; the website looks very nice indeed.
I am honestly mostly interested in your reasons for mixing HTML/CSS generation into the Python code. Disclaimer: I am very biased towards separation of concerns and like my backend just returning JSON/XML/whatever data, plus a templating system.
Of course this increases the ramp-up time to learn a framework, but then it is IMHO very powerful, flexible and fast.
Could you perhaps elaborate on your choice for FastHTML and what tradeoffs you see?
I'm a big fan of Locality of Behavior (LoB): https://htmx.org/essays/locality-of-behaviour/ . I don't think this need be incompatible with SoC. But even if you did think so, I believe that it's better to have everything in one language as much as possible, with the simplest possible specification of marshalling over network boundaries. My view is that hypermedia is a better way to do both of these things.
(I think HTML templating is a historical accident for what it's worth, and I hope it dies.)
> (I think HTML templating is a historical accident for what it's worth, and I hope it dies.)
HTML templating does have one very nice benefit though: there’s a seamless path between designing and iterating on a static HTML template (which renders in the browser) and then sprinkling in the dynamic bits on top of that.
If you start with fairly complex markup in the initial design, I’m imagining it could be tedious to rewrite the whole thing in Python. Or is there some tooling that can help with this?
FastHTML has a NotStr(X) component that renders X as HTML.
I just copied a big HTML Tailwind component to a NotStr() and it worked fine.
I then split it in two, before and after, so I could make the dynamic bit from natural FastHTML components and it worked fine returning Div(before, dynamic_parts, after).
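To illustrate the pattern (this is a minimal sketch of the idea, not FastHTML's actual implementation; `RawHTML`, `render`, and `div` here are my own illustrative names standing in for `NotStr` and the FT components):

```python
# Sketch of the NotStr pattern: wrap raw HTML so the renderer
# passes it through unescaped, while plain strings get escaped.
import html

class RawHTML:
    """Marks a string as already-rendered HTML."""
    def __init__(self, s):
        self.s = s

def render(node):
    if isinstance(node, RawHTML):
        return node.s                  # pass through untouched
    return html.escape(str(node))      # everything else gets escaped

def div(*children):
    return RawHTML("<div>" + "".join(render(c) for c in children) + "</div>")

# Split a copied component into static before/after halves
# around a dynamic part, as described above:
before = RawHTML('<section class="hero">')
after = RawHTML("</section>")
page = div(before, "Hello <world>", after)
print(page.s)
# → <div><section class="hero">Hello &lt;world&gt;</section></div>
```

The dynamic middle stays safely escaped while the pasted static markup renders verbatim, which is why the before/dynamic/after split works.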
I plan to convert most of my smaller websites to FastHTML in the next few days, because it's much more enjoyable for me.
I am not very happy that we need at least CSS/HTML/JavaScript (ok, HTMX...) for web applications, and would love to have a simpler tech stack.
For me, the biggest concern is that CSS/HTML/JavaScript do not go away, and it seems to me that when I choose FastHTML I still need a decent understanding of these AND need to understand how FastHTML transforms the Python code on top of them. Templates mostly show me what I will get once they are rendered, which means less mental work for me.
Templating without embedded logic, like Mustache, is acceptable to me, and I have found good use cases for it. Once templating systems become obviously Turing complete, I see a problem. ;-)
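The appeal of logic-less templating can be shown in a few lines. This is a toy sketch in the spirit of Mustache, not the real Mustache spec (no sections, no partials): placeholders are replaced from a context dict and nothing more, so the template language cannot smuggle in loops or conditionals.

```python
import re

# Toy logic-less substitution: {{name}} placeholders only.
# Because there are no control structures, the template itself
# can never become Turing complete.
def render(template, context):
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(context.get(m.group(1), "")),
        template,
    )

print(render("Hello {{ name }}, you have {{count}} messages.",
             {"name": "Ada", "count": 3}))
# → Hello Ada, you have 3 messages.
```

All decisions about *what* to show stay in the host language; the template only decides *where* it goes.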
FastTags (FT) are a 1:1 mapping to HTML. It takes ~5 mins to learn. There's no transformation other than that the function name is the tag, the positional args are children, and the kwargs are attributes. (Oh and we have to rename `for` and `class` since they're reserved words.)
I understand your reticence, because there have been a great many similar-looking projects over the years that create abstractions over the foundations. This isn't one of them -- it's a direct simple mapping.
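The convention described above is small enough to sketch in a few lines. This is an illustrative toy, not FastHTML's implementation (the renaming keys and helper names are my assumptions): function name becomes the tag, positional args become children, kwargs become attributes, with reserved words like `class` and `for` renamed.

```python
# Toy FastTags-style mapping: name → tag, args → children, kwargs → attributes.
RENAMES = {"cls": "class", "fr": "for"}  # Python reserved words get aliases

def ft(tag, *children, **attrs):
    parts = [f'{RENAMES.get(k, k)}="{v}"' for k, v in attrs.items()]
    attr_str = (" " + " ".join(parts)) if parts else ""
    inner = "".join(str(c) for c in children)
    return f"<{tag}{attr_str}>{inner}</{tag}>"

def Div(*c, **a): return ft("div", *c, **a)
def P(*c, **a):   return ft("p", *c, **a)

print(Div(P("hello"), cls="box"))
# → <div class="box"><p>hello</p></div>
```

There is no template language and no transformation step to learn: composing components is ordinary Python function composition.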
Inspired somewhat by the comments in this post, yesterday I dove into rewriting all the Go templates for my project in gomponents, a Go HTML builder that seems at least superficially similar to the model you're using here.
So far, there are ways in which it's definitely nicer to build things with an actual programming language, to have proper function signatures and types, to be able to easily break things down into composable bits.
But it also certainly seems to me at least to obscure the overall HTML code structure, compared to what I had in the templates. Maybe that will change somewhat as I get used to "reading" the new system, but the very fact that it's now much easier to compose things means that the overall structure won't be in one place any more. Just one of the trade-offs of a system like this.
IMO it's not a "structure in one place" vs. "more composable so more spread out" trade-off:
In HTML, you are much more likely to have things in one place because you don't have great options otherwise.
In Python, you can choose to keep things in one place and not compose them, or you can choose to refactor to compose them if that makes them better for your particular use.
It is, however, definitely true that with the extra flexibility you have the option to refactor so much that it becomes less readable. How much to refactor, split things apart, decentralize, and DRY vs. how much to keep things in one structured place can be a hard thing to figure out!
My impression having done Django for over 15 years is that FastHTML allows for separation of concerns, albeit not within templates. Rather, most of the "presentation layer" is executed during the return statement. A common pattern in people building non-tiny FastHTML projects is to break out presentation into a components layer and business logic into a business layer.
Often we see "components.py" for presentation and "content|logic|models.py" broken out for business logic. You can see this pattern done in my as-yet-DNS-switched blog here: https://github.com/pydanny/daniel-blog-fasthtml
Of course, it's still early in the project; it's going to be interesting to see what patterns emerge over time. :-)
Second, getting everything from Flatpak would be a good idea if the software I need were available as certified Flatpaks. Downloading random Flatpaks is IMHO the same as downloading random executables.
Third, the AARCH64 version is not distributed anymore (this was the version I tried/used), AFAIK because the initial install script could not download the non-existent Firefox flatpak for AARCH64 (thanks, Mozilla).
In the end I still like the idea of Aeon and hope they change their position concerning firewalls. Points two and three are obviously not Aeon's to fix, so I hope we as a community (and Mozilla) get there in time.
I use a MicroOS + wayland + sway and friends setup, since I don’t like big DEs. I completely agree with you about needing a firewall, but it was an easy fix to continue taking advantage of all the good parts:
sudo transactional-update shell
zypper in firewalld
[setup as you like]
exit
sudo reboot
Now you have MicroOS or Aeon as you like it. It’s a discouraged practice, but if you stick to the default repos and well-used packages, you can definitely tweak the opinionated defaults without compromising the vision.
I understand what you're saying, and I understand the maintainer's POV. But, nothing prevents you from installing firewalld, right? It should just work.
I agree about the Flatpak issue; sadly, this will probably not change.
BUT! If you are a developer, you can run graphical applications from an openSUSE distrobox!
I am starting to use this on Tumbleweed, and there are even "exporters", so an app in the distrobox will be exported to your application menu on the metal!
Does that app have access to a full terminal? Like, if I install VS Code inside distrobox, will it have access to the system's terminal or the container's?
The firewall question is interesting. I guess I understand their perspective: if nothing is listening/running, then what's the point of the firewall? The system is immutable, so the security posture is a known quantity and cannot change at runtime.
You could argue that running an additional firewall service would actually be increasing the attack surface, in the sense that more code is worse than the absence of code.
Not sure I agree with their stance, but good on them for having the courage to revisit some of our default assumptions. Some decisions will work out, and others they'll have to fine-tune.
The base system does not need a Firewall, according to them, and they might be correct about that or not.
IMHO the point of having a firewall which simply denies all incoming connections is that once a user starts installing a few programs, sooner or later some of them might open ports, even without malicious intent.
If they want to provide an easy to use and secure system, IMHO there should be a firewall and each port has to be opened explicitly.
In the end, this really comes down to opinion and there is no objectively true answer, so I'd rather use Fedora Atomic if I need immutability.
I can see where the no-firewall argument is coming from, and on my own Linux laptop I definitely try to keep the number of listening ports down as much as possible, but it is tricky and requires a lot of vigilance, as applications you wouldn't expect to will sometimes start services. Things like Spotify and Steam can open ports.
So having a firewall running can provide a bit of extra protection in case you don't always check to see what ports you have open/listening.
What's the benefit to a traditional consumer application of opening ports these days, besides maybe local network data exchange (which, I assume, is what Steam does, since I know it will sync game updates between machines on the same subnet)? I would hazard that the global number of laptops and desktops with public IPv4 addresses in 2024 is probably in the triple digits, given that basically every provider I'm aware of hands off with a "Wi-Fi modem" that converts whatever weird delivery medium (fiber, DSL, cable, etc.), gets its own maybe-public IP, and does NAT for clients. Hell, I don't even have my own IPv4 address, since Starlink does CGNAT.
Linux desktop user here, and you ask a very good question which frustrates me to no end.
You are 100% correct: everything which is executed with your login user's rights can happily upload everything from your home directory to some random server, or exfiltrate information in other ways.
The idea that Open Source solves the problem because the source code is available shows a blatant misunderstanding of the software supply chain, supply chain attacks and the economics of Open Source. (E.g. the code you see on GitLab or GitHub does not have to bear any resemblance to the code your binary was built from; even if that code was used, you don't know about compiler backdoors etc.) Further, all Open Source projects/distributions are understaffed and badly paid, unless we are talking about server Linux, which is another story.
Why does Linux not keep up? IMHO, several reasons:
- There is no money in Linux desktops (seeing how macOS/Windows try to upsell users with every second click should give you an idea of how bad the situation with Linux market share is)
- People with the security and programming skills are quite senior and in high demand at companies (or running OpenBSD :-P)
- The Linux community is one of the most toxic/opinionated communities in IT, so every Open Source initiative will have to fight an uphill battle for years (look at Flatpak/Snap and all the discussions/misunderstandings and crazy ideas people have)
- For servers we have SELinux and AppArmor, which could in theory also work for Linux desktops, but even for a security-conscious Linux user like myself it is too cumbersome/complicated to set up and maintain, outside of enabling it for browsers and other highly exposed programs
- Whenever I submit a patch/PR for an Open Source project which affects me on Debian, the maintainers literally beg me to take care of maintenance, or at least of the Debian packages. I cannot even understand how delusional people are who want every piece of software on earth as a native package for their variant-of-Linux of the week. We are far beyond the breaking point, and the only people not seeing this are not contributing to Open Source but complaining very loudly and visibly in online forums. Flatpak/Snap are the only solutions we have if you want up-to-date software built by the original authors. (Remember the time a Debian maintainer broke SSH keys? Yeah, I do.) Especially security-relevant software must be built by the authors, not by some random maintainers who might or might not have a clue what they are doing.
There might still be a bright future for Linux desktop security; IMHO it depends on immutable systems/Flatpak/Snap. Given the current adoption rate and BS discussions, I expect that around 2040 we will be there, with a broken implementation not ready for the next generation's exploits. :-P
I might be able to add several points to this discussion:
- For me, caffeine has a really bad impact on my sleep quality, and it is a cumulative effect: after being abstinent for 1-2 weeks, I can drink 2 cups of coffee in the morning and still sleep well. After 1-3 days of 2 daily cups of coffee, my sleep quality goes down
- Most research about the safety of coffee is done with young, healthy adults w/o any medical preconditions. Which translates to: everything found in the research may be objectively true, yet still not apply to you, your health and your life circumstances
- Finally, I observe people around me who can tolerate higher amounts of caffeine w/o any side effects. To the best of my knowledge, there are very individual tolerance levels for caffeine and also very individual metabolization speeds
tldr:
- caffeine is known to increase anxiety and heart rate and to disturb sleep
- YMMV, and it might even vary depending on your age, phase of life and other activities
- if you notice that you feel stressed or have problems sleeping or winding down, avoid caffeine/alcohol/etc. for a few weeks and observe whether you feel better
Sadly I totally agree: Open Source is the playground of people who can afford it.
I benefited a lot from Open Source in my career and life, so I am very thankful to all contributors (and try to give back in money/time when I can afford one or the other).
What really annoys me is that my government does not mandate that software built with tax money must be Open Source.
That would go a long way to fund Open Source and improve the quality.
(I wasn't trying to make a point.)
As far as I know, that's what the initiatives like PMPC¹ are for.
I think in Switzerland a law recently passed that seems to go in that direction² (Open Source by default, but with some leniency, as far as I can interpret the text).
According to this³ OSOR report, something similar happened in Italy in 2019.
So, I think we're slowly going in that direction in Europe.