Paleontology has benefited enormously from the ease of sequencing, to the point where many evolutionary arguments are now settled. Humans are apes, birds are dinosaurs. Some people still dispute it, but not with evidence on their side.
Phylogenetics is amazing: given surviving members of a clade, we can reconstruct their ancestors. Phylogenetic techniques can also use additional information, e.g. the paleontological record.
That just shows they diverged from crocodilians after that clade diverged from turtles. It doesn't show that birds and dinosaurs are more closely related to each other than birds are to crocodilians (but the fossil evidence shows that).
That applies to pretty much any reasonably complex idea. A new system requires effort to understand it. When you've expended that effort, it's not complicated anymore.
I don't understand this sentiment—as if learning IPv4 was enough work on your part, and now you're entitled to networking protocols never changing anymore.
Just as people are not entitled to a lack of change, they are not obligated to enjoy, welcome, or facilitate change.
What I learned about IPv4 at the turn of the century allows me to comfortably plan and manage networks up to a few thousand nodes, maybe a few tens of thousands.
I don't work in networking anymore, and I really don't care what those who are still in that business need. What it takes to manage contemporary billion-node networks and the interchange between them is not my problem. You try to make it my problem, but I don't care.
I'll continue organizing the very few and very small networks that are still my responsibility using pre-CIDR ideas.
Maybe it becomes impossible some day. I'll deal with it then.
Not much more complicated than IPv4. There are more bits. The addresses are longer. It's not hard to grasp if you understand the prerequisites to understanding networking in general.
The idea that it’s just “more bits” is wrong, so I’m not sure your assessment is valid. Maybe at the packet level it’s just “more bits”, but at the network level a lot of processes changed: address assignment, router discovery, etc. are all different.
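To make the “not just more bits” point concrete: IPv6 hosts can assign their own addresses via SLAAC instead of DHCP, e.g. by deriving a 64-bit interface ID from the MAC address (the modified EUI-64 scheme of RFC 4291). A small sketch using Python's stdlib `ipaddress` module; the helper names here are my own, and note that modern OSes mostly use privacy/stable-opaque interface IDs (RFC 7217) rather than EUI-64:

```python
import ipaddress

def eui64_interface_id(mac: str) -> bytes:
    """Modified EUI-64: insert ff:fe in the middle, flip the U/L bit."""
    octets = bytes.fromhex(mac.replace(":", ""))
    eui = octets[:3] + b"\xff\xfe" + octets[3:]
    return bytes([eui[0] ^ 0x02]) + eui[1:]

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Combine an advertised /64 prefix with the derived interface ID."""
    net = ipaddress.IPv6Network(prefix)
    iid = int.from_bytes(eui64_interface_id(mac), "big")
    return net[iid]  # indexing a network yields prefix + host part

print(slaac_address("2001:db8::/64", "00:11:22:33:44:55"))
# 2001:db8::211:22ff:fe33:4455
```

Nothing like this exists in IPv4: there the host either asks a DHCP server or gets configured by hand, which is exactly the kind of process-level change that makes IPv6 more than longer addresses.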
We need models that are smarter than humans. So far, the cost of an AI query + training dwarfs the effort it would take to teach an intelligent human to do the task. We are dumping an incredible amount of money and effort into making AI do stuff when it's still not competitive with humans, because dumbass people are controlling investment. The stock market is not a replacement for competent investment. The fact that people buy meme coins shows how fucked we are.
Deceiving people is not a sustainable business model, but it is the most prominent one in the US right now. Lie to the public, sell them stuff that's bad for them at too high of a price, get rich quick, then act confused when your economy collapses because the victims of your grift can't spend anymore.
> There is absolutely no reason why software today has to be written like software of yesterday.
I get what you're saying, but the irony is that AI tools have sort of frozen the state of the art of software development in time. There is now less incentive to innovate on language design, code style, patterns, etc., when it goes outside the range of what an LLM has been trained on and will produce.
Personally I am experimenting with a lot more data-driven, declarative, correct-by-construction work by default now.
AI handles the polyglot grunt work, which frees you to experiment above the language layer.
I have a dimensional analysis typing metacompiler that enforces physical unit coherence (length + time = compile error) across 25 languages. 23,000 lines of declarative test specs compile down to language-specific validation suites. The LLM shits out templates; it never touches the architecture.
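To illustrate what unit coherence means here: the commenter's metacompiler rejects `length + time` at compile time; the sketch below shows the same invariant enforced at runtime in plain Python. Every name in it (`Quantity`, `metre`, `second`) is my own illustration, not the commenter's actual system:

```python
from dataclasses import dataclass

def _combine(a: dict, b: dict, sign: int) -> dict:
    """Merge unit exponents: sign=+1 for multiply, -1 for divide."""
    out = dict(a)
    for unit, exp in b.items():
        out[unit] = out.get(unit, 0) + sign * exp
        if out[unit] == 0:
            del out[unit]
    return out

@dataclass(frozen=True)
class Quantity:
    value: float
    dims: tuple  # sorted (unit, exponent) pairs, e.g. (("m", 1),)

    def __add__(self, other: "Quantity") -> "Quantity":
        if self.dims != other.dims:  # length + time lands here
            raise TypeError(f"dimension mismatch: {self.dims} + {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other: "Quantity") -> "Quantity":
        merged = _combine(dict(self.dims), dict(other.dims), +1)
        return Quantity(self.value * other.value, tuple(sorted(merged.items())))

    def __truediv__(self, other: "Quantity") -> "Quantity":
        merged = _combine(dict(self.dims), dict(other.dims), -1)
        return Quantity(self.value / other.value, tuple(sorted(merged.items())))

metre = Quantity(1.0, (("m", 1),))
second = Quantity(1.0, (("s", 1),))
```

With this, `metre / second` yields a quantity with dims `m·s⁻¹`, while `metre + second` raises. The metacompiler's trick is doing this check in each target language's type system, so the error surfaces before the program ever runs.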
We are still in the very, very early days.
Specs for my hobby physical types metacompiler tests:
LLMs aren't like a screwdriver at all; the analogy doesn't work. I think I was clear: LLMs aren't useful outside the domain they were trained on. They are copycats. Really innovating on software design means going outside what has been done before, which an LLM won't help you do.
No, you weren't clear, nor are you correct: you shared FUD about something it seems you have not tried, because testing your claims with a recent agentic system would dispel them.
I've had great success teaching Claude Code to use DSLs I've created in my research. Trivially, it has never seen exactly these DSLs before -- yet it has correctly created complex programs using those DSLs, and indeed -- they work!
Have you had frontier agents work on programs in "esoteric" (unpopular) languages (pick: Zig, Haskell, Lisp, Elixir, etc)?
I don't see clarity, and I'm not sure if you've tried any of your claims for real.