Hacker News | galbar's comments

The activity that needed and still needs to be protected is problem solving:

- Understanding the problem at hand

- Putting all the pieces together so that they solve the right problem the right way

- Making sure that the solution facilitates future extension and doesn't lead to a ball of mud two months from now... Unless stakeholders want it to be quick and dirty, in which case making sure they understand the costs/risks

- Planning execution in a way that is incremental and testable, so that we can build confidence that the system is doing what we expect of it

- If you are in a team, figuring out common dependencies so that those can be done first and unblock parallelism in execution.

Once all that was done and documented, writing the code was easy and fast.

What would sometimes happen is that some unexpected detail or dependency would be discovered as part of the writing of the code and then you are back at the beginning, figuring out how to make everything fit together.

I find that the main confusion comes from people not realizing that those are two different activities and instead calling it all "writing code".


> Understanding the problem at hand

I think one of the things that AI is uncovering is how bad many programmers were/are at this. Sure, they may understand by ref vs. by val, but they can't or won't take the time to really understand what needs to be built.

I've said for a long time that coding is the easy part; it's understanding what needs to be built that's hard. AI has now come along and borne that out.


I was just thinking about forge federation this morning. It'd be nice to base the federation on email, which has been working fine for decades (boring tech and all that), and build UIs on top of it to facilitate collaboration.


These projects quickly reach a point where evolving them further is too costly and risky, to the point that the org owning them will choose to stop development and do a re-implementation which, despite being a very costly and risky endeavor, ends up being the better choice.

This is a very costly way of developing software.


It's easy to say that organizations should do it right the first time, in terms of applying proper engineering practices. But they often don't have the time, capital, or skillset to do that. Not ideal, but that's often how things work in the real world, and it will never change.


Organizations should at least avoid doing it catastrophically wrong, especially once a core design/concept has mostly solidified. Putting a little time into reliability and guardrails prevents a huge amount of downside.

I've been at organizations that don't think engineers should write tests because it takes too much time and slows them down...


It is my understanding that a lot of EU governments are setting up their own matrix servers.


This is true. We just published a map of it: https://element.io/en/matrix-in-europe


Clicking through and stumbling upon Croatia, which specifies only "Classified deployment", has left me absolutely cackling. Seems hilarious that they're willing to say that they use it, but unwilling to state if it's for early testing, civilian-level bureaucracy, or Croatia's equivalent of specialized armed forces.

That they publicly use it at all is great though, as it likely helps shift the Overton window of what's normal, and what fits standard usage of Matrix-Synapse.


I hope they don't, considering Matrix's handling of security is on the level of a bumbling toddler.


If you're talking about https://matrix.org/blog/2026/02/analysis-of-reported-issues-..., I'm not entirely sure that characterisation is accurate :)


It's more that they haven't gone public with it yet, and it's not for us to out them :)


Question, with so many major orgs using it, are there no plans for manual status? The one thing I miss vis-a-vis teams is the ability to manually set myself away, appear offline, busy etc.

Matrix shows me as active (green dot) when I have the client open but there's no way to override that. At least none that I found. I'm a bit surprised all these big governmental clients didn't ask for such a feature :)


There's a big gap between lots of orgs using it, and lots of orgs paying for development of it. That said, BWI in Germany is currently funding custom status so it should be coming soon :)


Ohh nice to hear it's coming.

But sorry that they are not contributing. That's pretty bad tbh.


FWIW, I have managed 10 simultaneous live transcoded streams on an Arc B580, and it could have handled a few more. With a couple of them you could be fine.

The other aspect is that you could share the media storage over NFS and have multiple Jellyfin instances running for different household groups.

With 2 or 3 nodes like that I think you could make it work.
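A minimal sketch of that layout, assuming Docker and made-up hostnames/paths (storage-node, /srv/media, container names), with one Jellyfin container per household group sharing a read-only NFS media library:

```shell
# On the storage node, export the media directory (add to /etc/exports):
#   /srv/media  192.168.1.0/24(ro,no_subtree_check)

# On each Jellyfin node, mount the shared library read-only:
sudo mount -t nfs -o ro storage-node:/srv/media /mnt/media

# Run one instance per household group; each gets its own config volume,
# its own host port, and shared access to the GPU for transcoding:
docker run -d --name jellyfin-family \
  -v /opt/jellyfin-family/config:/config \
  -v /mnt/media:/media:ro \
  -p 8096:8096 \
  --device /dev/dri:/dev/dri \
  jellyfin/jellyfin

docker run -d --name jellyfin-friends \
  -v /opt/jellyfin-friends/config:/config \
  -v /mnt/media:/media:ro \
  -p 8097:8096 \
  --device /dev/dri:/dev/dri \
  jellyfin/jellyfin
```

Keeping the media mount read-only means only one place (the storage node) ever writes to the library, which avoids conflicts between instances.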


It's not a good look to break userspace applications without a deprecation window in which both old and new solutions coexist, allowing users to transition.


Software is applied mathematics, though


And still not applied physics


But not across the Pyrenees :_)


>The fix isn't "apply traditional methods"

I would argue they are. Those traditional methods aim at keeping complexity low so that reading code is easier and requires less effort, which accelerates code review.


This article describes the body of knowledge I was taught when I joined the industry, and it parallels my experience with and thoughts about AI.

I have come to the realization that most people in the industry don't know this body of knowledge, or even that it exists.

I'm now seeing the same people trying to solve their ineffectiveness with AI.

I don't know what to think about this situation. My intuition hints at it not being good.

