> If you can't make that decision, are you really the EM?
You'd be served well as an EM by this part of the Serenity Prayer:
"God, grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference."
Depending on your organization, odds are high that AI use is one of the things you cannot change. Perhaps not even something you ought to change. If your team is delivering x% more, "it makes my job x% more difficult, so don't do that" won't fly either upwards or downwards.
> If AI produces code that no one knows and is hard to maintain
I think you're making an assumption here that the main problem with AI use is necessarily quality.
OP wasn't even talking about AI producing bad code, just about it creating more code and enabling more things to happen. More things going on at the same time means more friction points and more things that can go wrong. Whenever those happen, the EM is pulled in.
People really look through rose-colored glasses when they talk about the late 90s, the early 2000s, or whenever their "back then" happens to be, and how everything was simpler.
Everything was certainly simpler, but requirements and expectations were also much, much lower. Tech and complexity moved forward, with the goalposts moving forward too.
Just one example on reliability: I remember popular websites with many thousands, if not millions, of users would put up an "under maintenance" page whenever a major upgrade came through, sometimes closing shop for hours. If said maintenance went bad, come back tomorrow, because they weren't coming up.
Proper HA, backups, and monitoring were luxuries for many, and the kind of self-healing, dynamically autoscaled, "cattle not pets" infrastructure that Kubernetes now trivializes was sci-fi for most. Today people consider all of this, and a lot more, table stakes.
It's easy to shit on cloud and Kubernetes and yearn for the simpler Linux-on-a-box days, yet unless expectations somehow revert 20-30 years, those days aren't coming back.
> Everything was for sure simpler, but also the requirements and expectations were much, much lower.
This. In the early 2000s, almost every day after school (3 PM ET), Facebook.com was basically unusable. Requests would either hang for minutes before responding at a tenth of the broadband speed of the time, or just time out. And that was completely normal. Also...
- MySpace literally let you inject HTML, CSS, and (unofficially) JavaScript into your profile's freeform text fields
- Between 8 and 11 PM ("prime time" TV) you could pretty much expect to get randomly disconnected from dial-up Internet. Then you'd have to repeat the arduous sign-in dance, waiting for that signature screech telling you you're connected.
- Every day after school the Internet was basically unusable from any school computer. I remember just trying to hit Google from a library computer turning into a 2-5 minute ordeal.
But also, and perhaps most importantly, let's not forget: MySpace had personality. Was it tacky? Yes. Was it safe? Well, I don't think a modern web browser would even attempt to render it. But you can't replace the anticipation of clicking on someone's profile, not knowing whether you'll be immediately deafened by blaring background music with no visible way to stop it.
I worked at an ISP in 1999 and between 8-11 PM we would simply disconnect the longest connected user once the phone banks were full. Obviously we oversubscribed.
> I’m hoping 2026 will be the year we stop caring about what people believe AI might do, and instead start reacting to its real, present capabilities.
So well put.
LLMs are useful for a great many things. It's just that being the best new product of recent years, maybe even a decade-defining one, doesn't cut it. It has to be the century-defining, world-ending, FOMO-inducing colossus that puts Skynet to shame and justifies trillion-dollar investments. Either AI joins the workforce soon, or Nvidia and OpenAI aren't worth that much.
I guess this framing manages to maximize shareholder value and make AI feel like a disappointment at the same time.
> Do you account for frequency and variety of wakeups here?
Yes. In my career I've dealt with far more failures from unnecessary distributed systems (that could have been one big bare-metal box) than from hardware failures.
You can never eliminate wake-ups, but bare-metal systems have far fewer moving parts, which eliminates a whole class of failure scenarios; you're left mostly with actual hardware failure (and hardware is pretty reliable nowadays).
If this isn't the truth. I just spent several weeks, on and off, debugging a remote hosted build-system tool thingy because it was in turn made of at least 50 different microservice-type systems, and it was breaking somewhere between two of them.
There was, I have to admit, a log message that explained the problem... once I could find that specific log message and understand the 45 steps in the chain that led to that spot.
> I'd say also that you should never purchase Apple gift cards from anyone except Apple directly
This would be a good measure assuming we've fully discovered all the reasons Apple might ban you for, and the only one happens to be gift cards.
Since we don’t know what other seemingly trivial actions may provoke Apple to wipe an account, I think starting a developer conference is the only way to be safe.
Why not just ban the user from using gift cards, then, instead of banning their entire account across 30 different products under the same company umbrella?
They don't need to fix the insecurity of gift cards; they just need better access controls. Yet they have no incentive to tackle that right now.
Reading 4-day-week futurism while working 5 days as you always have, hoping it doesn't become 6.
This one and UBI are the two classics of 2000s optimism and naivety.