This. The fact LLMs can also amplify existing closed-set research means even smaller shops can now search through a flood of documents to find smoking guns or critical evidence, much faster.
I’ve been saying it since the mid-10s, but it’s worth repeating: data isn’t gold, it’s more like oxygen in a sealed room, in that the higher the concentration, the more likely it is to poison the inhabitants or explode at an errant spark (a lawsuit).
Collect only what’s needed to perform the function, and store it only as long as compliance requires. Anything else is going to spook counsel.
Chillax, Palantir: your pro-surveillance throwaway incidentally makes large data-harvesting companies like yours an even bigger target.
Limiting data retention doesn't mean hiding bad things, it means limiting exposure in general. The more of a thing - anything - that you have, the bigger a target you are to bad actors. By extension, companies holding vast sums of data beyond what's needed to process a given transaction or remain compliant with the law place themselves at risk of being targeted and of having that data used as leverage against them.
You don't limit data to hide bad shit you're doing, you limit it to avoid others using it to do bad shit against you or your customers. If someone or something is engaged in bad shit, there will always be evidence somewhere regardless of data retention policies.
Probably nothing, he's just not naive. You would have to have the intelligence of a small child to legitimately believe that authorities are only ever acting in benevolence, never with ulterior motives, and that they can never make mistakes. It's a matter of risk analysis here; we want to minimize the risk of shit going wrong.
Not an MBA but have dealt with licensing throughout my career, so I’ll try taking a whack at it:
Under the prior Enterprise Agreement structure, Microsoft would basically sell licenses to channel partners at decreasing costs based on KPIs like volume. This works for physical goods, where bigger volume commitments earn bigger discounts, but it leaves a lot of money on the table for software vendors while making it difficult for smaller channel partners to compete with established players (who in turn can bully software makers into more lucrative terms).
So Microsoft - or the author, rather - moved to the Software Assurance model: everyone fits into the same tiers depending on size, and everyone gets the same margins. This changes the incentives to reward bundling, multi-year deals, and broader software portfolios instead of just straight volume. Putting everyone on equal footing for comp also incentivizes services - MSPs, consulting, architecture, etc. - which then feeds back into the original incentives of growing multi-year deals and broader portfolio adoption, hence the “Perpetual Motion Machine” comment attributed to Ballmer.
Except Microsoft now feels they’re such a dominant player in the market that they can handle billing outright, relegating partners solely to advisors and consultants in an era where Microsoft sells the very services partners used to make bank on. This cuts out the middleman (channel partners), but it also exposed Microsoft to a litany of government regulation, because the SA model concentrates pricing in Microsoft’s hands and thus gives them outsized power and influence in the market.
That’s my understanding as an outsider though; I fully admit I am likely wrong on some points that OP might be able to clarify or correct.
> Except Microsoft now feels they’re such a dominant player in the market that they can handle billing outright, relegating partners solely to advisors and consultants in an era where Microsoft sells the very services partners used to make bank on. This cuts out the middleman (channel partners), but it also exposed Microsoft to a litany of government regulation, because the SA model concentrates pricing in Microsoft’s hands and thus gives them outsized power and influence in the market.
This seems incompatible with the description in the article:
>> The model split the EA channel into three tiers covering 75,000+ addressable accounts and an $11.5B opportunity envelope: 1,150 Microsoft-led global strategic accounts at a 4% ESA fee, 14,000 channel-assisted corporate accounts at 9%, and 60,000 channel-led medium enterprise accounts at 15%. Microsoft billed the customer directly across all three tiers. What changed was who led the sale, what role the partner played, and how the partner got paid. The channel was converting from a margin model, where partners set end price through discounts, to an advisory fee model, where Microsoft set price and partners earned fees for services delivered. An ESA was required on every deal.
Emphasis mine in both cases.
That said, I can't really be sure what's going on, because the author hasn't bothered to explain any of it. There is clearly some set of material that he assumes I know, but he hasn't even stated what that is.
Same. The fact they're shoving AI into it and expanding it to providers who don't have privacy as a guiding principle is a key reason I'm sitting on a 14 Pro still, and why I'm exploring local alternatives with Home Assistant.
Besides, we just need to set verbal timers and control music. We don't need a full-blown verbal Oracle.
Home Assistant is indeed quite nice and relatively simple to set up with the Docker images provided by the team (minimal compose sketch below). Device setup on iOS was a little inconsistent, but it has been rock solid for over a year. Check out Homebridge as well; I run both.
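For reference, something like this minimal docker-compose.yml is about all it takes - a sketch rather than an exact file, and the ./config path and host networking are illustrative choices, though the image is the official one the team publishes:

    services:
      homeassistant:
        image: ghcr.io/home-assistant/home-assistant:stable
        container_name: homeassistant
        volumes:
          - ./config:/config                  # persists your HA configuration
          - /etc/localtime:/etc/localtime:ro  # keep container clock in sync with host
        network_mode: host                    # simplest route to device discovery (mDNS/SSDP)
        restart: unless-stopped

Then docker compose up -d and the onboarding wizard is waiting on port 8123.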
I ought to take a break from my Docker Compose work and get back to migrating off HomeKit and into Home Assistant. The Home Assistant Yellow has been a real champ thus far, and once it’s set up I can tie the Unfolded Circle 3 into it for better control.
What value do you get out of Home Assistant that you don't from HomeBridge? I use HomeBridge for a few devices: my Windmill AC, some Govee lights, and previously my Ikea smart lights (Tradfri, but now Dirigera supports HomeKit).
Not everything in life is a threat model, y’know; oftentimes it’s just personal preference.
I prefer to read reference material and do research instead of asking chatbots, for instance, because it helps the material stick better and enables me to make broader connections to disparate pieces of knowledge.
I also prefer technology to be narrow in scope and function, so I can spend more time enjoying life and less time troubleshooting why some needless complexity has failed again. This extends to voice assistants that consistently fumble on accents and grammar when asking for more complex queries, and often want to send data out of my LAN to some random server I have no control over just to process something that could be done on any of the myriad of GPUs and CPUs in my home instead.
Despite the EULAs, TOS, and Privacy Policies governing these interactions, I intrinsically don’t trust a relationship that requires revalidation of those policies every time an update is pushed, whose changes are never summarized, and which forces me into hostile relationships with the vendors. I also generally believe that as live services, there is no sufficient incentive for security or privacy but ample incentive for data mining and prolonged, frequent interactivity. Repeated incidents of supposedly “anonymous” and “private” conversations or data being inappropriately disclosed or compromised don’t lend any sense of security to said services, at least for me. Then you factor in the wider economic environment prioritizing immediate gains over sustainable business practices, plus my own preference for building and nurturing long-term infrastructure to solve my problems consistently, and it’s less a threat model and more an incompatibility between my personal needs and corporate goals.
What is your concern about prompts to go OpenAI? Apple has a contract with OpenAI that explicitly prevents them from logging, storing, training, or making any use of your prompts other than to satisfy the specific current request. Apple has some good lawyers and I’m sure that the teeth are prominent in that contract.
The person I was responding to had privacy concerns. The closest thing to a privacy concern about LLM usage on iOS is Apple Intelligence, which sends some prompts to OpenAI to fulfill them. Thank you for the information about Apple's privacy program.
I send hundreds of prompts to OpenAI's LLM daily. I do not have a concern about it.
Not to mention the fact that the default settings are to ask the user before sending anything to ChatGPT, and you can selectively disable just the ChatGPT integration while leaving Apple Intelligence enabled.
This genuinely wouldn't surprise me, and I need to go back to looking at balance sheets to see if I can suss out the validity of that narrative. As AI subsidization ends prematurely and costs skyrocket, we should expect to see those costs reflected in the operating statements of major customers.
Since I had Coinbase up for review already, I decided to peek there first for any sort of correlation. In 2023, their "Technology and Development" line item shows $1.32bn going out, and by 2025 it'd ballooned to $1.67bn. This is despite headcount actually contracting by almost a thousand people between those two statements, which would normally mean smaller technology spend since a lot of corporate software is seat-based nowadays. This suggests that yeah, AI spend actually is creating a heavier drag on the books, and it's being offset with layoffs since the "job replacement" narrative is strong. That said, I'd need to check dozens more balance sheets to draw any sort of industry-wide conclusion.
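Back-of-the-napkin on those two line items, using only the figures above (nothing else assumed):

    # Rough math on Coinbase's reported "Technology and Development" spend.
    spend_2023 = 1.32e9
    spend_2025 = 1.67e9
    delta = spend_2025 - spend_2023
    print(f"Jump: ${delta / 1e9:.2f}bn ({delta / spend_2023:.0%})")  # $0.35bn, ~27%
    # Headcount shrank by ~1,000 over the same period, so per-seat tech
    # spend grew even faster than the headline ~27% suggests.

A ~27% jump against a shrinking headcount is the opposite of what seat-based licensing alone would predict.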
And you'd have to factor in other infrastructure costs that have become more expensive too, such as hosting or hardware.
So unless you can isolate AI spending from the rest, that's not going to be convincing.
...hence why I qualified the statement like I did. I'm well aware one example from one company in a budgetary line item that's inclusive of labor and licensing and hardware and purchases and AI is not going to be remotely conclusive on its face.
Yet even taking all of that into account, a ~$350m jump between those statements must include some significant and growing amount of AI spend; everything else would've contracted (licensing, hardware), stagnated (cloud consumption), or been a singular event (CAPEX purchases) relative to the company's health and headcount.
There are multiple simultaneous narratives: the industry-wide one of slashing well-paid tech talent under the guise of AI productivity boosts, and what's actually going on at each company.
Cloudflare is an outlier because the company doesn't actually make money at present; their past three annual statements show net losses in the tens to hundreds of millions of dollars. Not hemorrhaging cash per se (their cash reserves alone could cover ~9 more years of losses), but still enough to warrant some cutbacks - and AI is the current scapegoat, so they blame it and throw folks out the door.
Coinbase's story is different: they're making good money, but their industry is inherently volatile. Recent volatility in the crypto markets related to...things...is dragging down long-term prospects for currencies, while ongoing trades are broadly just insiders doing insider things or exiting their positions for liquidity. Still, their share price is down 27% over 5 years and 18% YTD, so they need to pump it so the executives get paid; layoffs are consistently rewarded by shareholders, so they axe part of their workforce for the bump and point the finger at AI.
Never take what a company says at face value, and always check their balance sheets. What Cloudflare did sucks but could be warranted to some degree; what Coinbase did has no justification whatsoever beyond naked greed.
> Cloudflare is an outlier because the company doesn't actually make money at present; their past three annual statements show net losses in the tens to hundreds of millions of dollars.
Their free cash flow is high; they're choosing not to report a profit. I don't think it's useful or accurate to say they don't make money.
Don't get me wrong, they may be doing a layoff to boost margins or reach GAAP profitability, but the company's revenue exceeds its operating costs by quite a bit.
> First quarter revenue totaled $639.8 million, representing an increase of 34% year-over-year
So they're growing 34% annually.
> Free cash flow was $84.1 million, or 13% of revenue, compared to $52.9 million, or 11% of revenue, in the first quarter of 2025.
> Cash, cash equivalents, and available-for-sale securities were $4,163.9 million as of March 31, 2026.
...and they generated $84 million of free cash flow in one quarter; the cash flow has been consistently pretty good.
And they have $4b of cash or cash equivalents stockpiled. It seems pretty healthy to me.
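Quick sanity check on the quoted figures (naive annualization, so ballpark only):

    # Sanity check using the numbers quoted from the earnings release above.
    q1_revenue = 639.8e6
    q1_fcf = 84.1e6
    cash = 4163.9e6
    print(f"FCF margin: {q1_fcf / q1_revenue:.1%}")                # 13.1%, matches the stated 13%
    print(f"Annualized FCF: ${q1_fcf * 4 / 1e6:.0f}M")             # ~$336M/yr if Q1 repeats
    print(f"Cash vs. annualized FCF: {cash / (q1_fcf * 4):.1f}x")  # ~12 years' worth in the bank
    # Positive, growing free cash flow plus a ~$4.2B cash pile is why
    # "doesn't make money" here is a GAAP net income story, not a cash story.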
It's quite filthy, but it benefits them all to lay off lots of people and reset the wage rate in the market... I'm sure we'll see a wave of re-hiring when this stuff starts to blow over, but much of it will initially be at a much lower wage rate.
I'm going to start calling these "Canary" moments.
Assuming we take everything at face value for these sorts of cuts, it creates the following scenario:
A company finds itself with surplus labor capacity due to the efficiencies in AI while also posting substantial profit or revenue growth. It could downsize the workforce to capitalize on short-term efficiencies and increase margins, though this comes at the cost of long-term reputational harm (given the posted profits and health) and of burning out the staff who must do the same work - or increasingly, more - with less headcount, leading to attrition once the market shifts back in workers' favor.
Alternatively, it could put that surplus labor toward a period of moonshot R&D or toward paying down technical and process debt while it has the capacity and the profit to fund it. That hurts the short-term share price relative to competitors who are slashing jobs, but it improves the company's capabilities in the marketplace over the long run, potentially through mastery of these AI tools or the creation of new product lines.
The fact that so many orgs opt for immediate greed over long-term growth is really its own canary: leadership and governance have both failed the marshmallow test.
"A company finds itself with surplus labor capacity due to the efficiencies in AI"
That is one possible interpretation, though I don't think it's supported by any facts.
A competing explanation: companies are spending a ton of money on AI in search of efficiency, and then laying people off in order to offset these investments. That's certainly what's been happening at Microsoft, Oracle, Meta, etc.
You can't really compare them to Microsoft, Oracle, or Meta. Those companies aren't cutting costs because AI replaced their own employees. They're pouring money into AI infrastructure and models because they want to sell that capacity to others.
Their thinking is more: instead of funding another internal product team, they can redirect that payroll spend into more AI compute and models they hope to monetize.
I don't believe CloudFlare is doing that, though they might; they could need to spend on edge AI compute and whatnot, and building out that infra isn't free, so they might need to find places the cash will come from.
AI is a fraction of the cost of an employee though, right? I have a $1,000/mo AI budget, which is a fraction of my salary, and most people don't hit their limits.
Sounds like your company is burning $1,000 a month for something people are barely using. At some point those costs become unbearable and they either admit that absurd AI budget was a mistake, or they admit no mistake and fire people. I know which they'll choose.
Curious to know why they're not hitting their limits.
In the organization where I work, things are crazy at the moment; we're drinking tokens as if we were in a hot desert, and $1k is barely enough for a week for some people.
On heavy coding days it can go up, but most people aren't coding all day, and research and docs tend to be pretty gentle on the tokens IME. The only time I hit my limit was coding for 80 hours in a war room.
(Also, I mostly use Cursor, which is more efficient with its implicit use of light models and an indexed workspace.)
Yeah, I wrote this before I dove into their balance sheets for another comment. Cloudflare’s cuts are more defensible than most, but the timing and explanation are shady given that they’ve had the same problems for years.
Turns out running a profitable business is really hard when all you've known was ZIRP.
Honestly, I think the business lessons from big tech over the last 20 years are hogwash, mostly because abusing their monopolies let them subsidize failing BUs indefinitely.
37signals has a better approach to starting software companies, and many of their peers/near peers indicate that it's a better way to sustain lifestyle companies too.
Doesn't turn you into a billionaire though; maybe that's a plus.
> A company finds itself with surplus labor capacity due to the efficiencies in AI
It's likely more:
A company finds itself with surplus labor capacity due to over-hiring during Covid, cutting down on risky ventures, protecting margins, and narrowing scope.
But I think there's also:
A company wants to see if AI is making them more efficient, so it cuts people as if it were, and sees what happens.
I'm also not sure about the short-term stock price; after many recent mass layoffs, the stock moved down. CloudFlare stock is tanking in after-market trading, for example.
If the market had been saturated then there wouldn't have been any (hypothetical) revenue growth which is what the comment above was arguing.
Personally I don't think there was any revenue growth to begin with. They are spending a lot on AI and haven't seen any ROI, but for whatever reason they prefer to fire people and keep investing in AI.
That shouldn't apply to tech, where there's generally more market to capture and competitors looking to offer a better product and take your market share.
Excess labor would only translate to increased revenue and new products if these companies had a product vision to begin with. But they don't, so people get sacked.
You're on the right path, but I think you're off by a bit. Every company has more work it wants to do than budget allows; however, some of those things won't pay off fast enough. That is, they have product vision, but they're smart enough to realize that the extra things they can't get to aren't things customers are willing to pay extra for today.
Companies have finite budget to pursue these ideas, and never enough to fully pursue all of them simultaneously.
It's management's job to prioritize the order in which they're pursued, subject to available budget.
In the last 5 years, leadership at the Mag 7 has been bad at this core responsibility.
* Alphabet: failed to productize its AI research
* Amazon: completely ignoring the erosion of customer trust in its core logistics business driver (warehouse retail)
* Apple: Vision Pro and lack of product vision (outside of their microprocessor group)
* Meta: VR. Enough said
* Microsoft: Windows. Enough said
* Nvidia: Granted, probably the one standout, but they did get the golden ticket to own a shovel factory during a gold rush
* Tesla: Everything
Objective check: Mag 7 ex Nvidia only outperformed the S&P 500 by +17% over the last 5 years, in contrast to prior periods (and much of that thanks to Alphabet boosting the average)
> The fact that so many orgs opt for immediate greed over long-term growth is really its own canary: leadership and governance have both failed the marshmallow test.
Why do you think it's greed? The company's stock is down and they just missed expectations on their last earnings report (unheard of in big tech in the last 2 years).
This was kind of my read as well. We are increasing our AI usage but not in a way that meaningfully affects our ability to deliver on our product roadmap, so the solution is to cut opex on people so we can devote more to compute. The last bit is obviously speculation but it doesn’t feel like a far leap.
My charitable company strategy take: this is companies skating to where they think the puck will be
Given the rapid progress in LLM capability in recent history, it's reasonable to expect that continues... at least to some degree.
Consequently, companies are going to need to continue to cut, and delaying those cuts will only leave them in a worse position.
Devil's advocate counterpoint: it's currently unclear where AI does and doesn't provide efficiency gains in a business, so some companies are making headcount reductions without knowing where they should target them
This is simply a symptom that the company doesn't have good Quality Control processes in place.
AI-produced code is good but it's not so good that it can replace hand-crafted (or heavily supervised) code written by the type of engineer who works at Cloudflare.
What's really happening is that a few employees realized they can game the system by turning on a firehose of AI slop and pushing 10x the LOC of any other engineer (with or without AI), because there's no one to tell them to stop, and in fact management actively encourages this.
> What's really happening is that a few employees realized they can game the system by turning on a firehose of AI slop and pushing 10x the LOC of any other engineer (with or without AI)
Did they figure out how to game the system? Or was the system set up with incentives to produce exactly this outcome?
They figured out how. Mind you, the system was set up with incentives to produce this outcome, but before AI it wasn't realistic to produce that many lines of code even though you could, so nobody was gaming it badly enough to break it. (It was always broken, but the breakage was acceptable before.)
Damn. Phenomenal read. Just a really excellent piece of prose in its own right, topic be damned.
Yet the topic is also what makes it so good. It's written by someone who has seen firsthand the vast impact technology has had, someone with a firm grasp of the difference between technology and industry. Someone who knows it wasn't technology that got people addicted to social media and short-form videos and click-bait headlines and microtransactions; it was the industry that consciously chose greed and harm.
I love technology, and I'll keep wielding and mastering it until I'm dead in the ground. It's the industry aspect that I'm increasingly dissatisfied and disillusioned with.
Oh my god I finally get a very specific Harvey Birdman joke as a result of this factoid. Fuck me, Phil Ken Sebben as a parody of Ted Turner kinda works.
That I was aware of. I'm more familiar with his media and wildlife conservation efforts than his business acumen or sports achievements. Captain Planet, Turner Classic Movies, Hanna-Barbera, Cartoon Network, etc.
The bottleneck has always been the human element. I too used to be one of those up-my-own-ass engineers who thought the most important part of my work was the machine, and it wasn’t until I began actually listening to others and their problems that I realized my function was far more than mere technology scaffolding.
That said, I’m also increasingly aware that puts me in a minority group. I got to see this first hand in a recent org where their codebase and product design hadn’t meaningfully evolved in nearly thirty years. NAT was a “game changer” to them - and one they refused to implement without tons of extraneous testing they would deliberately undermine, stall, and sabotage so they didn’t have to modernize their code accordingly. It was easier for the developers and stakeholders to preserve their own status quo rather than entertain alternatives, to the point of open hostility (name calling, insults, screaming, and a few threats) to anyone suggesting otherwise.
The human element has always been, and always will be, the bottleneck. Stakeholders who don't contribute updated or accurate datasets to automation systems, who hold back development to preserve personal status and power, or who otherwise gum up the works on purpose to game their own careers.
That’s not to make the argument of “replace all humans with machines”, mind you. Just stating that an organization that incentivizes bad behavior will be slowed down versus ones that incentivize collaborative outcomes, and AI is just going to turbocharge that by removing the friction associated with code creation and shifting that elsewhere.
Never experienced this at a job in 30+ years, and that includes my first jobs in fast food. If you experience this at work, find another job. This isn't normal. It's extremely dysfunctional in fact.
I was already looking, but they ultimately made the decision for me in January with a RIF.
Thing is, this job market is hell. There are folks who have to choose between the abuse or making rent, which is why we need stronger incentives for organizations to discipline said abuse rather than let it permeate because existing penalties lack teeth.