Brass tacks: if an institution has an overwhelming political leaning toward faction X and works to undermine faction Y, is it really surprising that when Y gets into power it attempts to damage the institution? This is precisely why publicly funded institutions should maintain an agnostic political posture.
What fantasy world do you live in? I want to be there. In the world I'm in, everything granted to the public is under constant attack and threatened with destruction, its proponents attacked and its benefactors humiliated.
I had the opposite experience. Opus 4.6 extended feels like the first genuinely intelligent model to converse with, Opus 4.7 adaptive feels like slightly smarter LinkedIn slop.
In a city, not only does it do random things; when it does work, it's calibrated so poorly that people behind me signal at me all the time because it's too slow.
On a freeway it’s only kind of usable. It switches lanes far too aggressively and for no reason, to the point that it makes the ride uncomfortable.
What I really want is autosteer with lane switching when I signal, which for some reason I could never get working in any mode. It either doesn't change lanes at all, or changes them arbitrarily of its own volition. And if I change lanes manually it turns off autosteer, which makes it too irritating to use in practice.
Tesla self driving, in any mode, is a bad product. And I say this as a Tesla fan.
A view from my small corner on the inside: taste isn't merely not incentivized, it's actively disincentivized. It's not selected for during the interview process, if you demonstrate a little of it nobody cares, if you demonstrate too much of it you clash with everyone else's priorities which quickly becomes career limiting. So people willing to fight for taste never advance.
This isn't some nefarious plot to screw over users. Taste is not prioritized because nobody has it and thus can't recognize it. Can't value something you don't even recognize. This is orthogonal to talent btw. Lots of people there who are insanely good at what they do, who produce the most hideous API specs you've ever seen, as one example.
A much more mundane (and almost certainly true) explanation is that people who put all that crap in legitimately thought it's a good idea. Taste is its own thing and it's just not in Microsoft's DNA.
A model that gets good at computer use can be plugged in anywhere you have a human. A model that gets good at API use cannot. From the standpoint of diffusion into the economy/labor market, computer use is much higher value.
I can honestly understand both positions. The U.S. military must be able to use technology as it sees fit; it cannot allow private companies to control the use of military equipment. Anthropic must prevent a future where AIs make autonomous life and death decisions without humans in the loop. Living in that future is completely untenable.
What I don’t understand is why the two parties couldn’t reach agreement. Surely autonomous murderous robots are something the U.S. government has an interest in preventing.
> it cannot allow private companies to control the use of military equipment.
The big difference here is that Claude is not military equipment. It's a public, general purpose model. The terms of use/service were part of the contract with the DoD. The DoD is trying to forcibly alter the deal, and Anthropic is 100% in the clear to say "no, a contract is a contract, suck it up buttercup."
We aren't talking about Lockheed here making an F-35 and then telling the DoD "oh, but you can't use our very obvious weapon to kill people."
> Surely autonomous murderous robots are something the U.S. government has an interest in preventing
After this fiasco, obviously not. It's quite clear the DoD most definitely wants autonomous murder robots, and also wants mass domestic surveillance.
Because the current government wants unquestioning obedience, not a discussion (assuming they were capable of that level of nuanced thought in the first place). The position of this government is "just do what I say or I will hit you with the first stick that comes to hand".
If the government doesn't want to sign a deal on Anthropic's terms, they can just not sign the deal. Abusing their powers to try to kill Anthropic's ability to do business with other companies is 10000% bullshit.
I can see both sides as pertains to Trump's initial decision to stop working with Claude, but now, this over-the-top "supply chain risk" designation from Hegseth is something else. It's hard to square it with any real principle that I've seen the admin articulate.
> What I don’t understand is why the two parties couldn’t reach agreement.
Someday we'll have to elect a POTUS who is known for his negotiation and dealmaking skills.
> What I don’t understand is why the two parties couldn’t reach agreement. Surely autonomous murderous robots are something the U.S. government has an interest in preventing.
Consider the government. It’s Hegseth making this decision, and he considers the US military’s adherence to law to be a risk to his plans.
Your comment reminds me of a story. John Adams and Lafayette met in Massachusetts about 49 years after the revolution. (Lafayette went on a U.S. tour to celebrate the upcoming 50th anniversary of independence.) Supposedly after the meeting Adams said "this was not the Lafayette I knew" and Lafayette said "this was not the Adams I knew".