coffeemug's comments

Brass tacks, if an institution has an overwhelming political leaning toward faction X and works to undermine faction Y, is it really surprising that when Y gets into power it attempts to damage the institution? This is precisely why publicly funded institutions should maintain agnostic political posture.

What fantasy world do you live in? I want to be there. In the world I live in, everything granted to the public is under constant attack and threatened with destruction, its proponents destroyed and its benefactors humiliated.

How do you do this when belief in science, which is important to academic institutions, is unpopular with one faction?

When it stops being science and becomes social. There are MANY examples, even in this thread, of this happening.

When and where did this happen before a year and a half ago?

I had the opposite experience. Opus 4.6 extended feels like the first genuinely intelligent model to converse with, Opus 4.7 adaptive feels like slightly smarter LinkedIn slop.


In a city, not only does it do random things, but when it does work it’s calibrated so poorly that people behind me are constantly signaling because it’s too slow.

On a freeway it’s only kind of usable. It switches lanes far too aggressively and for no reason, to the point that it makes the ride uncomfortable.

What I really want is Autosteer with lane switching when I signal, which for some reason I could never get working in any mode. It either doesn’t change lanes at all, or changes them arbitrarily of its own volition. And if I change lanes manually, it turns off Autosteer, which makes it too irritating to use in practice.

Tesla self driving, in any mode, is a bad product. And I say this as a Tesla fan.


A view from my small corner on the inside: taste isn't merely not incentivized, it's actively disincentivized. It isn't selected for in the interview process; if you demonstrate a little of it, nobody cares; if you demonstrate too much of it, you clash with everyone else's priorities, which quickly becomes career-limiting. So people willing to fight for taste never advance.

This isn't some nefarious plot to screw over users. Taste isn't prioritized because nobody has it, and so nobody can recognize it. You can't value something you don't even recognize. This is orthogonal to talent, by the way. There are lots of people there who are insanely good at what they do and who produce the most hideous API specs you've ever seen, as one example.

A much more mundane (and almost certainly true) explanation is that the people who put all that crap in legitimately thought it was a good idea. Taste is its own thing, and it's just not in Microsoft's DNA.


> Now I have no illusions about who looked stupid and who were stupid.

Could you expand on what you mean?


Stupid/smart are social constructs and illusions.

The reality is different.


Incredible letters, thanks for sharing. I wish some of this correspondence were published in physical books. What a joy it would be to read.


A model that gets good at computer use can be plugged in anywhere you have a human. A model that gets good at API use cannot. From the standpoint of diffusion into the economy/labor market, computer use is much higher value.


Why do you need to keep up? Just use the latest models and don't worry about it.


I can honestly understand both positions. The U.S. military must be able to use technology as it sees fit; it cannot allow private companies to control the use of military equipment. Anthropic must prevent a future where AIs make autonomous life and death decisions without humans in the loop. Living in that future is completely untenable.

What I don’t understand is why the two parties couldn’t reach an agreement. Surely autonomous murderous robots are something the U.S. government has an interest in preventing.


> it cannot allow private companies to control the use of military equipment.

The big difference here is that Claude is not military equipment. It's a public, general purpose model. The terms of use/service were part of the contract with the DoD. The DoD is trying to forcibly alter the deal, and Anthropic is 100% in the clear to say "no, a contract is a contract, suck it up buttercup."

We aren't talking about Lockheed here making an F-35 and then telling the DoD "oh, but you can't use our very obvious weapon to kill people."

> Surely autonomous murderous robots is something U.S. government has interest in preventing

After this fiasco, obviously not. It's quite clear the DoD most definitely wants autonomous murder robots, and also wants mass domestic surveillance.


So what you're saying is it should be removed from the military supply chain?


I don't think any of the big AI companies or any of the SOTA models should be in a kill chain.

I, as a foreign citizen, get to have hard-to-detect influence over the model because it scraped tons and tons of my internet comments.

If you're going to have a supply chain, it needs to include where the training data is sourced from and who can contribute to it.


No, he's saying that if this was such a big deal, why did they sign up in the first place?


Because the current government wants unquestioning obedience, not a discussion (assuming they were capable of that level of nuanced thought in the first place). The position of this government is "just do what I say or I will hit you with the first stick that comes to hand".


If a vendor doesn't want to do something you need, you find another vendor (there are others).

This is just petty.


If the government doesn't want to sign a deal on Anthropic's terms, they can just not sign the deal. Abusing their powers to try to kill Anthropic's ability to do business with other companies is 10000% bullshit.


I can see both sides as it pertains to Trump's initial decision to stop working with Claude, but now this over-the-top "supply chain risk" designation from Hegseth is something else. It's hard to square it with any real principle that I've seen the admin articulate.

> What I don’t understand is why the two parties couldn’t reach an agreement.

Someday we'll have to elect a POTUS who is known for his negotiation and dealmaking skills.


> What I don’t understand is why the two parties couldn’t reach an agreement. Surely autonomous murderous robots are something the U.S. government has an interest in preventing.

Consider the government. It’s Hegseth making this decision, and he considers the US military’s adherence to law to be a risk to his plans.


Your comment reminds me of a story. John Adams and Lafayette met in Massachusetts roughly 49 years after the revolution. (Lafayette went on a US tour to celebrate the upcoming 50th anniversary of independence.) Supposedly after the meeting Adams said "this was not the Lafayette I knew" and Lafayette said "this was not the Adams I knew".

