This is hardly surprising given that OpenAI already signed a military contract. [0]
Where was the open letter then? No-one cared.
Recently, Claude was used by the administration, alongside Palantir, for the operation in Venezuela [1]. Anthropic did nothing at the time, and again...
No-one cared.
Now everyone cares when Anthropic finally said No? The contract decision was already predetermined in OpenAI's favor, even with the open letter.
So the question is: why wasn't an open letter against OpenAI written last year, when they signed that first military contract?
Either way, it seems that OpenAI and Anthropic were both OK with the US government using their models for warfare, so there is little point in defending either of them, or the employees who knew beforehand.
>Now everyone cares when Anthropic finally said No?
DoD started asking for the ability to do more stuff. That's the issue here.
"Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: Mass domestic surveillance... Fully autonomous weapons." https://www.anthropic.com/news/statement-department-of-war
>So the question is: why wasn't an open letter against OpenAI written last year, when they signed that first military contract?
Again, this story isn't about people who are against any military contract.
> DoD started asking for the ability to do more stuff. That's the issue here.
The DoD (via the NSA) has crossed those lines with big tech before, when it conducted mass domestic surveillance of its own citizens (PRISM). Governments are willing to break laws to get the job done; in ten years' time those actions may well be found illegal, but by then it will be too late.
Companies should expect that governments may try to bend or even break rules or laws under veiled pretenses.
So why expect that they would be any different today? It is the nature of governments.
The scorpion (DoD) asks the frog (Anthropic) to carry it across the river, on the condition that it won't sting. If you know the scorpion always breaches the contract first, why work with it in the first place?
> Again, this story isn't about people who are against any military contract.
Gen AI mass surveillance conducted by the administration (or any other) is already done by Google, Microsoft, AWS, Oracle, and Palantir, and soon xAI (via X). So again, no point is being made here.
Given all the above, the point is clear: OpenAI already signed a military contract last year. Why didn't the employees and insiders write an open letter back then, pledging that AI should not be used for mass surveillance?
>mass surveillance conducted by the administration (or any other) is already done by Google, Microsoft, AWS, Oracle, and Palantir, and soon xAI (via X). So again, no point is being made here.
Some companies do it, so that means Anthropic has to do it? This seems like nihilism.
> some companies do it, so that means anthropic has to do it?
Then I would have expected open letters from anonymous OpenAI employees from the very beginning, in 2025 when those military contracts were signed, as a reassurance / pledge around those boundaries, since they ultimately knew. But of course, the letters came only after Anthropic rejected the DoW in 2026. Very late for that.
Anthropic should have known about PRISM and the nature of governments, and should not have extended the benefit of the doubt in the first place, since doing so is directly incompatible with their 'principles'. Otherwise none of this would have happened.
Their first mistake was trusting the government (especially this one); like almost all governments, it is ready to test what it can get away with and breach the contract first.
Anthropic naively expected that the current administration (or any other) would change for the better. Their lesson is that they should never trust governments to honor their contracts.
I think people who aren't objecting to AI mass surveillance of populations: haven't recognized how thorough and invasive these technologies will become; think the current governments share their values and lists of enemies; or naively think that government priorities will never change and that scopes will never increase.
How sure are we that something fishy isn't going on with the models, the alignment research teams, and the answers the models are giving? Like maybe Claude's alignment made it worse than GPT at masking as Allied Magacomputer, and that's why they're up in arms?
[0] https://www.theguardian.com/technology/2025/jun/17/openai-mi...
[1] https://www.theguardian.com/technology/2026/feb/14/us-milita...