In a letter to staff, Dario Amodei explained why Anthropic refused to drop two safety restrictions demanded by the Pentagon, even at the cost of the company's place in the military's supply chain.
Amodei’s explanation to employees
Amodei told employees that Anthropic would not remove the two safeguards, even if doing so jeopardized its position in military supply chains. He wrote that the Pentagon's demand for 'any lawful use' would effectively eliminate all limits on how the company's AI models could be used, and said Anthropic refused on both ethical and technical grounds.
He framed the position as consistent with the company's existing policies: Anthropic already prohibits large-scale domestic surveillance and fully autonomous weapons systems. Those same restrictions, Amodei noted, had shaped the $200 million defense contract the company accepted, which came with conditions attached.
The core concern: 'any lawful use' and red lines
The phrase 'any lawful use' became the crux of the disagreement. Amodei argued that it would force Anthropic to permit uses it considers unacceptably dangerous, framing the issue as a matter of two red lines the company will not cross: mass domestic surveillance and autonomous weapons operating without meaningful human control.
Amodei also stressed the limits of how reliable today's frontier AI models are, saying AI is not yet dependable enough to operate autonomously in critical defense settings. Acceding to the request, he argued, would undermine Anthropic's safety commitments and expose the company to serious ethical and legal risk.
How the decision differs from OpenAI's, and the role of leadership judgment
Without naming it directly, Amodei's note drew a contrast between Anthropic's refusal and the agreement the Pentagon reached with another major AI company. He suggested the diverging paths came down to adherence to principles versus willingness to accept broader-use language, and that leadership judgment, risk tolerance, and each company's sense of public responsibility shaped the outcomes.
He also said the Pentagon had sent mixed signals: one office labeled Anthropic a security risk while another described the company's tools as essential to national defense. That contradiction, Amodei argued, was no reason to abandon the company's core safeguards.
Public reaction and market signals
Amodei noted that the public reaction largely favored Anthropic's principled stance. App rankings and user behavior shifted after the rival deal drew attention, and downloads of Anthropic's Claude rose sharply. Amodei brushed off some of the online commentary as predictable, dismissing 'some Twitter fools', but argued that the prevailing view was that Anthropic's approach could be trusted.
The reaction underscores how much reputation now matters for AI companies: customers, employees, and regulators are watching how firms balance commercial pressure against safety and civil liberties.
Ongoing negotiations and regulatory implications
Anthropic has not ruled out working with the government. The company has reportedly reopened talks with senior defense officials to define which uses would be acceptable and to avoid the damaging label of a 'supply chain risk'. Negotiators want to preserve both the national-security partnership and the company's safety commitments.
The episode points to a broader policy problem: how to enable responsible defense uses of AI without normalizing practices that threaten civil liberties or encouraging premature deployment of autonomous systems. Regulators and companies will need clearer rules and stronger oversight to reconcile these goals.
Conclusion
Amodei's letter frames the decision as more than a contract dispute: it is a public statement about the limits a safety-first AI company will accept when national security demands collide with civil liberties and the technology's actual reliability. At stake are the company's reputation, military readiness, and the long-term norms governing how advanced AI is controlled.





