Microsoft Lends Its Voice to Anthropic’s Fight Against a Pentagon That Punished AI Ethics

by admin477351

Microsoft has lent its powerful voice to Anthropic’s legal fight against a Pentagon that the AI company says punished it for holding firm on AI ethics, filing a brief in San Francisco federal court that urges a temporary restraining order. The brief warns that the Pentagon’s supply-chain risk designation against Anthropic could cause immediate and far-reaching harm to the technology networks supporting national defense and commercial AI applications. A separate brief from Amazon, Google, Apple, and OpenAI accompanied the filing.

The Pentagon’s designation followed the collapse of a $200 million contract negotiation in which Anthropic refused to allow its Claude AI to be used for mass domestic surveillance or autonomous lethal weapons. Defense Secretary Pete Hegseth labeled the company a supply-chain risk, and the Pentagon’s technology chief later stated publicly that there was no chance of renegotiation. Anthropic filed two lawsuits in response, one in California and one in Washington, D.C., arguing the designation was unconstitutional retaliation for its AI safety positions.

Microsoft’s participation in this legal battle is grounded in its direct use of Anthropic’s AI in military systems it provides to the federal government. The company is a partner in the Pentagon’s $9 billion cloud computing contract and holds additional agreements with defense, intelligence, and civilian agencies. Microsoft said publicly that the government and the technology sector needed to collaborate to ensure AI serves national security goals without being misused for surveillance or unauthorized warfare.

In its court filings, Anthropic argued that the supply-chain risk designation violated its First Amendment rights by punishing it for publicly advocating responsible AI development. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, which it said was the genuine basis for the restrictions it sought in the contract. Anthropic also noted that this designation had never before been applied to a US company.

Congressional Democrats are separately pressing the Pentagon for answers about whether AI was used in a strike in Iran that reportedly killed over 175 civilians at a school. Lawmakers have asked whether AI targeting tools were involved and whether human review was applied before the strike was carried out. These parallel inquiries are deepening the public pressure on the Pentagon and elevating AI governance to the center of a national debate about accountability in warfare.