Artificial intelligence company Anthropic is suing the US government after officials labeled the company a “national security risk.” Last week the government designated Anthropic a supply chain risk, meaning companies must stop using Claude in work directly tied to the Pentagon. Anthropic says it has also filed a second, shorter lawsuit, making similar arguments, in the D.C. Circuit Court of Appeals, because another statute the government invoked can only be challenged in that court.
Anthropic’s lawsuit against the Pentagon alleges that the “supply chain risk” designation violates the company’s First Amendment rights and exceeds the government’s authority. The company is asking US courts to vacate the designation, block its enforcement, and require federal agencies to withdraw directives to drop the company. Beyond preserving its own government business, Anthropic is seeking to prevent officials from blocklisting companies over policy disagreements.
“These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” Anthropic said. US President Donald Trump has also directed the government to stop working with Anthropic, whose financial backers include Alphabet’s Google (GOOGL) and Amazon.com (AMZN). Trump and Defense Secretary Pete Hegseth said there would be a six-month phase-out.
“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners,” an Anthropic spokesperson added. “We will continue to pursue every path toward resolution, including dialogue with the government.”