
US court declines to block Pentagon's Anthropic blacklisting for now

CGTN

The Anthropic logo. /VCG

A Washington, D.C., federal appeals court on Wednesday declined to block the Pentagon's national security blacklisting of AI company Anthropic for now, a win for the Trump administration that comes after another appeals court arrived at the opposite conclusion in a separate legal challenge by the company.

Anthropic, developer of the popular Claude AI assistant, alleges that Defense Secretary Pete Hegseth overstepped his authority when he issued orders designating the company as a national security supply-chain risk under two different laws over its refusal to remove certain usage guardrails on its products. Anthropic is challenging each separately, claiming the label blocks it from Pentagon contracts and could trigger a government-wide blacklisting.

Anthropic executives have said the designation could cost the company billions of dollars in lost business and reputational harm.

Hegseth's unprecedented move came after Anthropic refused to allow the military to use AI chatbot Claude for US surveillance or autonomous weapons due to safety and ethics concerns.

A California federal judge blocked one of the orders on March 26, saying the Pentagon appeared to have unlawfully retaliated against Anthropic for its views on AI safety.

In the D.C. case, a panel of judges of the US Court of Appeals for the District of Columbia Circuit denied Anthropic's bid to pause the designation while the case plays out. The decision is not a final ruling.

An Anthropic spokeswoman said in a statement following Wednesday's ruling that the company is confident the court will ultimately agree the supply-chain risk designation is unlawful.

Acting Attorney General Todd Blanche hailed the ruling as a victory for military readiness in a social media post Wednesday.

"Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company," Blanche said, using Trump's new name for the Defense Department.

The order marked the first time a US company has been publicly designated a supply-chain risk under obscure government-procurement statutes aimed at protecting military systems from enemy sabotage or infiltration.

In its lawsuits, Anthropic says the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company said it was not given a chance to dispute its designation, in violation of its Fifth Amendment right to due process.

The lawsuits say the designations were unlawful, unsupported by facts and inconsistent with the military's past praise of Claude.

In a court filing, the Justice Department said Anthropic's refusal to lift the restrictions could create uncertainty within the Pentagon over how it could use Claude and risk disabling military systems during operations.

The government said its decision stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety.

The D.C. case concerns a law that could lead to the blacklist widening to the broader civilian government following an inter-agency review process.

The California case deals with a narrower statute that excludes Anthropic from Pentagon contracts related to military information systems.

Source(s): Reuters