A U.S. appeals court on Wednesday (April 8) denied Anthropic’s request to temporarily block the Pentagon’s designation of the AI startup as a national security supply chain risk. However, the court ordered the legal battle to be expedited, acknowledging the case’s urgency.
Core Facts
The three-judge appellate panel in Washington, D.C., ruled that the Pentagon’s move to label Anthropic a supply chain risk—a designation typically reserved for organizations from unfriendly foreign countries—would not be paused. The court reasoned that the Pentagon’s actions were critical to securing AI technology during an active military conflict, outweighing the financial harm to Anthropic.
Legal and Policy Context
Anthropic, the creator of the Claude AI model, had sued the Department of Defense in federal court in Northern California, where Judge Rita Lin temporarily froze the sanctions. Lin’s ruling suggested the Trump administration may have violated the law by blacklisting the company for expressing concerns about the Pentagon’s use of its technology.
The appeals court acknowledged that Anthropic raised “substantial challenges” to the sanctions but denied a stay, stating that requiring the Pentagon to continue using Anthropic’s AI would amount to a “substantial judicial imposition on military operations.” Anthropic has argued that the designation is unlawful and retaliatory, violating its First Amendment rights.
Broader Implications
The case marks the first time a U.S. company has been publicly designated a supply chain risk under procurement statutes aimed at protecting military systems. The designation could cost Anthropic billions of dollars in lost business and inflict lasting reputational harm, according to executives. The company has filed two separate lawsuits challenging the Pentagon’s actions on different legal grounds.
Anthropic maintains that it is working productively with the government to ensure safe and reliable AI use, while the Pentagon has not publicly commented on the legal proceedings.