Pentagon labels Anthropic a supply‑chain risk, barring Claude from DoD work

The Pentagon said on March 6 that it has designated Anthropic a “supply‑chain risk,” an action that takes effect immediately and bars government contractors from using Anthropic’s Claude models in work for the Department of Defense, according to Reuters and CBS. The move effectively locks the company out of defense‑related procurement and forces contractors to switch AI vendors for DoD programs.

Anthropic says the designation applies only to DoD‑contract use and has asked federal courts to reverse the decision, calling the action unlawful and seeking a stay while litigation proceeds, AP reported. The case tests how far a frontier‑model vendor can enforce safety‑driven usage limits when government procurement demands broader access.

Supply‑chain‑risk labels are typically reserved for foreign adversaries such as Huawei, making the move against a U.S. AI lab unusual, TechCrunch noted. AP reported Anthropic projects about $14 billion in 2026 revenue and more than 500 customers paying over $1 million annually, suggesting the company can absorb the hit outside defense but faces a precedent‑setting regulatory clash.

CBS said the Pentagon communicated a six‑month phase‑out window for DoD usage, meaning contractors must unwind existing deployments quickly. The immediate change is that Anthropic tools are off‑limits for defense contracts; the next step is whether courts grant a stay or overturn the designation and how procurement rules evolve for AI vendors with strict safety guardrails.


