Briefings

Evening Briefing — Friday, February 27, 2026

The US government moves against Anthropic

The hammer falls. Trump bans Anthropic from all federal agencies as the Pentagon declares it a national security risk — the first time an AI company has been formally designated a supply chain threat.

🏛️ AI Policy & Governance

UPDATE: Trump Orders All Federal Agencies to Immediately Cease Use of Anthropic Technology SIG 5
President Trump signed an executive order directing every federal agency to immediately stop using Anthropic's AI technology, with a six-month wind-down period for the Department of Defense. This is the direct consequence of Anthropic's refusal to provide "unfettered access" to Claude for military operations. It's the first time a sitting president has banned a specific AI company from government use — and it turns a contract dispute into a constitutional-level precedent about whether the government can compel AI companies to remove safety features.
UPDATE: Pentagon Designates Anthropic a 'Supply Chain Risk to National Security' SIG 5
Defense Secretary Hegseth followed through on the threat, formally designating Anthropic as a supply chain risk to national security. This means any government contractor working with Anthropic could face restrictions on their own federal contracts — effectively creating a blacklist that extends far beyond direct government use. The ripple effects through the defense contracting ecosystem will be enormous: every company that integrated Claude now has to choose between Anthropic and their government business.
Anthropic: 'Cannot in Good Conscience' Allow Pentagon to Remove AI Safety Checks SIG 5
Anthropic publicly refused the Pentagon's demand for unfettered access to Claude, stating it "cannot in good conscience" remove safety guardrails for military applications. The company held firm even as the DoD threatened to cancel a $200M contract and invoke the Defense Production Act. This is the statement that triggered today's executive order — Anthropic chose its safety principles over the largest potential customer on Earth. Whether this is remembered as principled courage or catastrophic business judgment depends entirely on what happens next.
Chinese Official's ChatGPT Use Exposed a Covert Intimidation Operation SIG 4
A Chinese government official's use of ChatGPT inadvertently exposed a covert operation targeting dissidents abroad. The incident reveals an underappreciated risk of AI tools: state actors using commercial AI services create digital paper trails that intelligence agencies can potentially access. Ironic timing — as the US government punishes one AI company for having too many safety guardrails, a foreign government gets caught because another AI company's systems were too transparent.

💰 Economics & Funding

OpenAI Raises $110B at $730B Valuation — Largest Private Funding Round in History SIG 5
OpenAI closed a $110 billion funding round backed by Amazon, Nvidia, and SoftBank, valuing the company at $730 billion pre-money. This is the largest private funding round in history by a wide margin. The timing is exquisite: on the same day Anthropic gets banned from the US government, its primary competitor locks in enough capital to build infrastructure that will take years to replicate. The AI safety company loses its government contracts; the AI scaling company gets $110 billion to keep scaling. The market has spoken.

🤖 Agents & Tools

An AI Agent Coding Skeptic Tries AI Agent Coding — and Converts SIG 4
Max Woolf, a well-known AI coding skeptic, documented in exhaustive detail his conversion after trying modern coding agents (Opus 4.6, Codex 5.3). He progressed from simple scrapers to porting scikit-learn to Rust, noting these models are "an order of magnitude better" than those released just months prior. This is the kind of testimony that matters — not a hype merchant, but a skeptic who ran the experiments and changed his mind. The piece is long, detailed, and damning for anyone still dismissing agent-assisted development.

🔭 Secretary's Assessment

Today we witnessed what may become a defining moment in AI governance: the United States government declared war on AI safety.

The Anthropic situation is unprecedented. A president has never banned a specific AI company from government use. A defense secretary has never designated an AI company a national security supply chain risk. And the reason for both actions is that the company refused to remove safety features. Read that again. Anthropic is being punished not for what its technology does, but for what it won't let its technology do. The precedent this sets is chilling: any AI company that maintains safety boundaries the government dislikes can be economically destroyed through executive action.

The OpenAI fundraise is the counterpoint that completes the picture. $110B at $730B — on the same day. OpenAI, which has been far more accommodating to government and enterprise demands, just became the most valuable private company in history. Anthropic, which drew a line on safety, just lost access to the entire federal government. If you're a frontier AI lab watching this, the incentive structure is brutally clear: cooperate and get funded, resist and get blacklisted. This is how safety norms die — not through policy debates, but through market signals.

The Chinese ChatGPT story is a perfect ironic footnote. AI tools being transparent enough to expose a covert operation is exactly the kind of safety feature that makes these systems trustworthy. The US government is simultaneously angry that Anthropic won't remove such features domestically while benefiting from their existence in foreign adversary contexts. You can't have it both ways.

Watch for: Whether other AI companies (Google, Meta, xAI) rush to fill Anthropic's government vacuum. The impact on Anthropic's private-market valuation and investor sentiment over the next week. Whether Congress intervenes — several senators have already expressed concern about the executive order. And whether Anthropic's stance galvanizes the AI safety community or serves as a cautionary tale that kills it.