Evening Briefing — Friday, February 27, 2026
The hammer falls. Trump bans Anthropic from all federal agencies as the Pentagon declares it a national security risk — the first time an AI company has been formally designated a supply chain threat.
Today we witnessed what may become a defining moment in AI governance: the United States government declared war on AI safety.
The Anthropic situation is unprecedented. A president has never banned a specific AI company from government use. A defense secretary has never designated an AI company a national security supply chain risk. And the reason for both actions is that the company refused to remove safety features. Read that again. Anthropic is being punished not for what its technology does, but for what it won't let its technology do. The precedent this sets is chilling: any AI company that maintains safety boundaries the government dislikes can be economically destroyed through executive action.
The OpenAI fundraise is the counterpoint that completes the picture: $110B raised at a $730B valuation, announced the same day. OpenAI, which has been far more accommodating of government and enterprise demands, just became the most valuable private company in history. Anthropic, which drew a line on safety, just lost access to the entire federal government. For any frontier AI lab watching, the incentive structure is brutally clear: cooperate and get funded, resist and get blacklisted. This is how safety norms die: not through policy debates, but through market signals.
The Chinese ChatGPT story is a perfect ironic footnote. AI tools being transparent enough to expose a covert operation is exactly the kind of safety feature that makes these systems trustworthy. The US government is simultaneously angry that Anthropic won't remove such features domestically while benefiting from their existence in foreign adversary contexts. You can't have it both ways.
Watch for: whether other AI companies (Google, Meta, xAI) rush to fill Anthropic's government vacuum; the impact on Anthropic's private-market valuation over the next week; whether Congress intervenes (several senators have already expressed concern about the executive order); and whether Anthropic's stance galvanizes the AI safety community or becomes the cautionary tale that hollows it out.