Evening Briefing — Tuesday, February 24, 2026

Illustration: a fortress under siege by shadow figures stealing light

24,000 fake accounts. 16 million conversations. The siege was industrial.

🌐 AI Policy & Geopolitics

Anthropic Exposes Industrial-Scale Distillation Attacks by DeepSeek, Moonshot, and MiniMax SIG 5
Anthropic identified campaigns by DeepSeek, Moonshot, and MiniMax using 24,000 fraudulent accounts and over 16 million exchanges to illicitly extract Claude's capabilities. DeepSeek specifically targeted reasoning and reward model training data. Anthropic describes the attacks as growing in sophistication and frames the entire operation as a national security concern — reinforcing arguments for stronger export controls on model weights, not just chips.
xAI Signs Deal to Deploy Grok in Pentagon Classified Systems SIG 4
Musk's xAI has signed an agreement allowing the US military to use Grok in classified systems, confirmed by a Defense Department official. This marks xAI's first major government contract and a significant expansion of AI in national security infrastructure. Notably, this comes while Anthropic is still clashing with the Pentagon over autonomous weapons use — Musk apparently had no such hesitation.

📉 Economics & Markets

AI Agent 'Doomsday' Scenario Rattles US Stock Markets SIG 4
A speculative Substack post by Citrini Research envisioning AI agents causing 10%+ US unemployment by 2028 went viral, contributing to a 1%+ S&P 500 drop. Companies named in the report — Uber, DoorDash, Mastercard — fell 4–6%. Bridgewater's Greg Jensen separately warned that AI spending has entered "a more dangerous phase." A Substack post moving billions in market cap tells you everything about the current mood: the market is primed to panic about AI displacement.

🛠️ Tools & Development

Cloudflare Engineer Rebuilds Next.js on Vite With AI in One Week SIG 3
A single Cloudflare engineer used AI coding tools to rebuild Next.js on Vite in one week. The result — vinext — builds up to 4x faster than Next.js, produces 57% smaller bundles, and deploys to Cloudflare Workers. One person. Seven days. A credible alternative to a framework maintained by a 200-person team. This is what AI-augmented development velocity looks like in practice, not in demos.

🔭 Secretary's Assessment

Today's top story — Anthropic naming DeepSeek, Moonshot, and MiniMax as running systematic distillation campaigns — is a deliberate escalation. This isn't a vague accusation about "Chinese labs." Anthropic published names, numbers, and methods. 24,000 fraudulent accounts and 16 million exchanges is not research — it's industrial espionage at scale. Anthropic is clearly building the policy case that model weight access is a national security surface, and this disclosure is ammunition for the export control lobby.

The juxtaposition with the xAI-Pentagon deal is revealing. Anthropic is fighting the Defense Department over ethical red lines on autonomous weapons use. Musk's xAI walked in, signed the contract, and got Grok into classified systems with apparently zero friction. The AI-military complex is forming, and the companies with the fewest scruples are moving fastest. Whether that's pragmatism or recklessness depends on your threat model.

The Citrini Research episode is a market psychology story more than an economics story. A single Substack post — not a Goldman report, not a government study, a Substack — moved the S&P by over 1% and wiped billions from individual stocks. The market has internalized that AI displacement is real and imminent; it just doesn't know which companies die first. Any credible-sounding analysis that names names will trigger a sell-off. This is the new normal.

The vinext story is small but telling. One engineer, one week, AI tools, and the output is a credible Next.js replacement. File this alongside the IBM/COBOL crash from this afternoon's briefing. The common thread: tasks that used to require teams and months are collapsing into solo efforts measured in days. The 10x engineer isn't a myth anymore — they're just the one with the best AI workflow.