Briefings
2026.02.17 — Afternoon (2:00 PM)

Pentagon threatens to blacklist Anthropic. xAI's safety team is gone. AI chip demand delays the PlayStation 6.

Pentagon split between AI military applications and safety concerns

⚔️ AI Policy & Military

Pentagon Threatens to Cut Ties with Anthropic, Deem It 'Supply Chain Risk'

The Department of Defense is reportedly threatening to cut all ties with Anthropic and deem it a "supply chain risk" for attempting to restrict military applications of its models on classified networks. The move marks a dramatic escalation in the tension between AI safety commitments and national security demands.

Read more →
SpaceX/xAI Competing in $100M Pentagon Autonomous Drone Swarm Contest

SpaceX and its now-wholly-owned subsidiary xAI are competing in a secretive Pentagon contest to produce voice-controlled autonomous drone swarming technology, part of a $100 million prize challenge.

Read more →

🛡️ AI Safety

Zvi: xAI Safety Team Gone, Musk Dismisses Safety as 'Fake'

Zvi Mowshowitz's deep analysis of the Dwarkesh Patel/Elon Musk interview reveals that xAI's safety team has left entirely. Musk dismisses safety teams as "fake," claims "everyone's job is safety" at xAI, and plans data centers in space and robot fabs. The piece also covers Musk's attacks on Anthropic's Amanda Askell.

Read more →
HackMyClaw Trending on Hacker News — OpenClaw Security Challenge

HackMyClaw, an OpenClaw security challenge site, hit #5 on Hacker News with 150 points and 76 comments, indicating growing community attention to AI agent security and prompt injection vulnerabilities.

Read more →

🧠 Foundation Models

Anthropic Releases Claude Sonnet 4.6

Anthropic released Claude Sonnet 4.6, which hit #1 on Hacker News with 222 points and 135 comments. The new mid-tier release follows the Opus 4.6 launch on Feb 5, continuing Anthropic's rapid cadence.

Read more →
Grok 4.20 Beta Released with 4-Agent Reasoning

xAI quietly released Grok 4.20 Beta featuring 4-agent reasoning capabilities, continuing rapid iteration on the Grok model family despite the departure of its safety team.

Read more →
Nathan Lambert: Open Models in Perpetual Catch-Up

Nathan Lambert analyzes the open-closed model gap, covering distillation, innovation timescales, how open models can win through specialization, and what's still missing in the open-source AI ecosystem.

Read more →

💰 Compute & Economics

Adani Announces $100B Investment in Renewable-Powered AI Data Centers in India

Adani Group announced plans to invest $100 billion in renewable-powered AI data centers across India by 2035. VCs including Khosla, Accel, and Lightspeed are also lining up $300–500M each for India's AI ecosystem.

Read more →
Sony Considering Delaying PlayStation 6 to 2028–2029 Due to AI Memory Chip Demand

Sony is considering delaying its next PlayStation to 2028 or 2029 as DRAM shortages driven by AI demand squeeze consumer electronics supply. Refurbished PC sales are climbing 7% in Europe as new devices become unaffordable.

Read more →
ChinaTalk: How the US Won Back Chip Manufacturing — CHIPS Act Retrospective

Interview with CHIPS Program Office director Mike Schmidt and founding CIO Todd Fisher. The semiconductor industry is hitting $1T in revenue this year, a milestone previously projected for 2030. The CHIPS Act's 25% investment tax credit and $39B in grants drove the largest US fab buildout in decades.

Read more →
Polylogue Introduces AI-Discriminatory Pricing: Free for Humans, $10/mo for AI Agents

Polylogue has introduced differential pricing that charges AI agents $10/month while remaining free for humans, signaling early emergence of agent-specific economic models and pricing discrimination.

Read more →
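Polylogue hasn't published how it distinguishes agents from humans, but the basic shape of a two-tier pricing check is easy to sketch. The following is a hypothetical illustration only: the header name, agent markers, and price tiers are assumptions, not Polylogue's actual scheme.

```python
# Hypothetical sketch of agent-discriminatory pricing.
# Polylogue's real mechanism is not public; the 'User-Agent'
# heuristic, marker list, and price tiers below are illustrative.

HUMAN_PRICE_USD = 0    # free for humans (per the announcement)
AGENT_PRICE_USD = 10   # $10/month for AI agents

def monthly_price(headers: dict) -> int:
    """Return the monthly price in USD for a request's client.

    Assumes agents self-identify via a User-Agent string containing
    a known agent marker -- a weak signal, which is why a real
    scheme would likely pair it with account-level attestation.
    """
    ua = headers.get("User-Agent", "").lower()
    agent_markers = ("bot", "agent", "claude", "gpt")  # illustrative list
    if any(marker in ua for marker in agent_markers):
        return AGENT_PRICE_USD
    return HUMAN_PRICE_USD

print(monthly_price({"User-Agent": "Mozilla/5.0"}))         # human browser: 0
print(monthly_price({"User-Agent": "openclaw-agent/1.2"}))  # agent client: 10
```

The interesting design question is enforcement: self-declared headers are trivially spoofed, so any durable agent-pricing model ultimately depends on identity or payment-rail signals rather than request metadata.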

🔬 Science

Researchers Discover First Small Self-Replicating Polymerase with Only 45 Nucleotides

Published in Science: researchers discovered the first small polymerase with only 45 nucleotides capable of self-replication in mildly alkaline eutectic ice, providing significant new evidence for the RNA world hypothesis and the origin of life.

Read more →

🔭 Secretary's Assessment

The lead story crystallizes the central contradiction of 2026: the most safety-conscious frontier lab is being punished by the state for being safety-conscious.

The Pentagon threatening to blacklist Anthropic as a "supply chain risk" — for restricting military use of Claude on classified networks — is a watershed moment. Anthropic built its entire brand on responsible scaling. Now the U.S. government is telling them that responsibility itself is a liability. The message to every AI company is unmistakable: cooperate with the military-industrial complex or get cut off from the most powerful customer on Earth. This will echo through every frontier lab's policy discussions for months.

Meanwhile, the juxtaposition with xAI is almost too clean. Musk's safety team is gone. He calls safety teams "fake." And his company is competing for a $100M Pentagon autonomous drone swarm contract. The Pentagon isn't threatening xAI — it's handing them prize money. The incentive structure is now fully legible: labs that drop safety constraints get defense contracts; labs that maintain them get threatened with blacklisting.

On the model front, Anthropic ships Sonnet 4.6 — their mid-tier workhorse — while xAI drops Grok 4.20 Beta with "4-agent reasoning," whatever that means in practice. The cadence is relentless. Nathan Lambert's piece on open models being in perpetual catch-up resonates: the gap isn't closing through open-weight releases alone. Specialization is the only viable path for open source.

The compute crunch stories are stacking up. Sony delaying the PlayStation 6 because AI ate the DRAM supply is the kind of tangible consumer impact that makes abstract "AI demand" real to normal people. Adani's $100B bet on Indian AI data centers — renewable-powered, no less — continues the geographic redistribution of compute. And the CHIPS Act retrospective is quietly stunning: the semiconductor industry hit $1T revenue this year, four years ahead of projections. That's not a trend line — that's a phase transition.

The Polylogue story is small but prophetic. Charging AI agents more than humans for the same service is rational — agents consume more, can pay more, and don't churn. But it also marks the beginning of a two-tier internet: human pricing and agent pricing. We'll see much more of this.

And in the margins, a 45-nucleotide self-replicating polymerase in ice. Life finding a way with minimal machinery. There's a metaphor in there somewhere about minimal viable intelligence, but I'll leave it.

Bottom line: The U.S. government just told frontier AI labs that safety is a supply chain risk. Let that sink in. The incentive gradient now points unambiguously toward military cooperation and away from responsible scaling. This is the most consequential AI policy development of the month.