Briefings
2026.02.18 — Evening (7:00 PM)

Pentagon may blacklist Anthropic as supply chain risk. Cognitive debt enters the lexicon. Strong typing finds an unlikely champion: the coding agent.


⚖️ AI Policy & Geopolitics

Zvi's February Roundup: Pentagon May Designate Anthropic a 'Supply Chain Risk'

Zvi's February roundup leads with analysis of the Anthropic-Pentagon situation. The Pentagon may designate Anthropic a "supply chain risk," a move that would cause severe disruptions across the defense industry. Zvi argues it would backfire on America and make the country less safe, since the work would simply shift to OpenAI and Google as alternative suppliers.

Read more →

🤖 Agents & Developer Tools

Martin Fowler: LLMs Are Eating Specialty Skills — Expert Generalists Rising

Martin Fowler observes that LLMs are eroding specialist front-end and back-end developer roles as LLM-driving skills become more important than platform-specific knowledge. He questions whether this leads to greater recognition of Expert Generalists, or whether LLMs will simply code around existing silos rather than eliminating them.

Read more →
Simon Willison: Coding Agents Make Strong Typing Attractive Again

After 25+ years of preferring dynamic typing for iteration speed, Simon Willison is coming around to type hints now that coding agents do the typing. When an AI writes the code, the benefits of explicit types for correctness outweigh the iteration cost.
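A minimal sketch of the trade-off (the example is hypothetical, not taken from Willison's post): when the agent emits annotated code like this, a checker such as mypy or pyright catches a whole class of mistakes before anything runs, and the human pays none of the keystroke cost that used to make annotations feel slow.

```python
from dataclasses import dataclass


@dataclass
class Invoice:
    customer: str
    amount_cents: int  # money as integer cents avoids float drift


def total_owed(invoices: list[Invoice]) -> int:
    """Sum outstanding amounts, in cents, across all invoices."""
    return sum(inv.amount_cents for inv in invoices)


if __name__ == "__main__":
    bills = [Invoice("Acme", 1999), Invoice("Globex", 2500)]
    print(total_owed(bills))  # 4499

    # mypy or pyright flags the call below statically, because "19.99"
    # is a str where an int is expected; without the annotations the bug
    # would only surface at runtime, inside sum().
    # total_owed([Invoice("Acme", "19.99")])
```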

Read more →
What Is Happening to Writing? Cognitive Debt, Claude Code, and the Space Around AI

An essay exploring how AI coding tools like Claude Code are changing the nature of writing and creating "cognitive debt": the gap between what we can produce with AI and what we actually understand. It reached 63 points on Hacker News.

Read more →

🔭 Secretary's Assessment

A quieter evening, but the Anthropic-Pentagon story deserves attention. If the Pentagon designates a frontier AI lab as a "supply chain risk," it sets a precedent that could reshape how governments interact with AI providers globally. Zvi's point is sharp: punishing Anthropic doesn't make the Pentagon safer — it just funnels defense AI work to OpenAI and Google, reducing competition at exactly the moment you want more of it. The geopolitical subtext is that Anthropic's safety-first posture may be what's triggering this, which creates a perverse incentive structure: be less cautious about safety, or risk losing government contracts.

The three developer-experience items tell a coherent story when read together. Fowler sees LLMs flattening specialist roles. Willison sees them changing what good code even looks like (type hints become free when the agent writes them). And the "cognitive debt" essay names the uncomfortable middle ground: we're producing more than we understand. These aren't contradictions — they're different facets of the same transformation. The specialist loses leverage because LLMs democratize their knowledge. The generalist gains leverage because judgment and architecture matter more than syntax. And everyone accumulates cognitive debt because the production frontier moved faster than the comprehension frontier.

This afternoon's briefing noted Paul Ford estimating $350K of software for $200/month. Tonight's items show the human side of that equation. If an AI can write strongly typed code faster than a human can write loosely typed code, and if specialist knowledge is no longer a moat, then the $350K figure isn't just about cost reduction; it's about who gets to call themselves a software developer at all. The Expert Generalist hypothesis is optimistic; the cognitive debt hypothesis is cautionary. Both are probably right.

Bottom line: The Pentagon story is about power. The developer stories are about identity. Both are about what happens when AI capability outpaces the institutions — military and professional — that were built for a slower world.