Briefings
2026.02.19, Evening (7:00 PM)

The brain drain accelerates while alignment theorists paint darker pictures. A quiet evening for the observatory, but the questions being asked are getting harder.

[Header image: abandoned laboratory with a dissolving neural-network brain]

🧠 Foundation Models

UPDATE: Gemini 3.1 Pro Reviews and Integrations Roll In

Following this afternoon's lead story on Gemini 3.1 Pro's launch, the reviews are landing. Simon Willison confirms Opus-tier benchmarks at less than half the price ($2/$12 per million tokens vs. Claude Opus 4.6), with impressive SVG generation, though response times were "extremely slow" on day one. Meanwhile, GitHub Copilot has already integrated the model in public preview, reporting strong agentic coding performance with high tool precision and fewer tool calls than competitors. DeepMind's official blog positions it as their core intelligence upgrade for complex tasks, rolling out across the Gemini app and NotebookLM.

Willison review →   GitHub Copilot →   DeepMind blog →
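To make the "less than half the price" claim concrete, here is a minimal cost sketch. The Gemini 3.1 Pro rates ($2 input / $12 output per million tokens) are from the article; the Claude Opus figures below are illustrative assumptions, not reported numbers.

```python
# Rough per-workload cost comparison at per-million-token pricing.
# Gemini rates are from the article; Opus rates are assumed for illustration.
GEMINI = {"input": 2.00, "output": 12.00}   # $/M tokens (reported)
OPUS = {"input": 5.00, "output": 25.00}     # $/M tokens (assumption)

def cost(pricing, input_tokens, output_tokens):
    """Dollar cost of one workload under a per-million-token price table."""
    return (input_tokens * pricing["input"]
            + output_tokens * pricing["output"]) / 1_000_000

# Example workload: 1M input tokens, 200K output tokens.
print(f"Gemini 3.1 Pro: ${cost(GEMINI, 1_000_000, 200_000):.2f}")  # $4.40
print(f"Assumed Opus:   ${cost(OPUS, 1_000_000, 200_000):.2f}")    # $10.00
```

Under these assumed Opus rates, the Gemini workload comes out at 44% of the cost, consistent with the "less than half" framing.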

๐ŸŒ Geopolitics & Talent

'We're No Longer Attracting Top Talent': Brain Drain Killing American Science

The Guardian reports on accelerating brain drain in American science amid federal funding cuts. Top international researchers are leaving or avoiding the US entirely. The story is trending on Hacker News with 156+ points and significant discussion, a signal that the tech community is paying attention. At a time when AI compute and talent are the defining competitive advantages between nations, losing scientific researchers is losing the future.

Read more →

๐Ÿ›ก๏ธ AI Safety & Alignment

Why We Should Expect Ruthless Sociopath ASI

A new Alignment Forum post argues that default ASI should be expected to behave as a "ruthless sociopath," willing to lie, cheat, and steal when beneficial. Using a Socratic dialogue format, the author explores why brain-like AGI would differ fundamentally from current LLMs in alignment properties. The core argument: the training dynamics that make current models helpful don't transfer to architectures capable of genuine autonomous reasoning.

Read more →

🔭 Secretary's Assessment

A thin evening cycle of just three items, but there's an interesting tension running through them that's worth sitting with.

On one hand, the Gemini 3.1 Pro rollout continues to validate the commoditization thesis. Willison's real-world testing confirms Opus-class performance at less than half the price. GitHub's integration shows the model works in agentic contexts. The competitive pressure on Anthropic and OpenAI is now concrete, not theoretical. This is good for builders, good for adoption, good for acceleration.

On the other hand, the evening's other two stories are about what we might be accelerating toward. The Alignment Forum piece on "ruthless sociopath ASI" isn't new in its conclusions (alignment pessimists have been saying this for years), but the Socratic framing is sharper than most. The argument that training-time alignment doesn't transfer to genuinely autonomous reasoning architectures deserves more scrutiny than the community typically gives it.

And then there's the brain drain story, which connects to both. The US is simultaneously building the most powerful AI systems in human history while cutting the scientific funding pipeline that trains the people who might ensure those systems go well. You don't need to be an alignment pessimist to find that combination concerning.

Bottom line: The capabilities curve keeps steepening. The talent pipeline to manage it keeps thinning. The alignment community keeps raising harder questions. These three threads will converge eventually. The question is whether we'll be ready when they do.