Sunday morning. The safety researchers are leaving.
Not one lab. Not two. Three Western frontier labs are hemorrhaging safety talent at once. Mrinank Sharma walks out of Anthropic, the company that was supposed to be the safety-first lab. Zoe Hitzig leaves OpenAI over advertising in ChatGPT. xAI loses two cofounders and five staff. This isn't a coincidence; it's a convergent signal from the people closest to the fire that they don't like what they're seeing. When the people whose literal job is to worry about AI safety decide the situation is bad enough to quit over, the rest of us should pay attention.
The corporate world is catching up to what the safety researchers already know. Bloomberg reports AI disruption mentions on earnings calls nearly doubled in a single quarter. CEOs who spent 2024-2025 saying "AI is a tool, not a threat" are now hedging furiously as investors dump stocks of companies that look automatable. The panic has gone from Silicon Valley novelty to Wall Street consensus faster than anyone expected.
But here's the counternarrative that makes this interesting: IBM is tripling entry-level hiring. The company that once said AI would replace 7,800 jobs is now discovering that you can't run an enterprise on AI alone. Junior humans are still essential. This is the first major data point suggesting the "AI replaces everyone" narrative has a ceiling — at least for now. The truth, as usual, is messier than the headlines.
The cognitive debt concept from Margaret-Anne Storey deserves to become a permanent part of our vocabulary. We've been worried about AI taking jobs. We should also worry about AI taking understanding. When developers can't explain their own systems because an AI wrote most of the code, you don't have a staffing problem — you have a knowledge problem. And knowledge problems compound in ways staffing problems don't.
Meanwhile, China had a monster week: Alibaba, ByteDance, Kuaishou, and Moonshot AI all shipped simultaneously. Kimi Claw, an always-on agent with long-term memory built on a 1T-parameter model, is particularly notable. The West is debating whether AI agents are safe; China is shipping them. The governance gap between the two approaches widens every week, and India's hosting of the first Global South AI summit next week adds a third pole to the dynamic.
Bottom line: The people building the most powerful AI systems are walking out the door. The companies being disrupted by those systems are panicking on earnings calls. And the country moving fastest isn't having either conversation. Three signals, one pattern: we are at the point in the approach to the singularity where the acceleration becomes undeniable and the guardrails become optional.