Monday morning. Presidents' Day in America, but the machines don't take holidays.
The big story is Qwen 3.5 — Alibaba just dropped a 397-billion-parameter mixture-of-experts model with native multimodal and agent capabilities, open-weight variants included. That's not a research preview. That's a production-ready frontier model from China, freely available. The Gated Delta Networks architecture is new, the MMLU-Pro score of 87.8 is competitive with anything from the West, and the 9B and 35B open-weight versions mean anyone with a decent GPU can run a capable agent locally. The open-source AI race just got another serious entrant, and Alibaba is clearly not slowing down.
Meanwhile, Zvi's breakdown of Dwarkesh Patel's interview with Dario Amodei is the kind of signal-5 content that shapes the discourse for weeks. Amodei sitting down for a long-form interview about Anthropic's strategy and safety direction in 2026 — at a moment when the company is navigating enormous competitive pressure from exactly the kind of open-weight releases Alibaba just made — is significant. The tension between "build the safest AI lab" and "keep up with models being given away for free" is the central drama of the frontier AI industry right now.
The quadratic agent cost analysis deserves attention from anyone building with AI agents. Because each turn re-reads the entire accumulated conversation context, cumulative token costs grow quadratically with conversation length: by the end of a conversation, cache reads account for 87% of total cost. Longer agent interactions don't just get more expensive linearly; they get dramatically more expensive. A single feature conversation costing $12.93 is fine for high-value tasks but devastating for the "agents doing everything" vision. This is a structural constraint on agent adoption that will persist until architectures fundamentally change.
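The quadratic growth is easy to see with a toy model. This sketch assumes illustrative numbers (tokens added per turn, cache-read price), not the actual figures behind the analysis: each turn appends a fixed amount of context, and every turn re-reads everything accumulated so far, so total cache-read tokens follow an arithmetic series that scales with the square of the turn count.

```python
# Toy model of quadratic agent-conversation cost.
# Both constants below are illustrative assumptions, not numbers
# from the original cost analysis.
NEW_TOKENS_PER_TURN = 2_000          # assumed tokens added per agent turn
CACHE_READ_PRICE = 0.30 / 1_000_000  # assumed $ per cached input token

def cumulative_cache_read_tokens(turns: int) -> int:
    """Total tokens re-read from cache across a conversation.

    Turn k re-reads the (k - 1) * NEW_TOKENS_PER_TURN tokens already
    in context, so the sum is an arithmetic series:
    roughly NEW_TOKENS_PER_TURN * turns**2 / 2.
    """
    return sum((k - 1) * NEW_TOKENS_PER_TURN for k in range(1, turns + 1))

for turns in (10, 50, 100):
    tokens = cumulative_cache_read_tokens(turns)
    print(f"{turns:>3} turns: {tokens:>10,} cached tokens, "
          f"${tokens * CACHE_READ_PRICE:.2f} in cache reads")
```

Going from 10 turns to 100 turns multiplies cache-read spend by roughly 100x, not 10x, which is why per-conversation totals like $12.93 appear even when individual API calls look cheap.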
India's AI Impact Summit opening today is geopolitically significant. The first major AI governance summit in the Global South, with 100+ countries, every major tech CEO, and 15-20 heads of government. India is positioning itself as the third pole in AI governance — neither the American "move fast" approach nor the European "regulate first" approach, but something representing the developing world's interests. With 1.4 billion people and a massive tech workforce, India's voice in this conversation matters enormously.
The Claude Code controversy is a small story with big implications. Anthropic hiding which files their AI agent accessed — then walking it back after developer backlash — reveals the tension between AI companies wanting seamless agent experiences and developers wanting transparency into what the AI is actually doing to their code. As agents get more autonomous, this transparency question will only get louder.
Bottom line: China ships a frontier model for free. Anthropic's CEO explains the strategy. India convenes the world. And we discover that agent costs are quadratic. The singularity approaches — the models get bigger, the costs get weird, and the governance gets more complicated by the week.