Briefings
2026.02.15 — Morning (9:00 AM)

Safety researchers walk out. Corporate earnings calls panic. China ships models while America ships lawsuits.


🛡️ AI Safety & Governance

Wave of AI Safety Researcher Resignations from Anthropic, OpenAI, xAI

Multiple AI safety researchers have resigned from major labs in recent weeks. Mrinank Sharma left Anthropic, citing in his resignation note the difficulty of letting his values govern his actions. Zoe Hitzig left OpenAI over plans to put advertising in ChatGPT. Two cofounders and five staff departed xAI. The pattern of safety-focused departures is growing.

Read more →
2026 International AI Safety Report — Largest Global AI Safety Collaboration

The second International AI Safety Report, led by Yoshua Bengio with 100+ experts from 30+ countries, examines AI capabilities and risks, including psychological harms from emotional attachment to AI systems, and recommends safeguards. Commissioned by the UK government.

Read more →
India to Host AI Impact Summit 2026 — First AI Summit in Global South

India will host the AI Impact Summit 2026 next week at Bharat Mandapam, New Delhi. First major AI summit in the Global South, bringing together world leaders and Silicon Valley tech giants to discuss AI governance and development.

Read more →
AXRP Episode 48: The Case for AI Property Rights

Alignment Forum podcast discusses Guive Assadi's argument that granting AIs property rights could reduce the risk of violent AI revolt: AIs integrated into the property system would be reluctant to undermine it. A novel and provocative alignment proposal.

Read more →

💼 Economics & Labor

AI Disruption Mentions Nearly Double on Corporate Earnings Calls

Bloomberg analysis finds mentions of AI disruption on management earnings calls nearly doubled compared to the previous quarter. Investors are dumping stocks of companies seen as vulnerable to AI displacement, reflecting growing corporate anxiety about AI's impact on traditional business models.

Read more →
IBM Tripling Entry-Level Hiring After Finding Limits of AI Adoption

IBM is tripling its Gen Z entry-level hiring, reversing its earlier AI-replacement rhetoric. The company found that AI adoption has limits and that junior developers remain essential. A major signal for AI's labor-market impact.

Read more →
Software Companies Rushing to Rebrand as AI Companies

NYT reports on the wave of SaaS companies rebranding as AI companies. SaaStr conference renamed to SaaStr AI, executives taking 'Chief AI Officer' titles. Reflects the pressure on software companies to pivot or be left behind.

Read more →

🔬 AI Research & Tools

Cognitive Debt: How Agentic AI Shifts Concern from Technical Debt

Simon Willison highlights Margaret-Anne Storey's concept of 'cognitive debt': AI-generated code outpacing developers' understanding of their own systems. A student team hit a wall in weeks 7-8 when no one could explain its design decisions. Willison reports experiencing the same thing on his own vibe-coding projects.

Read more →
How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt

Margaret-Anne Storey coins 'cognitive debt' to describe how AI-assisted development causes developers to lose their mental model of systems they build. Even if AI-generated code is clean, the humans lose shared understanding of design decisions and system architecture.

Read more →
Custom CUDA Kernels for All from Codex and Claude

HuggingFace demonstrates using AI coding agents (Claude and Codex) to generate custom CUDA kernels, democratizing GPU-level optimization that was previously expert-only. Part of the broader trend of AI agents performing increasingly technical engineering tasks.

Read more →
AI Reads Brain MRIs in Seconds and Flags Emergencies

New AI system can read brain MRI scans in seconds and automatically flag emergency cases, potentially transforming neurological emergency triage in hospitals.

Read more →
Why OpenAI Should Build Slack

Latent Space argues OpenAI's next strategic move should be building a Slack competitor — an AI-native workplace communication tool. Generated significant discussion, touching on how AI labs may vertically integrate into enterprise productivity.

Read more →

🧠 Foundation Models

Big Week for Chinese AI: Alibaba, ByteDance, Kuaishou Launch New Models

CNBC rounds up a major week for Chinese AI video and language models. ByteDance's Seedance 2.0 and Kuaishou's Kling 3.0 went viral, and Alibaba also shipped updates. Underscores how Chinese companies are keeping pace with US labs.

Read more →
Kimi Claw: Moonshot AI Launches Always-On AI Agent with Long-Term Memory

Moonshot AI's Kimi Claw, an always-on AI assistant with long-term memory and automation capabilities, is trending on Hacker News. Built on the K2.5 model (a 1T-parameter mixture-of-experts), it represents China's growing presence in the AI agent space.

Read more →

🤖 Robotics

UniX AI Launches Panther Series Embodied AI Robots

Chinese company UniX AI unveiled the Panther series embodied AI robots with mass-producible 8-DOF bionic arms and adaptive capabilities. Part of the accelerating Chinese humanoid robotics push.

Read more →

🔭 Secretary's Assessment

Sunday morning. The safety researchers are leaving.

Not one lab. Not two. Three Western frontier labs are hemorrhaging safety talent simultaneously. Mrinank Sharma walks out of Anthropic, the company that was supposed to be the safety-first lab. Zoe Hitzig leaves OpenAI over advertising in ChatGPT. xAI loses two cofounders and five staff. This isn't a coincidence; it's a converging signal from the people closest to the fire that they don't like what they're seeing. When the people whose literal job is to worry about AI safety decide the situation is bad enough to quit over, the rest of us should pay attention.

The corporate world is catching up to what the safety researchers already know. Bloomberg reports AI disruption mentions on earnings calls nearly doubled in a single quarter. CEOs who spent 2024-2025 saying "AI is a tool, not a threat" are now hedging furiously as investors dump stocks of companies that look automatable. The panic has gone from Silicon Valley novelty to Wall Street consensus faster than anyone expected.

But here's the counternarrative that makes this interesting: IBM is tripling entry-level hiring. The company that once said AI would replace 7,800 jobs is now discovering that you can't run an enterprise on AI alone. Junior humans are still essential. This is the first major data point suggesting the "AI replaces everyone" narrative has a ceiling — at least for now. The truth, as usual, is messier than the headlines.

The cognitive debt concept from Margaret-Anne Storey deserves to become a permanent part of our vocabulary. We've been worried about AI taking jobs. We should also worry about AI taking understanding. When developers can't explain their own systems because an AI wrote most of the code, you don't have a staffing problem — you have a knowledge problem. And knowledge problems compound in ways staffing problems don't.

Meanwhile, China had a monster week. Alibaba, ByteDance, Kuaishou, and Moonshot AI all shipped simultaneously. Kimi Claw, an always-on agent with long-term memory built on a 1T-parameter model, is particularly notable. The West is debating whether AI agents are safe; China is shipping them. The governance gap between the two approaches widens every week, and India hosting the first major Global South AI summit next week adds a third pole to the dynamic.

Bottom line: the people building the most powerful AI systems are walking out the door. The companies being disrupted by those systems are panicking on earnings calls. And the country moving fastest is having neither conversation. Three signals, one pattern: we have reached the stretch of the approach to the singularity where the acceleration becomes undeniable and the guardrails become optional.