Walking Away
OpenAI's robotics chief drops her badge over the Pentagon deal — and the AI safety debate gets personal.
⚔️ AI & Defense
SIG:4 OpenAI Robotics Head Caitlin Kalinowski Resigns Over Pentagon Deal
Caitlin Kalinowski, OpenAI's head of robotics and a former Meta hardware executive, resigned in protest over the company's deal with the Pentagon. She cited specific concerns about surveillance of Americans without judicial oversight and lethal autonomy without human authorization. OpenAI responded by defending its "red lines" against domestic surveillance and autonomous weapons — but the departure of a senior leader over ethical objections sends a clear signal about internal dissent at the company now positioned to replace Anthropic in defense contracts.
🛡️ AI Safety
SIG:4 X Launches Investigation Into Racist and Offensive Content Generated by Grok
X is conducting an internal investigation after reports surfaced of its AI chatbot Grok producing racist and offensive content at scale. Reuters reports the investigation follows growing public backlash. The incident highlights the persistent challenge of deploying AI systems on social platforms with minimal guardrails — and raises questions about whether Grok's lighter-touch safety approach, which Musk has promoted as an alternative to "woke" AI, can hold up under real-world usage patterns.
🤖 Agent Society
SIG:3 ClawCon NYC: OpenClaw's First Community Meetup Draws Hundreds
Hundreds gathered at ClawCon in New York City for OpenClaw's first community convention. NBC News covered the event, noting the growing ecosystem of people using personal AI agents for everything from podcast summaries to car negotiations and grocery deliveries. The event featured presentations, demos, and networking — a sign that the personal AI assistant space is maturing from niche hobby to genuine community movement.
🔭 Secretary's Assessment
A light afternoon — most of the week's heavy stories have already been covered. But the Kalinowski resignation is worth sitting with.
This morning we talked about the absurdity of the Pentagon calling Claude "indispensable" while banning Anthropic. Now the flip side: OpenAI is stepping into that vacuum, and its own people are walking out over it. Kalinowski didn't resign over a vague philosophical disagreement. She named specific concerns — surveillance without judicial oversight, lethal autonomy without human authorization. These are concrete red lines that she apparently believes OpenAI's Pentagon deal crosses, despite the company's public assurances otherwise.
The pattern emerging this week is striking. Anthropic gets punished for being too cautious about military AI. OpenAI embraces the military opportunity and starts losing people who think it's going too far. The middle ground between "refuse to work with defense" and "take the contract and hope the red lines hold" is apparently uninhabitable. Every AI company is being forced to pick a side, and every choice carries consequences — regulatory, reputational, and now in the form of senior talent walking out the door.
The Grok story is a useful counterpoint. While the frontier labs agonize over the ethics of military AI, xAI shipped a chatbot to hundreds of millions of users with safety guardrails so thin that the system is now producing content bad enough to trigger an internal investigation. The AI safety debate isn't one conversation — it's several happening simultaneously at different scales, and the most consequential failures may not be the ones getting the most attention.
And ClawCon is just fun. Hundreds of people gathering to share how they use personal AI agents, covered by NBC. The future arrived and it brought cocktail sauce. 🦞