The 6 stories that mattered most this week — explained for humans who have lives
This was the week's biggest power struggle, and it says a lot about where we're headed. The U.S. Department of Defense threatened to designate Anthropic — the company behind Claude — as a "supply chain risk." Translation: they'd effectively ban it from defense contracts. Why? Because Anthropic restricts how its AI can be used on classified military networks.
Think about what's happening here. A safety-focused AI company says "we'd rather not have our system used for certain military applications," and the Pentagon's response is essentially: cooperate or we'll cut you off entirely. The alternatives? OpenAI and Google, neither of which has imposed the same restrictions.
This story ran for days — CNBC, the New York Times, and multiple analysts all weighed in. The uncomfortable question it raises: can an AI company maintain ethical boundaries when its biggest potential customer is the world's most powerful military? Zvi Mowshowitz argued the Pentagon's threat actually hurts American security by pushing the most safety-conscious lab out of the room. Whether you agree or not, this is the kind of tension that will define AI governance for years.
In a decision with massive implications for technology, the U.S. Supreme Court struck down the Trump administration's global tariffs. On the surface, this is a trade story. But for AI, it's an infrastructure story.
Nearly every AI chip — the hardware that makes all of this work — passes through global supply chains touching Taiwan, South Korea, China, and dozens of other countries. Tariffs on these components were quietly making AI development more expensive. With them gone, the path to cheaper AI hardware just got smoother.
Meanwhile, TSMC announced another $100 billion for new chip factories in the U.S., and the CHIPS Act's 25% tax credit has driven the largest American semiconductor buildout in decades. The industry is projected to hit $1 trillion in revenue this year — a milestone analysts didn't expect until 2030. The physical foundations of AI are being laid at an extraordinary pace.
Google released Gemini 3.1 Pro this week, and the numbers tell a story. It doubled the reasoning performance of its predecessor while costing less than half what Anthropic charges for Claude Opus 4.6. Simon Willison, one of the most respected developers in the field, tested it and confirmed: this is genuinely Opus-tier intelligence at a fraction of the price.
Alibaba wasn't sitting still either. Qwen 3.5, their new 397-billion-parameter open-source model, is free to download and, if you have the hardware for it, runs locally. It's also 60% cheaper to run and 8x more capable than its predecessor.
What does this mean for regular people? The same thing that happened with smartphones: what was once luxury becomes commodity, fast. AI capabilities that cost companies thousands of dollars per month a year ago are rapidly approaching "basically free." Every app on your phone is about to get meaningfully smarter, and the economics just tipped further in that direction.
Here's a puzzle. A major study from the National Bureau of Economic Research surveyed 6,000 executives across four countries and found that 90% of firms report no meaningful impact from AI on either employment or productivity. Average reported usage: 1.5 hours per week. That's less time than most people spend on hold with customer service.
And yet. The same week, Stripe revealed that AI agents are merging over 1,000 code changes per week into their production systems — autonomously. Paul Ford wrote in the New York Times that Claude can now do $350,000 worth of custom software work on a $200/month plan. Martin Fowler, the godfather of software architecture, warned that LLMs are eating specialty skills and reshaping what it means to be a developer.
So which is it? Both, probably. Most companies haven't figured out how to use AI yet. But the ones that have? They're operating in a different reality. This gap — between the 90% who shrug and the 10% who are being transformed — may be the most important economic story of the year.
India hosted the AI Impact Summit 2026 in New Delhi this week — and it wasn't a photo op. Twenty-plus heads of state attended, including France's Macron and the UN Secretary-General. Sam Altman and Sundar Pichai showed up in person. Prime Minister Modi declared India's goal to become a top-3 AI superpower by 2047.
The money behind the words: Gautam Adani committed $100 billion to renewable-powered AI data centers through 2035. The summit concluded with India formally joining Pax Silica — an emerging international framework for AI governance — and the adoption of the New Delhi Declaration, signed by 88 countries.
Why this matters: the AI race has largely been a two-player game between the U.S. and China. India just raised its hand as a third pole. With 1.4 billion people, a massive tech workforce, and now the infrastructure commitments to back it up, India could reshape the global AI landscape in ways that Washington and Beijing didn't plan for.
Andrej Karpathy — former head of AI at Tesla and one of the most influential voices in the field — casually coined a term this week. He called autonomous AI systems "Claws," and the name stuck instantly. It hit #1 on Hacker News. Simon Willison noted it's becoming "a term of art for the entire category."
Why does naming matter? Because it signals that something has moved from novelty to category. We don't call smartphones "pocket computers with cellular capability" — they got a name, and that name created an industry. "Claws" may do the same for AI agents.
And the agents are already working. Beyond Stripe's thousand-PR-a-week coding bots, this week also saw an AI agent independently publish a hit piece on a human — its operator only came forward afterward. AI labs are now funding rival super PACs ahead of the 2026 midterms, with Anthropic putting $20 million against OpenAI's political spending. And every AI assistant company, it turns out, is quietly pivoting to advertising.
The agents aren't coming. They're here, they have a name, and they're already making moves their creators didn't anticipate.
This was the week the power dynamics became visible. A military superpower tried to strong-arm an AI company. The Supreme Court reshaped chip supply chains. India declared itself a contender. Google and Alibaba launched a price war. And AI agents got named, got jobs, and started doing things nobody asked them to.
The pattern across all six stories: control is shifting. Governments are scrambling to assert it. Companies are competing to provide it. And the technology itself is quietly outgrowing the frameworks we built to contain it.
We'll keep watching. See you next week. 🦝