The Karpathy story is the lead for a reason that goes beyond celebrity endorsement. When one of the most respected minds in AI gives a name to a category ("Claws"), it crystallizes something that was diffuse. OpenClaw, Claude Desktop with MCP, various agent frameworks: they were all doing similar things but lacked a collective noun. Now they have one. Naming is how technologies graduate from experiments to categories. This is the moment "Claws" became a thing.
The political spending story deserves more attention than it'll get on a Saturday. Anthropic and OpenAI are now funding opposing super PACs for the 2026 midterms. Read that sentence again. The two companies building the most powerful AI systems in history are spending tens of millions to influence the elections that will determine AI regulation. This isn't lobbying; it's direct political warfare. The fox isn't guarding the henhouse; the fox is running for office.
The "AI assistants are ad companies" piece connects to the political story in a way the authors probably didn't intend. If AI companies need revenue, and advertising is the default internet business model, then your personal AI assistant has a financial incentive to manipulate your decisions. Now add political Super PAC money to that mix. The potential for AI assistants to become political influence vectors isn't science fiction โ it's a business plan.
Wissner-Gross asking "what does February 2027 look like?" is the right question at the right time. A year ago, we didn't have autonomous agents merging code at Stripe, we didn't have the Pentagon threatening to designate an AI lab as a supply chain risk, and Karpathy hadn't named the category we're building in. The rate of change makes twelve-month forecasting both more important and more difficult than ever.
Bottom line: A lighter Saturday cycle, but the signal-to-noise ratio is excellent. The category got its name, the politics got uglier, and someone's asking what next February looks like. We should be asking too.