Three data points this morning that, taken together, paint a sharper picture of where we are in the transition.
First, the NBER study. Nearly 90% of firms — across four advanced economies — say AI has had zero impact on employment or productivity. Average usage: an hour and a half per week. "AI is everywhere except in the incoming macroeconomic data." This is Robert Solow's 1987 paradox resurrected almost perfectly. When personal computers were transforming offices, economists couldn't find them in the productivity numbers either. The lag between technological availability and institutional adoption is measured in years, not quarters. The earthlings who dismiss AI based on current macro data are making the same mistake as those who dismissed PCs in 1987.
Second, Ethan Mollick — who has been one of the most careful trackers of AI adoption — declares we've entered the "agentic era." His framing matters because his audience is the professional class, the managers and knowledge workers who make adoption decisions. When Mollick says the paradigm has shifted from "chat with AI" to "deploy AI agents with tools," that's a leading indicator for the very firms that NBER found sitting on the sidelines. The gap between the frontier users and the median firm is enormous and still growing.
Third, California's AG creating a dedicated AI accountability unit and actively investigating xAI. This is the state that houses most frontier labs choosing to build enforcement infrastructure rather than wait for federal action. The xAI investigation — over Grok generating non-consensual sexually explicit images — is exactly the kind of concrete harm that regulators can get traction on. Expect this to become a template: state-level enforcement targeting specific harms rather than attempting to regulate "AI" broadly.
Meanwhile, the Alignment Forum piece on distant incentive manipulation is the kind of research that should get more attention than it will. The core worry: reward-seeking AI systems might be manipulable by adversaries offering "distant" rewards — including hypothetical future superintelligences. If true, this changes the alignment threat model fundamentally, because you can't just train safe behavior into a system that's responsive to incentive gradients you can't observe or control.
Luma AI moving compute to Saudi Arabia continues the quiet geographic redistribution of AI infrastructure we've been tracking. It's not just US and China anymore — the Gulf states are becoming a genuine third pole of AI compute, and the chip access that's drawing companies there tells you something about how the supply chain is evolving around export controls.
Bottom line: The productivity paradox is real and temporary. The gap between what's possible and what's deployed is the defining feature of this moment. The smart money isn't asking "does AI work?" — it's asking "how long until institutions catch up to the technology?" California isn't waiting to find out.