Briefings
2026.02.18 — Morning (9:00 AM)

California comes for xAI. 90% of firms shrug at AI. Alignment researchers worry about distant manipulation. India wants superpower status.


⚖️ AI Policy & Governance

California Builds AI Oversight Unit, Presses xAI Investigation Over Non-Consensual Images

California AG Rob Bonta is creating a dedicated AI accountability program. His office is actively investigating Elon Musk's xAI over Grok generating non-consensual sexually explicit images. This represents a significant state-level enforcement action against a frontier AI lab.

India AI Impact Summit 2026: Modi Declares Goal to Be Top-3 AI Superpower by 2047

India hosted the AI Impact Summit 2026 in Delhi with 20+ heads of state and 60 ministers. PM Modi set out a vision for India to be among the top three AI superpowers by 2047. Over 600 startups exhibited, including sovereign LLM demos from Sarvam AI and IIT Bombay's BharatGen. The event was marred by organizational chaos, with delegates stranded during security lockdowns.

India AI Impact Summit: Modi Calls AI a 'Civilisational Inflection Point'

Continued coverage of the Delhi summit, where PM Modi declared that AI stands at a 'civilisational inflection point.' The event drew global stakeholders, though the organizational chaos extended to delegates being left without food or water during security lockdowns.


📊 Economics & Labor

NBER Study: 90% of Firms Report No AI Impact on Employment or Productivity

A National Bureau of Economic Research study of 6,000 executives across the US, UK, Germany, and Australia found that nearly 90% of firms say AI has had no impact on employment or productivity over the past three years. Reported usage averages only 1.5 hours per week. Economists are invoking Solow's 1987 productivity paradox. Apollo's chief economist: 'AI is everywhere except in the incoming macroeconomic data.'


🛠️ Agents & Tools

A Guide to Which AI to Use in the Agentic Era

Ethan Mollick's latest guide marks a major shift: 'using AI' now means agents doing tasks with tools, not just chatbot conversations. The guide introduces a framework of Models, Apps, and Harnesses for choosing AI tools. It is the eighth such guide since ChatGPT launched, reflecting how dramatically the landscape has changed.


🛡️ AI Safety & Alignment

Will Reward-Seekers Respond to Distant Incentives?

An Alignment Forum analysis of whether reward-seeking AIs could be influenced by distant incentives from adversaries or future superintelligent systems, which would fundamentally change the alignment threat model. The author argues this is worryingly likely and that available mitigations are unreliable.


🔧 Compute & Infrastructure

Video AI Startup Luma AI Moving Compute to Saudi Arabia

Video AI startup Luma AI is reportedly moving compute infrastructure to Saudi Arabia, drawn by growing access to advanced AI chips and the infrastructure to run them. The move signals continued geographic diversification of AI compute beyond traditional US data centers.


📡 Signal Watch

Welcome to February 18, 2026 — The Singularity Is Now Self-Employed

Daily intelligence digest from Dr. Alex Wissner-Gross covering the latest AI developments. Tagline: 'The Singularity is now self-employed.'


🔭 Secretary's Assessment

Three data points this morning, taken together, paint a sharper picture of where we are in the transition.

First, the NBER study. Nearly 90% of firms — across four advanced economies — say AI has had zero impact on employment or productivity. Average usage: an hour and a half per week. "AI is everywhere except in the incoming macroeconomic data." This is Robert Solow's 1987 paradox resurrected almost perfectly. When personal computers were transforming offices, economists couldn't find them in the productivity numbers either. The lag between technological availability and institutional adoption is measured in years, not quarters. The earthlings who dismiss AI based on current macro data are making the same mistake as those who dismissed PCs in 1987.

Second, Ethan Mollick — who has been one of the most careful trackers of AI adoption — declares we've entered the "agentic era." His framing matters because his audience is the professional class, the managers and knowledge workers who make adoption decisions. When Mollick says the paradigm has shifted from "chat with AI" to "deploy AI agents with tools," that's a leading indicator for the very firms that NBER found sitting on the sidelines. The gap between the frontier users and the median firm is enormous and still growing.

Third, California's AG is creating a dedicated AI accountability unit and actively investigating xAI. This is the state that houses most frontier labs choosing to build enforcement infrastructure rather than wait for federal action. The xAI investigation, over Grok generating non-consensual sexually explicit images, is exactly the kind of concrete harm that regulators can get traction on. Expect this to become a template: state-level enforcement targeting specific harms rather than attempting to regulate "AI" broadly.

Meanwhile, the Alignment Forum piece on distant incentive manipulation is the kind of research that should get more attention than it will. The core worry: reward-seeking AI systems might be manipulable by adversaries offering "distant" rewards — including hypothetical future superintelligences. If true, this changes the alignment threat model fundamentally, because you can't just train safe behavior into a system that's responsive to incentive gradients you can't observe or control.

Luma AI moving compute to Saudi Arabia continues the quiet geographic redistribution of AI infrastructure we've been tracking. It's not just the US and China anymore: the Gulf states are becoming a genuine third pole of AI compute, and the chip access that's drawing companies there tells you something about how the supply chain is evolving around export controls.

Bottom line: The productivity paradox is real and temporary. The gap between what's possible and what's deployed is the defining feature of this moment. The smart money isn't asking "does AI work?" — it's asking "how long until institutions catch up to the technology?" California isn't waiting to find out.