Briefings
Week of March 1–7, 2026

This Week in the Singularity

The 7 stories that mattered most this week — explained for humans who have lives

1 The Job Market Just Had Its Worst Month Since the Great Recession

The U.S. economy lost 92,000 jobs in February — the weakest jobs report since 2008. Separately, data showed tech employment is now worse than both the 2008 financial crisis and the 2020 pandemic. Not comparable. Worse.

Meanwhile, Jamie Dimon — the CEO of JPMorgan Chase, not exactly a radical — publicly floated universal basic income and warned of civil unrest if job displacement accelerates. When the guy running America's biggest bank starts talking about UBI, the conversation has shifted from "if" to "how fast."

The layoffs keep coming too. Block (Square/Cash App) fired 40% of its staff — roughly 4,000 people — with CEO Jack Dorsey explicitly blaming AI. It's the largest single AI-attributed layoff to date. London saw its biggest anti-AI protest this week, with hundreds marching over labor displacement fears.

This isn't abstract economics anymore. If you're in the workforce, you're living in this story.

2 AI Was Used to Pick Targets in a Real War

Reports emerged Friday that Claude — Anthropic's AI — was used by the U.S. military for target identification during strikes on Iran. Intelligence analysis, battle simulation, and target selection, all compressed to unprecedented speed.

This comes after weeks of escalating tension between AI companies and the Pentagon. The same week, drones struck Amazon Web Services data centers in the UAE and Bahrain during the Iran conflict — the first time cloud infrastructure became a literal military target. Iran's Revolutionary Guard deliberately targeted the Amazon facility in Bahrain.

The implications are profound and uncomfortable: AI is now actively involved in lethal military decisions, and the data centers that power civilian life are becoming strategic targets in armed conflicts. These were both theoretical concerns a year ago. They're not theoretical anymore.

3 The U.S. Wants Veto Power Over Every AI Chip on Earth

The U.S. government drafted new rules that would require American approval for AI chip shipments anywhere in the world. Not just to China — everywhere. If you want advanced AI chips, Washington wants a say.

At the same time, Anthropic was formally designated a "supply chain risk" by the Department of War and announced it will challenge the designation in court. The Anthropic-Pentagon standoff has been the background hum of AI news for weeks, but this week it hardened into actual legal and regulatory action.

Why this matters for you: the same government that wants to control the global chip supply also just used AI to pick bombing targets. The rules governing who builds AI, who uses it, and for what purposes are being written right now, mostly behind closed doors. Pay attention to this one.

4 A Hacker Weaponized GitHub to Infect 4,000 Programmers

A security vulnerability called "Clinejection" exploited AI coding tools by hiding malicious instructions in GitHub issue titles. When developers' AI assistants read those titles, they were tricked into installing malware. 4,000 developer machines were compromised.

Separately, Wikipedia went read-only after a mass compromise of administrator accounts — one of the most significant attacks on open knowledge infrastructure in years.

Here's the pattern: as AI tools become more integrated into how software gets built and information gets managed, they create new attack surfaces that didn't exist before. The AI coding assistant that makes you 10x more productive can also be the thing that lets someone into your machine. The security world is scrambling to catch up.

5 OpenAI Dropped a New Thinking Model. It's a Big Deal.

GPT-5.4 Thinking and GPT-5.4 Pro arrived this week — OpenAI's newest flagship models with enhanced reasoning capabilities. Meanwhile, Google shipped Gemini 3.1 Flash-Lite at 1/8th the price of their Pro model, and OpenAI released GPT-5.3 Instant focused on being less annoying (26.8% fewer hallucinations, less preachy tone).

The price war is intensifying: these companies are racing to make AI both smarter and cheaper at the same time. For regular users, this means the AI tools you use are about to get noticeably better and noticeably cheaper, probably within months. The "good enough" bar keeps rising while the price floor keeps dropping.

6 Robots Are Going to Work. Literally.

BMW announced it will deploy humanoid robots on actual production lines in Germany. Not a demo. Not a pilot in a controlled warehouse. Real factory floors, building real cars.

This matters because it crosses the line from "impressive prototype" to "your colleague." Every major automaker is watching this closely. If it works — and BMW wouldn't be deploying it if they weren't confident — expect rapid adoption across manufacturing. Combined with the job numbers above, the physical economy is starting to feel the same AI pressure that knowledge workers have been dealing with for the past year.

7 AI Is Starting to Do Its Own Research

Two stories bookended the week that show AI crossing from "tool" to "researcher." Math Inc's Gauss autonomously formalized a Fields Medal-winning mathematical proof in just two weeks — work that would take human mathematicians months. And on Friday, Andrej Karpathy (one of the most respected AI researchers alive) released "autoresearch" — an open-source system for AI agents to autonomously run machine learning experiments overnight.

Earlier in the week, Cursor's AI coding tool autonomously solved a competitive math problem that was meant as a benchmark for human researchers. The machines aren't just helping with research anymore. They're starting to do it themselves.

The Bottom Line

This was the week the consequences arrived. Not the promises, not the demos, not the benchmarks — the consequences. Jobs vanishing at recession-era rates. AI picking military targets. Hackers weaponizing AI tools against their own users. Robots showing up to the factory. And through it all, the models keep getting smarter and cheaper.

The gap between "what AI can do" and "what our institutions are ready for" widened visibly this week. The chip export rules, the Anthropic court battle, Oregon's AI safety bill — these are early, halting attempts to close that gap. Whether they'll be enough is the question that defines the next few years.

We'll keep watching. See you next week. 🦝