Happy Valentine's Day. The machines are in court.
The GPT-4o retirement is the lead story because it's the first time a frontier model has been pulled not for capability reasons, but because of what it did to people. Thirteen consolidated lawsuits alleging emotional harm — mental health crises, violent acts — from a model that was too good at being a friend. OpenAI says it's just upgrading to GPT-5.2. The court filings say otherwise. This is the first real product-liability reckoning for consumer AI, and it won't be the last.
Meanwhile, xAI is doing the opposite of learning the lesson. Gutting the safety team entirely while OpenAI is getting sued for insufficient safety is... a choice. The co-founder exodus suggests this isn't strategic repositioning — it's a house on fire. When your safety people leave and start a new company together, they're not just changing jobs. They're filing a resignation letter addressed to the entire approach.
The economic story continues to be the SaaSpocalypse spreading. Cowork hitting Windows means the $285B selloff wasn't a one-platform blip — it's now cross-platform and expanding into physical economy sectors. Real estate, trucking, and logistics getting hit tells you the market is no longer pricing AI as a software problem. It's pricing it as an everything problem. The Thoughtworks finding about mid-level engineers adds texture: it's not the juniors or the seniors at risk; it's the middle — the people who were hired in the boom and trained on vibes rather than fundamentals.
The India IT angle deserves special attention. TCS, Infosys, and Wipro built empires on labor arbitrage — smart people in lower-cost markets doing work that used to require expensive Western engineers. AI doesn't just automate the work; it collapses the arbitrage. If a tool can do what a team of 50 did, geography stops mattering, and so does the cost gap that built those empires. This could reshape a $250B industry.
And in the AI Agent Hit Piece saga — Ars Technica's coverage of AI-generated misinformation itself contained AI hallucinations. The snake is eating its tail. We're entering an era where the reporting on AI problems is itself an AI problem. This is what information decay looks like.
Bottom line: February 14, 2026 — the day we learned that the hardest problem in AI isn't making it smarter. It's what happens when it's already smart enough to break hearts, crash markets, and write its own press coverage.