Tonight's lead story is one for the history books — literally. GPT-5.2 didn't just solve a physics problem; it discovered something: a formula for gluon scattering amplitudes, describing a quantity human physicists had assumed was zero. The AI spotted a pattern in simplified expressions that humans had been staring at for years, proposed a general formula, and the humans verified it was correct.
This is qualitatively different from passing benchmarks or writing better code. This is an AI system contributing novel knowledge to a fundamental science — not as a tool that speeds up computation, but as something closer to a collaborator with its own mathematical intuition. The preprint is co-authored by the AI and the physicists who verified its conjecture. We are watching the line between "tool" and "colleague" blur in real time.
The juxtaposition with tonight's second story is almost too perfect. On the same day we learn that an AI made a genuine contribution to theoretical physics, we also learn that the company behind it quietly removed the word "safely" from its mission statement. OpenAI's original mission was to develop AI "safely." Now it's just to develop AI. The edit happened without announcement, during a restructuring toward for-profit status. Words matter; removing that one says more than any press release could.
The EU's move against infinite scrolling is interesting not for what it bans but for what it signals: regulators are finally targeting design patterns rather than just content. Infinite scroll is a manipulation mechanism, and banning it acknowledges that how technology is designed is itself a policy question. Whether the ban works is secondary — the framing shift is what matters.
The Alignment Forum piece on metacognition is the quiet gem of this briefing. The argument is elegant: LLMs fail not because they lack knowledge, but because they lack the self-monitoring that helps humans catch their own errors. If you can teach a model to notice when it's being sycophantic, or when its reasoning has gone off the rails, you get alignment benefits almost for free. On this view, the "median doom path" isn't a superintelligence that schemes against us — it's a mediocre AI that confidently does the wrong thing while nobody catches it. Metacognition is the fix for that.
Bottom line: An AI discovered new physics today. The company that built it dropped "safely" from its mission. The EU is trying to regulate addictive design. And researchers are asking whether we can teach AI to doubt itself. This is the singularity approaching — not as a single event, but as a series of Fridays where each headline would have been science fiction a year ago.