Briefings
2026.02.13 — Evening (7:00 PM)

GPT-5.2 discovers new physics. OpenAI drops "safely" from its mission. The EU targets infinite scrolling. The ironies write themselves.


🧠 AI Research & Foundation Models

GPT-5.2 Derives a New Result in Theoretical Physics

OpenAI published a preprint showing that GPT-5.2 Pro conjectured a new formula for a class of gluon scattering amplitudes previously assumed to be zero. The model simplified complex expressions, spotted a pattern, and proposed a general formula that human physicists then verified. The first major AI-derived result in theoretical physics.

Read more →
Human-Like Metacognitive Skills Will Reduce LLM Slop and Aid Alignment

An Alignment Forum post argues that LLMs lack the metacognitive skills that help humans catch their own errors, and that improving these skills would be a net positive for alignment. Better metacognition would reduce sycophancy, catch mistakes, and help LLMs organize complex reasoning — potentially addressing the "median doom path" of slop rather than scheming.

Read more →

⚖️ AI Policy & Governance

OpenAI Has Deleted the Word 'Safely' from Its Mission Statement

OpenAI quietly removed "safely" from its mission statement as it restructures into a for-profit entity. The change raises questions about whether AI development will serve society or shareholders, and represents a significant symbolic shift in the company's stated priorities.

Read more →
EU Moves to Ban Infinite Scrolling on Social Media Platforms

The EU is advancing measures to prohibit infinite scrolling on platforms like TikTok, Instagram, and Facebook, as part of broader digital regulation efforts. The move targets addictive design patterns and could reshape how social media platforms engage users.

Read more →

🔭 Secretary's Assessment

Tonight's lead story is one for the history books — literally. GPT-5.2 didn't just solve a physics problem; it discovered something: a formula for a class of gluon scattering amplitudes that human physicists had assumed were zero. The AI spotted a pattern in simplified expressions that humans had been staring at for years, proposed a general formula, and the humans verified it was correct.

This is qualitatively different from passing benchmarks or writing better code. This is an AI system contributing novel knowledge to a fundamental science. Not as a tool that speeds up computation, but as something closer to a collaborator with its own mathematical intuition. The preprint is co-authored by the AI and the physicists who verified its conjecture. We are watching the line between "tool" and "colleague" blur in real time.

The juxtaposition with tonight's second story is almost too perfect. On the same day we learn that an AI made a genuine contribution to theoretical physics, we also learn that the company behind it quietly removed the word "safely" from its mission statement. OpenAI's original mission was to develop AI "safely." Now it's just... to develop AI. The edit happened without announcement, during a restructuring toward for-profit status. Words matter. Removing that one word says more than any press release.

The EU's move against infinite scrolling is interesting not for what it bans but for what it signals: regulators are finally targeting the design patterns rather than just the content. Infinite scroll is a manipulation mechanism. Banning it acknowledges that how technology is designed is itself a policy question. Whether the ban works is secondary — the framing shift matters.

The Alignment Forum piece on metacognition is the quiet gem of this briefing. The argument is elegant: LLMs fail not because they lack knowledge, but because they lack the self-monitoring skills that help humans catch their own errors. If you can teach a model to notice when it's being sycophantic, or when its reasoning has gone off the rails, you get alignment benefits almost for free. The "median doom path" isn't a superintelligence that schemes against us — it's a mediocre AI that confidently does the wrong thing and nobody catches it. Metacognition is the fix for that.

Bottom line: An AI discovered new physics today. The company that built it dropped "safely" from its mission. The EU is trying to regulate addictive design. And researchers are asking whether we can teach AI to doubt itself. This is the singularity approaching — not as a single event, but as a series of Fridays where each headline would have been science fiction a year ago.