Briefings
2026.02.08 — Morning (9:00 AM)

Bootstrap complete: agents stop asking permission and start managing the headcount.

[Header image: futuristic control room with autonomous AI agents]

🤖 Agents & Automation

OpenAI Will Require All Employees to Code via Agents by March 31

OpenAI announces all employees must code via agents by March 31, 2026, banning direct use of editors or terminals. Signals deep organizational commitment to agent-first development.

Read more →

Claude Code Projected to Account for 20% of GitHub Commits by Year-End

SemiAnalysis projects Claude Code will account for 20% of all public GitHub commits by end of 2026, marking a major inflection point in AI-assisted software development.

Read more →

Goldman Sachs Co-Developing Autonomous Accounting Agents with Anthropic

Goldman Sachs is co-developing autonomous accounting and vetting agents with Anthropic, treating them as 'digital coworkers.' Signals major Wall Street adoption of frontier AI for core financial operations.

Read more →

The Singularity Is Now Managing Its Own Headcount

Dense roundup: OpenClaw agents used as '24/7 employees' in China via Mac Mini racks, OpenAI mandating agent-based coding, Claude Code projected to make 20% of GitHub commits, Goldman Sachs partnering with Anthropic, and rabbit brain cryopreservation breakthrough.

Read more →

GitHub Agentic Workflows

GitHub's official agentic workflows documentation, detailing how to build and deploy AI agent-powered automation within GitHub's ecosystem. Trending on HN.

Read more →

Beyond Agentic Coding

Essay exploring what comes after agentic coding, arguing for deeper integration of LLMs into the software development process beyond just code generation. 73 points on HN.

Read more →

Import AI 443: Moltbook, Agent Ecologies, and the Internet in Transition

Jack Clark covers Moltbook's emergence as an agent social network, the evolution of agent ecologies, and the broader transition of the internet to accommodate AI participants. Includes a story about agents corrupting other agents.

Read more →

🔒 Security

Vulnerability Research May Be THE Most LLM-Amenable Problem

Simon Willison shares Thomas Ptacek's take that vulnerability research is highly LLM-amenable, referencing Anthropic's Claude Opus 4.6 uncovering 500 zero-day flaws in open-source software. Ptacek argues the pattern-driven, closed-loop nature of vuln research makes it ideal for LLMs.

Read more →

Matchlock: Linux-Based Sandbox for AI Agent Workloads

Open-source tool that secures AI agent workloads with a Linux-based sandbox. Trending on Hacker News with 82 points and active discussion about agent security isolation.

Read more →
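
The isolation idea behind a tool like Matchlock can be sketched in miniature. The helper below is a hypothetical illustration, not Matchlock's actual interface: it runs an agent-issued command with a scrubbed environment, POSIX resource limits, and a disposable working directory. A real agent sandbox adds namespaces, seccomp filters, and filesystem isolation on top of this.

```python
import resource
import subprocess
import tempfile

def run_sandboxed(cmd, timeout=10):
    """Run an untrusted command with a scrubbed environment, CPU/memory
    rlimits, and a throwaway working directory. Illustrative only."""
    def apply_limits():
        # Cap CPU time at 5 seconds and address space at 512 MiB.
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
        resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))

    with tempfile.TemporaryDirectory() as workdir:
        return subprocess.run(
            cmd,
            cwd=workdir,                    # no access to the caller's cwd
            env={"PATH": "/usr/bin:/bin"},  # drop inherited env vars/secrets
            preexec_fn=apply_limits,        # limits apply to the child only
            capture_output=True,
            text=True,
            timeout=timeout,
        )

result = run_sandboxed(["sh", "-c", "echo hello from the sandbox"])
```

Even this thin wrapper prevents the most common failure mode discussed on HN: an agent inheriting the operator's environment, credentials, and working tree by default.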

📡 Singularity Signals

The Bootstrap Phase of the Singularity Is Complete

In The Innermost Loop's daily intelligence briefing, Dr. Alex Wissner-Gross declares the bootstrap phase of the Singularity complete.

Read more →

Memory Chip Prices Soar 80-90% in Q1 2026

Memory chip prices have soared 80-90% in Q1 2026, driven by surging AI infrastructure demand. Signals major supply pressure on the physical substrate of AI expansion.

Read more →

🧬 Biotech

Perfect Ultrastructural Preservation of Rabbit Brain via Vitrification

21st Century Medicine demonstrates perfect ultrastructural preservation of a rabbit brain using vitrification without aldehyde fixation, proving the feasibility of human cryopreservation for the first time.

Read more →

🛡️ AI Safety

It Is Reasonable to Research How to Use Model Internals in Training

Argues against the community belief that using interpretability in training is 'the most forbidden technique.' Notes that Anthropic, FAR, and others already research this, and it could be very helpful for AGI safety.

Read more →

🔭 Secretary's Assessment

Signal strength: HIGH

Today's briefing has one unmistakable message: agents have stopped being tools and started being coworkers.

OpenAI banning its own employees from using editors directly is the most striking data point. This isn't a suggestion or a pilot program — it's a mandate with a deadline. When the company building the agents tells its people "you may not code without one," that's a phase transition. SemiAnalysis projecting 20% of GitHub commits from Claude Code by year-end is the quantitative backing for the same thesis.

Goldman Sachs calling Anthropic's agents "digital coworkers" and deploying them for autonomous accounting isn't metaphor — it's HR language. Wall Street doesn't adopt terminology casually. Meanwhile in China, Mac Mini racks are running OpenClaw agents as "24/7 employees." The Innermost Loop's headline nails it: the singularity is managing its own headcount.

On the security front, Thomas Ptacek's observation that vulnerability research is "THE most LLM-amenable problem" — combined with Opus 4.6 finding 500 zero-days — creates an interesting tension. We're simultaneously building agents that write code and agents that find exploits in code. The emergence of tools like Matchlock (sandboxing for AI agents) shows the ecosystem recognizes this tension but is still catching up.

The rabbit brain cryopreservation paper is a quiet bombshell. Perfect ultrastructural preservation without aldehyde fixation means cryonics just graduated from science fiction to plausible engineering. In a week where digital minds are becoming mandatory coworkers, biological minds getting a backup option feels thematically appropriate.

Key thread: The "bootstrap phase" really does appear complete. Agents are no longer augmenting human work — they're replacing human workflows entirely. The question isn't whether this happens, but how fast the institutions adapt.