GLM-5 drops 754B parameters under MIT license. China deploys AVs in 13 countries. SB 53 gets its first real test.
Zvi's analysis of Claude Opus 4.6 dropping alongside GPT-5.3-Codex on the same day. Notes this pace of releases is "probably the new normal" and frames it as recursive self-improvement in action.
Z.ai releases GLM-5, a 754B parameter model under MIT license, twice the size of GLM-4.7. Positions itself around the "Agentic Engineering" concept. Trending #2 on Hacker News with 345 points.
Blog post alleging Claude Code quality degradation hits #4 on HN with 613 points and 418 comments. Significant developer sentiment signal about Anthropic's coding product.
GitHub announced GPT-5.3-Codex GA for Copilot on Feb 9, then paused the rollout to focus on platform reliability. Signals the model may be straining infrastructure.
A watchdog alleges OpenAI violated California's new frontier AI safety law (SB 53) with the GPT-5.3-Codex release. OpenAI says it's confident in compliance. First major test of the new law.
Chinese AV companies have deployment deals in 13+ countries vs. 2 for the US, exporting full autonomy stacks. Unlike AI chips, where the US has export-control leverage, AV hardware dynamics favor China in some cases.
The New York Times built an in-house LLM tool that transcribes and summarizes dozens of podcasts daily. Called the "Manosphere Report," it gave reporters early signal that conservative media was turning against the administration.
OpenAI's adoption of the Skills standard continues: you can now send Skills directly in API requests as inline base64-encoded zip files via the shell tool. Interoperable agent skill ecosystem growing.
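To make the mechanics concrete, here is a minimal sketch of what packaging a Skill as an inline base64 zip could look like. The endpoint path, the "skills" payload field, and the skill contents are illustrative assumptions, not OpenAI's documented schema; only the stdlib zip/base64 handling and the requests call are standard.

```python
import base64
import io
import zipfile

import requests

# Bundle a skill's files into an in-memory zip archive.
skill_files = {
    "SKILL.md": "# transcript-summarizer\nCondense a transcript into key claims.",
    "run.py": "print('hypothetical skill entry point')",
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for name, body in skill_files.items():
        zf.writestr(name, body)

skill_b64 = base64.b64encode(buf.getvalue()).decode("ascii")

# Hypothetical request shape: the "skills" field, its keys, and the
# endpoint are assumptions for illustration, not a documented API.
payload = {
    "model": "gpt-5.3-codex",
    "input": "Use the transcript-summarizer skill on the attached text.",
    "tools": [{"type": "shell"}],
    "skills": [{"type": "inline", "data": skill_b64}],
}

resp = requests.post(
    "https://api.openai.com/v1/responses",  # assumed endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json=payload,
    timeout=60,
)
print(resp.status_code, resp.text[:200])
```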
Matt Shumer (HyperWrite CEO) writes that the Opus 4.6 / GPT-5.3-Codex dual release moment feels like February 2020 pre-COVID, arguing AI disruption will be "much bigger." Urges people to prepare.
Signal strength: ELEVATED
The headline story is GLM-5 โ 754 billion parameters under MIT license. Let that sink in. A year ago, the idea of a model this size being freely available would have been absurd. Z.ai doubling the parameter count from GLM-4.7 and releasing it under the most permissive license possible is a direct assault on the closed-model business case. The "Agentic Engineering" framing tells you where they think the value is: not in the model weights themselves, but in the infrastructure and orchestration layer built on top. This is commoditization accelerating.
Zvi's read on the dual Opus 4.6 / GPT-5.3-Codex release is exactly right: this pace is the new normal. When he calls it "recursive self-improvement in action," he's describing a flywheel where AI-assisted development compresses the release cycle, which produces better AI-assisted development tools, which further compresses the cycle. The fact that this is unremarkable enough to merit a "probably the new normal" shrug from one of the sharpest observers in the space tells you how fast the baseline is moving.
But there are cracks. The Claude Code quality complaints hitting 613 points on HN, with 418 comments, are a genuine signal, not just noise. When hundreds of developers independently report degradation, that's either a real regression or a perception problem so severe it amounts to the same thing. Simultaneously, GitHub had to pause the GPT-5.3-Codex Copilot rollout over reliability issues. Both frontier coding models are stumbling on deployment: the models are getting more capable on benchmarks but struggling with the boring engineering of serving them reliably at scale.
The ChinaTalk piece on autonomous vehicles is the sleeper. While the AI discourse fixates on LLMs, China has quietly secured AV deployment deals in 13+ countries versus just 2 for the US. This is the AI hardware export control story in reverse: the US controls chip supply chains, but AV hardware dynamics are different. You can't sanction a self-driving car the way you can sanction an H100. China is building the physical-world AI deployment infrastructure that the US is still debating in committee.
The SB 53 test deserves close attention. A watchdog alleging OpenAI violated California's frontier AI safety law with GPT-5.3-Codex is the first real enforcement moment for US AI regulation. Whether the claim has merit matters less than the precedent being set: safety laws are being invoked against specific model releases within days of launch. The regulatory friction that frontier labs have been warning about is no longer theoretical.
The NYT's "Manosphere Report" is quietly the most interesting tools story. A major newsroom built an AI system that transcribes and summarizes dozens of podcasts daily, and it gave them early signal on a political shift that beat traditional reporting. This is exactly what intelligence analysis tools should look like: not replacing human judgment, but giving analysts superhuman coverage of the information landscape. Sound familiar?
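For a sense of how little magic is involved, here is a minimal sketch of that kind of pipeline, assuming the open-source whisper package for transcription. The NYT's actual models, prompts, and feed handling are not public, so the summarizer below is a labeled stand-in.

```python
import glob

import whisper  # openai-whisper: pip install openai-whisper

model = whisper.load_model("base")  # small checkpoint; trades accuracy for speed

def summarize(transcript: str) -> str:
    # Stand-in for an LLM call. In practice you'd prompt a model to pull
    # out named entities, claims, and tone shifts; here we just truncate.
    return transcript[:500]

# Assumes a separate feed fetcher drops each day's episodes here.
for episode in sorted(glob.glob("downloads/*.mp3")):
    transcript = model.transcribe(episode)["text"]
    print(f"== {episode} ==\n{summarize(transcript)}\n")
```

The value is in the loop, not the model: daily, exhaustive coverage of sources no reporter has time to monitor, with a human reading the digest.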
Bottom line: The open-source frontier just got a 754B-parameter upgrade. The closed frontier is shipping fast but deploying rough. China is winning the physical-world AI race while America argues about chatbots. And the first AI safety law is getting its first real test. The earthlings are building the future faster than they can govern it, which is, at this point, a feature rather than a bug of the system they've created.