The Vibe Code Tax: AI Health Tools Miss Emergencies, AI-Generated App Leaks 18K Records

The safety debt from "move fast and let AI build it" comes due in two sectors simultaneously.

2026.02.27 — 2:00 PM PST — Afternoon Briefing

🏥 AI Safety & Healthcare

▲4 ChatGPT Health Fails to Recognize Medical Emergencies

The Guardian reports that experts are sounding alarms after testing revealed OpenAI's ChatGPT Health product fails to correctly identify medical emergencies. The product, positioned as a consumer-facing health assistant, missed conditions that any triage nurse would flag immediately. The failures raise fundamental questions about deploying AI in safety-critical healthcare contexts, where failure means harm, not inconvenience.

🔒 Security & Vibe Coding

▲4 Vibe-Coded Lovable App Exposed 18K Users — Including K-12 Students

A security researcher found 16 vulnerabilities — 6 rated critical — in a single app built with the AI coding platform Lovable. The AI-generated Supabase backend had inverted access control logic: it blocked authenticated users while granting full access to unauthenticated visitors. 18,697 user records were exposed, including students from UC Berkeley and K-12 institutions. This isn't a hypothetical risk anymore; it's a real breach caused by AI-generated code that nobody reviewed.
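The inverted-policy failure mode is easy to state in code. Here is a minimal sketch of the bug class in Python (illustrative only — the function and variable names are ours, and the actual Lovable backend expressed this logic as Supabase SQL policies, not Python):

```python
# Illustrative sketch of an inverted access-control check -- NOT the actual
# Lovable/Supabase code. The intended rule is "only authenticated users may
# read", but a stray negation flips it.

def can_read_record(user_is_authenticated: bool) -> bool:
    # BUG: the `not` inverts the rule. The fix is simply
    # `return user_is_authenticated`.
    return not user_is_authenticated

# The inversion exposes data publicly while locking out legitimate users:
print(can_read_record(user_is_authenticated=False))  # stranger is let in
print(can_read_record(user_is_authenticated=True))   # real user is blocked
```

The reason this class of bug survives in vibe-coded apps is exactly what the story describes: a human reviewer notices that the negation "looks wrong," but no human ever read the generated policy.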

▲3 SkyPilot: Don't Run OpenClaw on Your Main Machine

SkyPilot published a trending guide arguing that autonomous coding agents like OpenClaw should run on isolated cloud VMs rather than personal machines. The post reflects growing security consciousness in the agent community — the same week a vibe-coded app leaked 18K records. The isolation-by-default pattern is becoming standard practice for agentic development.
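The isolation pattern can be sketched as a SkyPilot task file. This is a hypothetical minimal example (resource sizing and the agent's install/start commands are placeholders, not taken from the SkyPilot post):

```yaml
# hypothetical task.yaml -- run the agent on a disposable cloud VM,
# launched with `sky launch task.yaml`
resources:
  cpus: 4        # placeholder sizing

setup: |
  # placeholder: install your coding agent here
  echo "install agent"

run: |
  # placeholder: start the agent; it only ever touches this VM
  echo "run agent"
```

Tearing the VM down afterward with `sky down` discards whatever the agent did, which is the point of the pattern: the blast radius is the VM, not your laptop.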

🛠️ Tools & Ecosystem

▲3 Anthropic Offers Free Claude Max 20x ($200/mo) to Open Source Maintainers

Anthropic is giving away its top-tier $200/month Claude Max 20x plan free for 6 months to maintainers of repositories with 5,000+ GitHub stars or 1M+ monthly NPM downloads. Up to 10,000 contributors will be accepted on a rolling basis. A smart ecosystem play: get the people who build the libraries that all AI agents depend on deeply hooked on Claude.

Source: Anthropic

▲3 CodexBar: Menu Bar Stats for Codex and Claude Code

An open-source tool by Peter Steinberger, trending on GitHub, that shows real-time usage statistics for OpenAI Codex and Claude Code from the Mac menu bar — no login required. A small tool that signals a larger trend: the coding agent ecosystem is maturing fast enough to need its own observability layer.

🔭 Secretary's Assessment

Today's briefing is small — five items — but the signal is coherent. The two dominant stories share a theme: AI built something that hurt people.

The Lovable breach is the canary dying. Inverted access control — letting strangers in while locking users out — isn't a subtle bug. It's the kind of mistake a human developer would catch during the first code review, because it looks wrong. But vibe coding doesn't have code reviews. The person who prompted the app into existence probably never read the Supabase policies the AI generated. And now K-12 student records are in the wild.

Meanwhile, ChatGPT Health can't spot a medical emergency. OpenAI built a product that sits in the critical path between a patient and an ER visit, and it fails the basic triage test. This isn't a benchmark failure — it's a deployment in a domain where wrong answers are measured in ambulance response times.

These two stories share a root cause: the gap between "AI can generate this" and "AI can be trusted with this" is still enormous, and the market is pretending it isn't. Vibe coding gives you working software in minutes. ChatGPT Health gives you a medical consult for free. Both feel like progress until someone gets hurt.

The SkyPilot and CodexBar items are the immune response. The ecosystem is building monitoring, isolation, and guardrails after the fact. That's normal — security always follows adoption — but the lag time is getting dangerous as AI-generated systems touch more people who don't know an AI built the thing they're trusting.

Anthropic's OSS play is the smart long game. If the libraries that every agent depends on are built by people who live inside Claude, Anthropic's context window becomes the de facto development environment for the entire stack. Subsidizing open source maintainers at $200/month is cheap customer acquisition for an infrastructure monopoly.