Your MVP's security is a bamboo airport
After WWII, Pacific islanders built bamboo control towers and coconut-shell headphones, waiting for cargo planes that had stopped coming. That's roughly the state of security in most AI-coded MVPs.
On a Pacific island after the war, a man sat in a wooden hut and waited for airplanes that never came.
He was wearing headphones carved from coconut shell. Two pieces of bamboo were strapped to his head as antennas. The hut was a control tower. Outside, a runway had been cut from the jungle, lit at night by torches, with a life-sized wooden airplane parked at one end. Other islanders, somewhere nearby, were marching in formation carrying bamboo rifles. Everything was in order.
Anthropologists who arrived in Melanesia in the years after the Second World War found dozens of these scenes. American troops had landed in 1942 with cargo planes that spilled out Spam, medicine, jeeps, and Hershey bars, and in 1945 they left. The cargo stopped. So the islanders, reasonably enough, tried to do what they had seen the soldiers do. They had observed extremely carefully. The form was perfect.
In 1974, Richard Feynman explained the phenomenon in his Caltech commencement address, the speech that gave us the term "cargo cult science." His summary: the form is perfect, but it doesn't work. No planes land.
I bring this up because most of the apps I look at lately have bamboo airports for security.
Specifically: MVPs that have the directory structure, the package.json, the GitHub badges, the Vercel deploy, the README with emojis, and that, once you open them up, are missing the actual security parts. The form is perfect. The cargo isn't coming.
The README is real. The auth isn't.
Here is the recurring inventory. The auth check that lives only on the frontend. The isAdmin boolean read from localStorage. The .env.production file committed because the .gitignore rule said .env and not .env*. The SQL query built with string interpolation because the AI assistant was friendlier that way. The API route that works without an auth header because someone forgot a middleware. The CORS policy set to * for "development" and never changed. The LLM endpoint that any logged-out user can call as many times as they want, in a loop, against your OpenAI bill.
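To make the first two items concrete, here is a composite sketch of the pattern. The names and data are hypothetical, not from any real codebase; the client gate looks like security, and the server route it guards has none:

```typescript
// sketch.ts: the frontend-only auth pattern, reduced to its shape.
// Everything here is invented for illustration.
import express from "express";

const app = express();

// On the client, the "protection" is something like:
//   if (localStorage.getItem("isAdmin") !== "true") return <AccessDenied />;
// An attacker never executes that line, because they never load the client.

app.get("/api/admin/users", (_req, res) => {
  // No session check, no token check. The route trusts that only the
  // hidden admin UI will ever call it.
  res.json([{ id: 1, email: "alice@example.com" }]);
});

app.listen(3000);
// curl http://localhost:3000/api/admin/users  -> the full list, no login.
```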
This pattern is now measured, not just my impression. Veracode tested over a hundred LLMs on coding tasks across 2025 and into 2026 and found that about 45% of AI-generated code samples introduce OWASP Top 10 vulnerabilities. A separate analysis put the vulnerability density of AI-generated code at roughly 2.74 times that of human-written code. Georgia Tech's Vibe Security Radar tracked more CVEs from AI coding tools in March 2026 alone than it had across all of 2025.
These numbers don't say AI coding is bad or that anyone is bad for using it. They say that a fast, helpful, eager junior with no security background writes a lot of code, and that the code is missing exactly the things you would expect that junior to forget.
Even Anthropic shipped a bamboo airport
In case this sounds theoretical, the company that makes one of those coding tools shipped one to npm in March.
On March 31, 2026, Anthropic released Claude Code version 2.1.88 to the npm registry. Bundled inside the package was a 59.8 MB source map file that nobody intended to publish. A security researcher named Chaofan Shou noticed within hours. The map pointed at roughly 512,000 lines of unobfuscated TypeScript across 1,906 files — essentially the complete source of their flagship product. A community mirror on GitHub picked up tens of thousands of stars before the day was out.
The fix was one line in .npmignore. Anthropic's statement called it a release packaging issue caused by human error.
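I don't know what the exact line was, but a fix for a source-map leak usually has roughly this shape:

```
# .npmignore: my guess at the shape of the fix, not Anthropic's actual diff
**/*.map
```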
I'm not piling on. Claude Code is famously built largely by Claude Code; its own lead engineer has said publicly that essentially all of his contributions were written by the tool. The published package was beautiful. Minified, versioned, signed. Every check in the pipeline green. The form was perfect. The plane came down anyway. If this happens to the people who build the tool, it can happen to the 12-person team that bought a Cursor seat last week.
What I wanted to put here, and what I'm putting instead
I'm going to step out from behind the curtain for a second.
Originally, the bottom of this post had a section called "What to do this week." Three bullets. Run gitleaks. Check that auth happens on the server. Put a billing alert on your LLM endpoint.
Then I re-read the post. I'd just spent eight hundred words explaining cargo cults — form without function, ritual without understanding — and I was about to hand you three rituals. Run this command. Click this checkbox. Set this alert. You would have done the three things and felt safer, and I would have built you a small bamboo airport with my name on it.
So I'm doing something different here, and probably in every post going forward. Instead of what to check, one thing worth understanding. The point is to put a model in your head — something you can use on code I haven't seen. This is the first one. There'll be others, in posts I haven't written yet.
How attackers actually look at your app
Most founders, when they think about security, picture their own app the way a customer sees it. Sign-up flow, login page, dashboard, settings. Could someone log in as the wrong person? Could someone see the wrong dashboard? All real questions. None of them are the question an attacker is asking.
An attacker does not use your UI. They open your site in a browser once, to figure out what it is, and then they close it and never look at it again. The browser was for orientation. The actual work happens in a terminal, hitting your API directly, ignoring every form and button you built.
This is the single biggest gap between how a founder thinks about their app and how it gets broken. Your UI is for the legitimate users — the ones who play by the rules of the experience you designed. Attackers were never going to play. The login form is a polite suggestion to them. They go straight to POST /api/login and try ten thousand passwords without ever loading the page that has the form on it. The admin dashboard, the one you carefully hid behind a feature flag and a special URL, is irrelevant — they're calling GET /api/admin/users directly, and either it returns the data or it doesn't. The "Are you sure?" confirmation modal you spent a day designing? It does not exist in their world.
Once you see this, a lot of the inventory at the top of this post stops being a list of mistakes and starts being a single mistake repeated. The frontend isAdmin check fails because they never load your frontend. The .env.production in your repo fails because they're not browsing your site — they're searching GitHub for sk-. The unprotected /chat endpoint fails because they don't need your chat UI; they need your URL and a Python script. Everything you built for the user is invisible to them. Everything you didn't think to build behind it is the whole game.
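If "a script" sounds abstract, here is roughly what one looks like, in TypeScript rather than Python, with a made-up URL. This is the attacker's entire toolchain for that unprotected endpoint:

```typescript
// loop.ts: the whole attack on an unguarded LLM endpoint.
// The URL and route are invented; the shape is what matters.
const BASE = "https://your-mvp.example.com";

async function main() {
  for (let i = 0; i < 10_000; i++) {
    // No browser, no chat UI, no login. Each request runs against
    // your OpenAI bill, not the attacker's.
    const res = await fetch(`${BASE}/api/chat`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt: "write me a 2,000-word essay" }),
    });
    console.log(i, res.status); // 200, 200, 200...
  }
}

main();
```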
Your job, when you're looking at AI-generated code, is to read it the way an attacker would. Not does this work when used correctly, but does this work when used incorrectly, on purpose, by someone who has read all of it and is trying to find the one endpoint that forgot the middleware. That is a different reading.
One question to carry with you: don't ask "can a user do this in the UI?" Ask "what happens if someone calls this endpoint directly?"
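And to show what a good answer to that question looks like, a minimal sketch, assuming Express and a session helper I'm inventing for the example:

```typescript
// guard.ts: auth that runs on the server, in front of every /api route.
// getSession is a hypothetical stand-in for your real auth library.
import express from "express";

async function getSession(token?: string): Promise<{ userId: string } | null> {
  return token === "valid-token" ? { userId: "u_1" } : null; // stub
}

const app = express();

app.use("/api", async (req, res, next) => {
  // This line runs whether the request came from your UI or from curl.
  const session = await getSession(req.headers.authorization);
  if (!session) return res.status(401).json({ error: "unauthorized" });
  next();
});

app.get("/api/admin/users", (_req, res) => {
  // Only reachable past the middleware. "Calling it directly" now has
  // a boring answer: 401.
  res.json([{ id: 1 }]);
});

app.listen(3000);
```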
One last thing
The man in the wooden hut wasn't lazy. He was building, from the materials he had, the thing he had seen with his own eyes. The headphones were careful work. The runway was a real runway. He was doing his best, and as far as he could possibly have known, his best had a real chance of bringing the planes back: he had every reason to think the airport was the thing that summoned them.
Most vibe-coded apps are careful work too. They look exactly like the thing. They have the badges, the README, the package.json, the deploy pipeline. They're missing whatever part it is you only notice when something actually tries to land.