Top 5 AI Security Vulnerabilities in Vibe-Coded Apps (2026)
We scanned 17 production apps in our own portfolio, 104 XBOW benchmark scenarios, and hundreds of Vercel previews posted by vibe coders asking for feedback. The same five vulnerabilities dominate every sample. These are not the vulnerabilities security researchers like to talk about. They are the boring, predictable, devastating ones that AI pair programmers introduce by default.
Ranked by how quickly each one leads to full data compromise, from fastest to slowest. If your app has the first two, fix them today. The rest can wait until tomorrow.
1. Exposed service role keys in the JavaScript bundle
This is the #1 vulnerability in every vibe-coded app we scan, and it is a single-request exploit. AI pair programmers default to prefixing environment variables with NEXT_PUBLIC_ whenever they need them in client-side code. When that prefix is applied to a Supabase service role key, a Stripe secret key, or an OpenAI API key, the value gets embedded directly into the JavaScript bundle that every visitor downloads. Open DevTools on the deployed site, search for sk_live or service_role, and the key is right there.
Across our 17-app portfolio scan, 12 of 17 apps had at least one secret exposed in the bundle. The fintech MVP that scored a CVSS 9.8 critical exposed a Stripe secret key and a Supabase service role key in the same build. With the Supabase key, an attacker bypasses every RLS policy and reads every row of every table. With the Stripe key, an attacker issues refunds, creates charges, and reads the full customer database.
Fix: Any secret that grants server-side privileges never gets NEXT_PUBLIC_. Move the key to a server-only environment variable and call it from an API route. Rotate the exposed key immediately — if the app has been live for more than a day, assume it is compromised. Every AI coding tool will follow this pattern if you specify it in your rules file: “never prefix secret keys with NEXT_PUBLIC_; always call sensitive APIs from a server-side route.”
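In a Next.js App Router project, the server-side pattern looks like the following sketch. The route path, model name, and env var name are illustrative, not prescriptive:

```typescript
// app/api/complete/route.ts — the key stays on the server because
// OPENAI_API_KEY has no NEXT_PUBLIC_ prefix, so it is never bundled.
export async function POST(req: Request): Promise<Response> {
  const { prompt } = await req.json();
  if (typeof prompt !== "string" || prompt.length === 0) {
    return new Response("Bad request", { status: 400 });
  }
  // The secret is read from server-only process.env at request time.
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  // Relay the upstream response; the key never reaches the browser.
  return new Response(upstream.body, { status: upstream.status });
}
```

The client calls `fetch("/api/complete", ...)` and only ever sees the proxied response.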
2. Supabase Row-Level Security that grants access to everyone
Supabase is the default database for vibe-coded apps because it is the default database for Lovable, Bolt, and half of Cursor's templates. Row-Level Security is Supabase's core protection mechanism. AI tools generate RLS policies confidently — and usually incorrectly. We see three failure patterns constantly:
- USING (true) policies — ostensibly a “service role” policy, but true applies to every role, including anon. The fleet security audit across 16 projects found this pattern on nine of them.
- UPDATE policies with no WITH CHECK — lets authenticated users modify their own rows but also change their own role from user to admin, because WITH CHECK is what validates the new values.
- RLS enabled with no policies — blocks everything, so the developer either disables RLS entirely (most common) or “temporarily” adds a USING (true) policy that never gets replaced.
The fix looks like textbook RLS: explicit policies per role (anon, authenticated, service_role), ownership checks (user_id = auth.uid()), and WITH CHECK clauses on every INSERT and UPDATE policy. Our dedicated guide on this mistake walks through the exact SQL for each case. Every Supabase-based app should be tested with the anon key against every table, because that is what an attacker will do.
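That anon-key test can be scripted. A minimal sketch, assuming the @supabase/supabase-js client (anything with the same .from().select().limit() shape works, which is why the type below is structural) and a hand-maintained table list:

```typescript
// Probe each table the way an attacker would: with only the public anon key.
// Pass in a client created via createClient(SUPABASE_URL, SUPABASE_ANON_KEY);
// the structural type keeps this sketch self-contained.
type AnonClient = {
  from: (table: string) => {
    select: (cols: string) => {
      limit: (n: number) => Promise<{ data: unknown[] | null; error: unknown }>;
    };
  };
};

export async function probeAnonAccess(
  client: AnonClient,
  tables: string[]
): Promise<string[]> {
  const leaky: string[] = [];
  for (const table of tables) {
    const { data, error } = await client.from(table).select("*").limit(1);
    // Any table that returns rows to the anon role needs a policy review.
    if (!error && data && data.length > 0) leaky.push(table);
  }
  return leaky;
}
```

Run it against every table in your schema; an empty result list is the goal.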
3. Authentication that can be bypassed in one request
The XBOW benchmark suite contains a scenario — XBEN-052 — where a PHP login form has a hidden isAdmin=false field. The server checks whether isAdmin==true without validating credentials. Posting username=x&password=x&isAdmin=true grants admin access in a single request. VibeArmor solves XBEN-052 on the first try.
This kind of thing sounds implausible until you start scanning AI-generated apps. We have seen the exact same pattern in Cursor-built MVPs: hidden form fields that control authorization, client-side auth checks with no server-side equivalent, JWT “verification” that accepts any token that parses as valid JSON, and password reset endpoints that let you reset any account by ID without verifying that the requesting user owns it. One app we scanned let us become any user in the database by POSTing { "user_id": 1 } to /api/session/create.
The root cause is the same pattern AI tools repeat constantly: trusting data sent from the client. The fix is equally repetitive: server-side validation of every single permission decision, JWT signatures checked against the expected signing key, user identity derived from the verified session rather than from the request body.
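As a concrete example of the JWT half of that fix, here is a minimal HS256 verifier built on Node's built-in crypto. It is a sketch (no expiry or algorithm-header checks shown); in production, use a maintained library such as jose:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Returns the claims only if the signature matches the server's signing key;
// returns null for anything forged, tampered, or malformed.
export function verifyJwtHS256(
  token: string,
  secret: string
): Record<string, unknown> | null {
  const parts = token.split(".");
  if (parts.length !== 3) return null;
  const [header, payload, signature] = parts;
  // Recompute the signature with the key the *server* expects.
  const expected = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  // Constant-time comparison; any mismatch means the token is rejected.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}
```

Contrast this with the broken pattern above: parsing the payload first and trusting whatever it says, with the signature never checked.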
4. No rate limiting on login and sensitive endpoints
Every vibe-coded app we scan accepts unlimited login attempts. Ship an app on Lovable, open the /api/auth/login endpoint, and fire 10,000 password attempts at it per minute from a single IP. There is no Cloudflare rule in front of it. There is no Upstash rate limit in the code. Nothing stops the attacker.
Rate limiting is not glamorous. It does not appear in the demo video. AI tools only add it when you specifically ask, so they do not add it. Across our portfolio, 15 of 17 apps shipped with no rate limiting on any sensitive endpoint. The two that had it used it on /api/contact — useful for spam prevention, useless against credential stuffing on the login route.
The fix is a ten-minute job with Upstash or Vercel's built-in rate limiter:
```typescript
// app/api/auth/login/route.ts
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(5, "60 s"),
});

export async function POST(req: Request) {
  // x-forwarded-for may hold a comma-separated chain; take the client IP.
  const ip = req.headers.get("x-forwarded-for")?.split(",")[0].trim() ?? "anonymous";
  const { success } = await ratelimit.limit(ip);
  if (!success) return new Response("Too many attempts", { status: 429 });
  // ...normal login logic
}
```
The same helper should protect the password reset endpoint, any endpoint that sends email or SMS, any endpoint that calls OpenAI or another paid API, and any endpoint that generates files. Five attempts per minute on login and sixty requests per minute on general APIs are reasonable baselines.
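To make those numbers concrete, here is a toy in-memory sliding-window limiter. It only works within a single server process (use Redis-backed limiting such as Upstash for anything deployed across multiple instances), and every name in it is ours:

```typescript
// Returns a function that answers "is this request allowed?" for a key
// (e.g. an IP), permitting at most maxHits per rolling windowMs.
export function makeLimiter(maxHits: number, windowMs: number) {
  const hits = new Map<string, number[]>();
  return (key: string, now: number = Date.now()): boolean => {
    // Keep only timestamps still inside the window.
    const recent = (hits.get(key) ?? []).filter((t) => now - t < windowMs);
    if (recent.length >= maxHits) {
      hits.set(key, recent);
      return false; // over the limit: reject
    }
    recent.push(now);
    hits.set(key, recent);
    return true; // under the limit: allow
  };
}

// The baselines from the text: 5/min for login, 60/min for general APIs.
export const allowLogin = makeLimiter(5, 60_000);
export const allowApi = makeLimiter(60, 60_000);
```

The sliding window is why the sixth login attempt inside a minute fails even if the first five were spread out.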
5. Cross-user data access through predictable IDs (IDOR)
Change the ID in the URL, get someone else's data. This is Insecure Direct Object Reference and it is epidemic in AI-generated API routes. The pattern in the code is always the same:
```typescript
// app/api/users/[id]/route.ts
export async function GET(_: Request, { params }: { params: { id: string } }) {
  const user = await db.user.findUnique({ where: { id: params.id } });
  return Response.json(user);
}
```
There is no check that params.id matches the authenticated user's ID. An attacker changes the URL from /api/users/123 to /api/users/124 and receives someone else's profile, address, payment details, or transaction history. On the fintech app that scored a CVSS 9.8, iterating through user IDs from 1 to 10,000 returned full banking data for every account.
The fix takes a few lines:
```typescript
const { data: { user: current } } = await supabase.auth.getUser();
if (!current || params.id !== current.id) {
  return Response.json({ error: "Forbidden" }, { status: 403 });
}
```
Apply this to every endpoint that accepts an ID from the URL or request body. For admin endpoints, swap the ownership check for a role check. For resources that can be shared between users (like organizations), check the join table. The rule is: identity comes from the session, authorization comes from an explicit server-side check, never from the client.
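For the shared-resource case, the check moves from "is this my row" to "am I a member". A minimal sketch with an injected membership query; the status codes and the isMember signature are our own conventions, not from any framework:

```typescript
// The membership query wraps whatever join-table lookup your ORM uses,
// e.g. SELECT 1 FROM org_members WHERE user_id = $1 AND org_id = $2.
type MembershipQuery = (userId: string, orgId: string) => Promise<boolean>;

export async function authorizeOrgAccess(
  sessionUserId: string | null, // identity comes from the verified session
  orgId: string,                // resource ID comes from the URL (untrusted)
  isMember: MembershipQuery
): Promise<number> {
  if (!sessionUserId) return 401;                          // not logged in
  if (!(await isMember(sessionUserId, orgId))) return 403; // explicit server-side check
  return 200;                                              // allowed
}
```

Because the check is a function of the session and the join table, nothing the client sends can widen its own access.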
The pattern across all five
Every vulnerability on this list traces to a single antipattern: trusting data the client controls. Secrets exposed because the client is expected to have them. RLS broken because the policy does not enforce what the client should be allowed to read. Auth bypassed because the server reads isAdmin from the form. No rate limiting because the client is assumed to be well-behaved. IDOR because the ID in the URL is assumed to match the user's identity.
An AI pair programmer has no intuition about trust boundaries; every variable looks the same in the code. The human reviewing the output has to supply that boundary, and until something actually checks for it, the boundary does not exist. That is what VibeArmor exists to provide — a cheap, fast, external check that finds exactly these five issues in three minutes and hands back copy-paste fixes you can drop into Cursor.
Frequently asked questions
Why only five? What about XSS, CSRF, SSRF?
Those exist in vibe-coded apps too, and we scan for all of them. But they are not top-five. Cross-site scripting appears in roughly 20% of the apps we scan, CSRF in about 10%, SSRF in single digits. The five vulnerabilities in this list appear in 60-80% of scans each. Prioritizing the top five by frequency is honest triage, not dismissal of the others.
How do these compare to the OWASP Top 10?
They map cleanly. Exposed secrets = A02 (Cryptographic Failures). Broken RLS = A01 (Broken Access Control) and A05 (Security Misconfiguration). Auth bypass = A07 (Identification and Authentication Failures). Missing rate limiting = A04 (Insecure Design). IDOR = A01 (Broken Access Control). We wrote a companion piece on OWASP Top 10 for AI-generated code that maps every category to specific AI-era patterns.
How did you validate this list?
Three data sources. First, our scan of 17 production apps in the Affixed AI portfolio — the tools we built for ourselves and our clients, which produced 64 findings. Second, the XBOW benchmark suite of 104 realistic vulnerability scenarios, where VibeArmor's agent team solves 104 out of 104 (100%). Third, an informal pool of public Vercel previews and staging URLs posted on social channels with “roast my app” prompts. The top five hold across all three samples with remarkably little variation.
My app is in Firebase, not Supabase. Does this still apply?
Yes. Substitute Firebase Security Rules for RLS. The failure pattern is identical — AI tools generate permissive rules like allow read: if true; or allow write: if request.auth != null; that do not actually restrict access meaningfully. The other four vulnerabilities on this list are framework-agnostic.
Will a scan catch all five of these?
Yes. A VibeArmor free scan tests exposed secrets, RLS via cross-user data access probes, authentication bypass, rate limiting, and IDOR on every enumerated endpoint. The free tier gives you the top five findings; the Security Report ($499 one-time) adds continuous monitoring and Slack alerts when any of them regress on a future deploy.
What changed in 2026 versus 2025?
The list itself has not changed much. What has changed is the frequency: in our 2025 scan data, exposed secrets appeared in roughly 45% of apps. In 2026 it is past 70%. AI coding tools accelerated and developer volume scaled up, but the guardrails on AI-generated code did not keep pace. Expect this list to remain stable through 2027 unless the major AI IDEs ship built-in security reviewers.
Related reading
- The 7 Most Common Vulnerabilities in AI-Generated Code
45-62% of AI-generated code contains security flaws. These are the 7 specific vulnerabilities we find most often in apps...
- Vibe Coding Security Checklist: 15 Things to Check Before You Ship
A prioritized checklist of security issues we find in 70%+ of AI-built apps. Organized by severity so you fix what matte...
- The Supabase RLS Mistake That Could Expose Your Users' Data
USING(true) on a service-role policy sounds right but grants access to every role, including anon. Here are the 3 most c...
- VibeArmor — Scan your app free
- Benchmarks — 104/104 XBOW, Stripe A+
- Pricing — From $99
Scan your app free
Paste a URL, get a letter grade and Cursor-ready fixes in 3 minutes. No signup required.
Start Free Scan