How to Fix XSS in AI-Generated Code: A Practical Guide for 2026
Cross-site scripting (XSS) is the #3 most common vulnerability in AI-generated code. It shows up because AI tools reproduce patterns from tutorials where user input gets rendered directly into the page. This guide walks through how to find XSS in your AI-generated app and how to fix it without rewriting everything.
By the end, you will know the four places AI tools introduce XSS, how to test for each, and the exact prompt to give back to Cursor/Lovable/Bolt to fix the issue properly.
What XSS Actually Is (90 Seconds)
XSS happens when your app takes user input and renders it as HTML or JavaScript. If a user types <script>alert(1)</script> into a comment box and your app displays it without escaping, the browser executes the script instead of showing the text.
In practice, attackers do not use alert(1). They use scripts that read the victim's cookies, harvest form input, or send data to a collection server. Any authenticated user who views the attacker's content becomes compromised. The damage scales with how many users view poisoned content.
XSS comes in three flavors:
- Stored XSS: Attacker's script is saved in your database and rendered to every user who views it. Worst case.
- Reflected XSS: Attacker's script is in a URL parameter and runs when a victim clicks the link. Common in search pages.
- DOM XSS: Attacker's input gets written to the DOM by client-side JavaScript. Common in SPA frameworks when developers bypass the framework's built-in escaping.
Where AI Tools Introduce XSS
We have scanned thousands of AI-built apps. XSS shows up in four specific patterns, in order of how often they cause real exploits:
Pattern 1: dangerouslySetInnerHTML with User Data
React escapes content by default. The only way to break that protection is dangerouslySetInnerHTML. When AI tools need to render HTML (markdown previews, rich text editors, email preview panes) they reach for this escape hatch and often forget to sanitize.
// VULNERABLE — AI often generates this
<div dangerouslySetInnerHTML={{ __html: post.content }} />
// SAFE — sanitize first
import DOMPurify from 'isomorphic-dompurify';
<div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(post.content) }} />
If post.content came from a user (any user, not just admins), it needs sanitization. Rules like "only admins can post rich text" are not defenses — attackers compromise admin accounts or abuse insider access.
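When you need more control than the DOMPurify default, pass an explicit allowlist so only known-safe tags and attributes survive sanitization. A sketch (the wrapper name and tag list are illustrative choices; ALLOWED_TAGS and ALLOWED_ATTR are real DOMPurify config options) — mostly a configuration fragment, so adjust the lists to your own rich-text needs:

```javascript
import DOMPurify from 'isomorphic-dompurify';

// Illustrative allowlist: typical markdown output, nothing interactive.
const SAFE_HTML_CONFIG = {
  ALLOWED_TAGS: ['p', 'br', 'strong', 'em', 'ul', 'ol', 'li', 'a', 'code', 'pre', 'blockquote'],
  ALLOWED_ATTR: ['href'], // no style, no class, no event handlers
};

// Hypothetical helper — run every piece of user HTML through one place.
export function sanitizeUserHtml(dirty) {
  return DOMPurify.sanitize(dirty, SAFE_HTML_CONFIG);
}
```

Centralizing sanitization in one helper also gives your AI tool a named function to reuse, instead of re-deciding the policy at every call site.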
Pattern 2: Rendering URL Parameters Without Escaping
Search pages are the classic example. AI tools generate code like:
// Two renderings of the same query parameter — only the second is vulnerable:
const query = searchParams.get('q');
return <h1>Results for: {query}</h1>; // React escapes this, SAFE
return <div dangerouslySetInnerHTML={{ __html: `Results for: ${query}` }} />; // UNSAFE
In React specifically, {query} is escaped automatically — that is safe. The exploit path is when AI tools build HTML strings manually (for emails, PDFs, server-rendered markdown) and skip the escape.
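When you do have to build HTML strings by hand, escape the five significant characters before interpolation. A minimal sketch (the helper name is my own; any equivalent escaper works):

```javascript
// Minimal HTML escaper for manually built strings (emails, PDFs, SSR).
// Order matters: escape & first so earlier replacements are not re-escaped.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Safe manual string building:
const query = '<img src=x onerror=alert(1)>';
const html = `Results for: ${escapeHtml(query)}`;
```

This is text escaping, not sanitization — use it when the input should never be HTML; use DOMPurify when the input legitimately contains HTML.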
Pattern 3: document.write and innerHTML Assignment
Vanilla JavaScript patterns that bypass framework protections. AI tools sometimes reach for these when generating "simple" utilities:
// VULNERABLE
document.getElementById('greeting').innerHTML = 'Hello ' + userName;
document.write('<h1>' + title + '</h1>');
// SAFE
document.getElementById('greeting').textContent = 'Hello ' + userName;
textContent treats the input as text, not HTML. Use it whenever you do not actually need HTML features.
Pattern 4: Unescaped Server-Side Template Injection
When AI tools generate server-side templates (Handlebars, EJS, or custom), they sometimes use the raw-output syntax instead of the escaping syntax:
// VULNERABLE Handlebars
<p>{{{userComment}}}</p> // Triple braces = raw HTML
// SAFE
<p>{{userComment}}</p> // Double braces = escaped
Similar distinction in EJS (<%- %> is raw, <%= %> is escaped), Twig, and other engines.
How to Test Your App for XSS
Step 1: Find Every Input Field
Make a list of every place users can enter data that other users will see: comments, posts, profile bios, messages, review text, search queries that get displayed, filenames on file uploads.
Step 2: Inject the Canonical Test Payload
In every input field, try: <img src=x onerror=alert('xss')>
This payload is more reliable than a bare <script> tag: scripts inserted into an already-parsed page (via innerHTML and similar DOM sinks) do not execute, and a CSP may block inline scripts, but onerror event handlers fire reliably. If you see an alert box when viewing the content, you have XSS.
Step 3: Check the Rendered HTML
Even if no alert fires, view the page source. If your test payload appears verbatim as <img src=x onerror=alert('xss')> rather than escaped to &lt;img src=x onerror=alert('xss')&gt;, you have stored XSS waiting for the right browser context.
Step 4: Test URL Parameters
Add ?q=<img src=x onerror=alert(1)> to every search or filter URL. Check if the parameter gets rendered.
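The raw-vs-escaped check from Steps 3 and 4 is easy to automate once you have the response body. A small sketch (the function name is mine; it classifies how a payload came back):

```javascript
// Given a page's HTML and the payload you injected, report whether the
// payload came back raw (exploitable), entity-escaped (safe), or absent.
function classifyReflection(html, payload) {
  if (html.includes(payload)) return 'raw';
  const escaped = payload
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
  if (html.includes(escaped)) return 'escaped';
  return 'absent';
}
```

Feed it the body of each page that renders user content; any 'raw' result is a finding.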
Step 5: Run a Scanner
Manual testing catches obvious cases. Automated scanning catches the patterns you did not think to test. VibeArmor's scanner tests every reachable input field against 40+ XSS payload variations, including filter bypass techniques that common input sanitizers miss.
The Fixes by Stack
React / Next.js
Default behavior is safe — {variable} escapes automatically. The only vulnerabilities come from:
- dangerouslySetInnerHTML — wrap all inputs in DOMPurify.sanitize()
- href={userUrl} — validate the protocol is http:, https:, or mailto:. Block javascript:.
- srcSet or src from user input — same protocol validation
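The protocol validation can be sketched with the standard URL parser rather than string matching (the helper name and base URL are illustrative; the base makes relative paths like /profile parse and pass):

```javascript
// Allowlist check for user-supplied URLs before placing them in href/src.
const SAFE_PROTOCOLS = new Set(['http:', 'https:', 'mailto:']);

function isSafeUrl(input) {
  try {
    // The base URL lets relative paths resolve (and inherit https:).
    return SAFE_PROTOCOLS.has(new URL(input, 'https://example.com').protocol);
  } catch {
    return false; // unparseable input -> reject
  }
}
```

Parsing with URL defeats tricks like whitespace or mixed-case "JaVaScRiPt:" that naive startsWith checks miss, because the parser normalizes the protocol.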
Vue
Same principles: {{variable}} is safe, v-html is the danger zone. Sanitize any content passed to v-html.
Vanilla JavaScript
Prefer textContent over innerHTML. If you must use innerHTML, sanitize first with DOMPurify. Never use document.write with user input.
Server-Side Templates
Use the escaping syntax ({{var}}, <%= var %>) by default. Only use raw output for content you fully control (like hardcoded HTML strings).
Defense in Depth: Content-Security-Policy
Even if your app has an XSS hole, CSP can stop the exploit. A strict CSP blocks inline scripts and restricts where scripts can load from. An attacker who can inject HTML into your page cannot actually run JavaScript if CSP is configured well.
Minimum CSP to block common XSS:
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'self';
Add this header via middleware. You will need to adjust for your actual dependencies (Supabase, Stripe, analytics), but start strict and loosen only where necessary.
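One way to keep the policy maintainable is to build the header value from a directives map in your middleware, so loosening one directive later is a one-line change. A sketch (the builder function is my own; the directive names are standard CSP):

```javascript
// Build a CSP header value from a directives map.
function buildCsp(directives) {
  return Object.entries(directives)
    .map(([name, sources]) => `${name} ${sources.join(' ')}`)
    .join('; ');
}

// The strict baseline from above:
const csp = buildCsp({
  'default-src': ["'self'"],
  'script-src': ["'self'"],
  'object-src': ["'none'"],
  'base-uri': ["'self'"],
});
// Then, in your middleware:
// res.setHeader('Content-Security-Policy', csp);
```

When you add a dependency (Supabase, Stripe, analytics), extend the relevant source list instead of rewriting the header string.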
The Fix Prompt for Your AI Tool
When you hand a finding back to Cursor, Lovable, or Bolt, a vague "fix the XSS" prompt often produces incomplete fixes. Use this instead:
Find every place user-controlled data is rendered as HTML. This includes:
- dangerouslySetInnerHTML usage
- innerHTML and document.write calls
- Template engine raw-output syntax (triple braces, <%- %>, etc.)
- href, src, and srcSet attributes that accept user input
For each instance:
1. If HTML rendering is not actually needed, switch to text rendering
2. If HTML rendering IS needed, wrap the input in DOMPurify.sanitize() with an allowlist of safe tags
3. For URL attributes, validate the protocol is http:, https:, or mailto:
Also add a Content-Security-Policy header that blocks inline scripts.
After the fix, walk me through each change and explain why it is safe.
The last sentence is important. Forcing the AI to explain the fix catches cases where it sanitized the wrong thing or missed an input path.
Verify the Fix Landed
Re-run your test payloads against every input field. Confirm the scan finding moves from "XSS confirmed" to "not detected." Check the HTML source to ensure payloads are now escaped (&lt;img...) rather than raw (<img...).
Run a fresh VibeArmor scan. The scanner includes both the obvious XSS patterns and filter bypass variations that test whether your sanitizer is complete or just cosmetic.
Scan your app for XSS and 119 other checks →
XSS Is Not Dead, But It Is Solvable
In modern React/Vue/Svelte apps, XSS is genuinely easier to avoid than it was in the PHP era. The framework defaults are safe. The problem is that AI tools reach for the unsafe escape hatches (dangerouslySetInnerHTML, v-html, raw innerHTML) more often than a cautious human would. Once you know the patterns, they are easy to audit.
The failure mode is treating XSS as a problem you do not need to think about because "React handles it." React handles the common case. The uncommon case (rich text, markdown, email preview) is where every XSS vulnerability we find lives.
For broader context, see the full list of the 7 most common AI-generated code vulnerabilities and the OWASP Top 10 mapped to AI-generated code.
Frequently Asked Questions
Does React/Next.js fully prevent XSS?
React prevents XSS in the default rendering path ({variable} in JSX). It does not prevent XSS when you explicitly opt out via dangerouslySetInnerHTML, when you use href={userUrl} without protocol validation, or when you render user content through third-party libraries that do their own HTML generation. The framework's defaults are safe. The escape hatches need manual protection.
Is DOMPurify enough, or do I need a Web Application Firewall?
For most apps, DOMPurify plus a Content-Security-Policy header is sufficient. WAFs add another layer but they catch patterns, not specific exploits, and they introduce false positives. The primary defense is not rendering untrusted HTML as HTML. WAFs are useful for Tier 2 defense in depth once the application-layer fixes are in place.
What if my AI tool keeps regenerating the vulnerable pattern?
This is common with Cursor and Bolt when you ask for features that involve rich text or user-generated content. The pattern they reproduce is "use dangerouslySetInnerHTML" because that is what the training data shows. When you prompt, specify the safe pattern explicitly: "use DOMPurify.sanitize() around any HTML rendering" or "render as plain text with whitespace preservation." You may need to restate this every time you ask for a feature in that area.
Can I just block <script> tags in user input?
No. Attackers bypass script tag blocks with event handlers (onerror, onload), javascript: protocol URLs, SVG payloads, and dozens of other vectors. The correct approach is either full HTML sanitization (allowlist of safe tags and attributes via DOMPurify) or rendering as text. Never try to write your own XSS filter — bypass techniques are a large subfield of security research and your regex will miss something.
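To make the bypass concrete, here is a naive "strip <script> tags" filter of the kind this answer warns against, next to a payload that sails straight through it (both the filter and payload are illustrative):

```javascript
// A naive blocklist filter — do NOT use this as a defense.
function naiveFilter(input) {
  return input.replace(/<script\b[^>]*>[\s\S]*?<\/script>/gi, '');
}

// Event-handler payload: contains no <script> tag at all,
// so the filter leaves it untouched and the browser still runs it.
const bypass = '<img src=x onerror=alert(1)>';
```

The filter removes literal script blocks but cannot enumerate every executable construct in HTML, which is why allowlist sanitization (or plain-text rendering) is the only robust approach.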
Do single-page apps have less XSS risk than traditional server-rendered sites?
They have different XSS risk. SPA frameworks (React, Vue) make common XSS harder because their default rendering is safe. But DOM-based XSS (where client-side JavaScript writes user input to the DOM) is more common in SPAs because there is more client-side logic. Net risk is similar — the vulnerability shifts from server templates to client-side rendering functions.
Related reading
- The 7 Most Common Vulnerabilities in AI-Generated Code
- 5 Security Fixes Every Vibe Coder Should Know
- Vibe Coding Security Checklist: 15 Things to Check Before You Ship
- Vibe Coding Security — Complete Guide
- Free Scan — Test Your App
Paste a URL, get a letter grade and Cursor-ready fixes in 3 minutes. No signup required.