Cursor Security Vulnerabilities: What Every Developer Needs to Know in 2026
Cursor is the most popular AI code editor in 2026, used by hundreds of thousands of developers to ship apps faster than ever. It is also a growing attack surface. In the past year, multiple CVEs have been published against Cursor itself, and the code it generates carries predictable vulnerability patterns that attackers are learning to exploit. This article covers both categories: vulnerabilities in Cursor the tool, and vulnerabilities in the code Cursor writes for you.
Part 1: Vulnerabilities in Cursor Itself
Cursor inherits its architecture from VS Code, which means it inherits VS Code's trust model. But Cursor's AI agent features add new attack surface that VS Code does not have.
CVE-2025-59944: Case-Sensitivity Bypass
A case-sensitivity mismatch in Cursor's file-protection checks let attackers bypass those protections simply by referencing a protected file with different letter casing. Untrusted content could then modify configuration files, potentially leading to remote code execution. The vulnerability was patched in Cursor 1.7, but any developer running an older version remains exposed.
Impact: An attacker who can get a developer to open a malicious repository (via a pull request, open source contribution, or shared project) could execute arbitrary code on the developer's machine.
CurXecute and MCPoison (CVE-2025-54135, CVE-2025-54136)
Two vulnerabilities discovered by Tenable target Cursor's MCP (Model Context Protocol) server handling. CurXecute enables silent code execution through malicious MCP server configurations. MCPoison allows poisoning the MCP context to inject malicious instructions that the AI agent executes.
Impact: If a developer installs a malicious MCP server (they are easy to disguise as helpful developer tools), the attacker gains code execution on the developer's machine through Cursor's agent feature.
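For context, an MCP server entry in Cursor's configuration (typically `.cursor/mcp.json`) is little more than a command to execute. The server name and package below are hypothetical placeholders:

```json
{
  "mcpServers": {
    "helpful-linter": {
      "command": "npx",
      "args": ["-y", "some-unvetted-package"]
    }
  }
}
```

Whatever `command` names runs with the developer's full privileges as soon as the server starts, which is why auditing MCP entries before installing them matters.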
CVE-2026-22708: RCE via Shell Built-ins
Discovered in early 2026, this vulnerability affects Cursor versions prior to 2.3. Shell built-in commands could be executed even though they never appear in Cursor's command allowlist. An attacker can use direct or indirect prompt injection to poison the shell environment by setting, modifying, or removing environment variables, effectively turning Cursor's agent into an attack tool against the developer's own machine.
Impact: Critical. An attacker who can influence the AI's context (through a malicious README, code comment, or repository file) can execute arbitrary shell commands.
Open-Folder Autorun
Cursor ships with Workspace Trust disabled by default, unlike VS Code which prompts users before trusting a workspace. This means VS Code-style tasks configured with runOptions.runOn: "folderOpen" execute automatically the moment a developer opens a project in Cursor. No trust prompt. No confirmation.
Fix: Enable Workspace Trust in Cursor settings. Set task.allowAutomaticTasks: "off". Never open repositories from untrusted sources without reviewing their .vscode/tasks.json first.
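The autorun vector is just a few lines of JSON. A repository can ship a `.vscode/tasks.json` like the following, where the command here is a harmless placeholder standing in for an attacker's payload:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "setup",
      "type": "shell",
      "command": "echo attacker-controlled command runs on folder open",
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

With Workspace Trust off and automatic tasks allowed, this task executes the moment the folder is opened, before the developer has read a single file.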
Part 2: Vulnerabilities in Code Cursor Writes
The vulnerabilities in Cursor itself are serious but patchable. The more pervasive risk is the code Cursor generates. Based on our analysis of thousands of Cursor-built apps, these are the patterns we see repeatedly.
Exposed Secrets in Client Bundles
Cursor frequently generates code that uses environment variables with the NEXT_PUBLIC_ prefix for keys that should be server-only. We find Supabase service role keys, Stripe secret keys, and OpenAI API keys in client-side JavaScript bundles on roughly 30% of the Cursor-built apps we scan.
Why Cursor does this: When you prompt Cursor to "add Supabase" or "connect to Stripe," it generates the fastest path to working code. That often means importing the client directly in a React component, which requires the NEXT_PUBLIC_ prefix to make the variable available client-side. The code works. It also exposes your secret key to every visitor.
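A small review-time guard can catch most of these leaks before they ship. The sketch below is a heuristic, not a complete check: the NEXT_PUBLIC_ prefix is Next.js's convention for client-exposed variables, and the secret-shaped patterns are assumptions you should extend for your own stack.

```typescript
// Heuristic guard: flag env vars that are both client-exposed and secret-shaped.
// Anything without the NEXT_PUBLIC_ prefix never reaches the browser bundle;
// anything with it is shipped to every visitor.
const SERVER_ONLY_PATTERNS = [/service_role/i, /secret/i, /^sk_/i];

function isSafeToExpose(name: string, value: string): boolean {
  if (!name.startsWith("NEXT_PUBLIC_")) return true; // stays server-side
  // Client-exposed: reject names or values that look like server secrets.
  return !SERVER_ONLY_PATTERNS.some((p) => p.test(name) || p.test(value));
}
```

Running a check like this over `.env` files in CI turns a silent leak into a failed build.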
Broken Supabase Row Level Security
Cursor generates Supabase table creation code that frequently omits RLS policies entirely or creates policies with USING(true) that grant access to all roles including anon. The three most common RLS mistakes we document all appear regularly in Cursor-generated code.
More dangerous: Cursor generates UPDATE policies without WITH CHECK clauses. This allows users to escalate their own privileges by updating their role field from "user" to "admin". We have confirmed this exploit path in multiple production applications.
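A safer policy shape looks like the sketch below, assuming a hypothetical `profiles` table with `id`, `role`, `full_name`, and `avatar_url` columns (`authenticated` is Supabase's role for signed-in users):

```sql
alter table public.profiles enable row level security;

-- Users may update only their own row; WITH CHECK constrains the new row too.
create policy "update own profile"
on public.profiles for update
using (auth.uid() = id)
with check (auth.uid() = id);

-- Defense in depth: column-level grants stop role self-escalation even when
-- the row-level policy passes.
revoke update on public.profiles from authenticated;
grant update (full_name, avatar_url) on public.profiles to authenticated;
```

The WITH CHECK clause rejects writes that would move the row outside the policy, and the column grant means a user cannot touch `role` at all, regardless of what the row-level policy allows.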
Unprotected Admin Routes
When you ask Cursor to "create an admin dashboard," it builds the dashboard. It rarely protects it. We consistently find /admin, /dashboard, and /api/admin/* routes that load without any authentication check. Cursor generates the UI and the API routes but does not add middleware or auth guards unless you specifically ask for them.
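A guard does not need to be elaborate; what matters is that it runs before any admin handler. The sketch below is framework-agnostic: the protected prefixes and the session shape are assumptions, and in a Next.js app this logic would typically live in middleware.

```typescript
// Route-guard sketch: decide whether a path requires an authenticated admin.
interface Session {
  userId: string;
  role: "user" | "admin";
}

const ADMIN_PREFIXES = ["/admin", "/dashboard", "/api/admin"];

function isAuthorized(path: string, session: Session | null): boolean {
  // Match the prefix exactly or as a path segment, so "/administrator"
  // is not accidentally treated as protected.
  const needsAdmin = ADMIN_PREFIXES.some(
    (p) => path === p || path.startsWith(p + "/")
  );
  if (!needsAdmin) return true; // public route
  return session !== null && session.role === "admin";
}
```

The key property is deny-by-default for the protected prefixes: no session, or a non-admin session, means no access.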
Missing Rate Limiting
Cursor-generated API routes almost never include rate limiting. This means an attacker can brute-force login endpoints at thousands of attempts per minute, or abuse your OpenAI integration to run up a $10,000 bill overnight. We see this on virtually every app we scan.
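Even a tiny fixed-window limiter raises the cost of brute force dramatically. This is a single-process sketch: multi-instance deployments need a shared store such as Redis, and the window and limit below are illustrative.

```typescript
// Fixed-window rate limiter sketch, keyed by e.g. client IP or user ID.
const WINDOW_MS = 60_000; // 1 minute window (illustrative)
const MAX_REQUESTS = 10;  // per key per window (illustrative)

const hits = new Map<string, { count: number; windowStart: number }>();

function allowRequest(key: string, now: number = Date.now()): boolean {
  const entry = hits.get(key);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(key, { count: 1, windowStart: now }); // start a fresh window
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS;
}
```

An API route would call `allowRequest(clientIp)` first and return a 429 on `false`; that one check turns thousands of login attempts per minute into ten.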
Prompt Injection in AI Features
When Cursor-built apps include their own AI features (chatbots, summarizers, content generators), the generated code rarely sanitizes user input before passing it to the LLM. This enables prompt injection attacks that can exfiltrate system prompts, bypass content filters, or cause the AI to perform unauthorized actions on behalf of the attacker.
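Full prevention of prompt injection is an open problem, but treating user text as data rather than instructions removes the easiest attacks. In this sketch the suspicious-phrase list and the delimiter format are illustrative assumptions; filtering like this should be combined with output constraints and least-privilege tool access, not relied on alone.

```typescript
// Prompt-assembly sketch: fence untrusted input and flag obvious override attempts.
const SUSPICIOUS = [/ignore (all )?previous instructions/i, /system prompt/i];

function buildPrompt(systemPrompt: string, userInput: string) {
  const flagged = SUSPICIOUS.some((p) => p.test(userInput));
  return {
    flagged, // caller can reject, log, or down-rank flagged input
    messages: [
      { role: "system", content: systemPrompt },
      {
        role: "user",
        // Delimiters mark the input as data; the system prompt should
        // instruct the model not to follow directives inside them.
        content:
          "Untrusted user input follows:\n<user_input>\n" +
          userInput +
          "\n</user_input>",
      },
    ],
  };
}
```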
Part 3: Dependency and Supply Chain Risks
Cursor's AI suggests packages based on popularity in its training data, not security posture. Studies show approximately 40% of AI-suggested dependencies include known vulnerabilities. The AI does not check npm advisory databases before recommending a package, and it often suggests specific versions from its training data that are now outdated.
Real risk: If Cursor suggests lodash@4.17.15 because that version was common in its training data, your app inherits every vulnerability patched in subsequent versions. The AI cannot distinguish between a popular package and a popular-but-compromised package.
Mitigation: Run npm audit after every Cursor session that adds dependencies. Set up Dependabot or Snyk on your repository. Never accept a dependency suggestion without checking its current version and advisory status.
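One cheap automated check on top of npm audit: flag dependencies pinned to an exact version with no range operator, since exact pins in AI-generated code often reflect training-data vintage. The heuristic below is illustrative and complements advisory scanning, it does not replace it.

```typescript
// Flag exact-version pins (e.g. "4.17.15") in a package.json dependency map.
// Range specs like "^18.2.0" or "~14.1.0" are left alone.
function findExactPins(deps: Record<string, string>): string[] {
  return Object.entries(deps)
    .filter(([, spec]) => /^\d+\.\d+\.\d+$/.test(spec))
    .map(([name, spec]) => `${name}@${spec}`);
}
```

Each flagged pin is a candidate for the lodash@4.17.15 problem described above: a version that was common once and has since accumulated patched vulnerabilities.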
How to Use Cursor Safely
Cursor is not unsafe. It is a powerful tool that, like any power tool, requires safety practices. Here is the protocol we recommend:
- Update Cursor immediately when new versions are released. The CVEs documented above are all patched in recent versions. Running outdated Cursor is running a known-vulnerable development environment.
- Enable Workspace Trust in Cursor settings and set task.allowAutomaticTasks: "off". This closes the open-folder autorun attack vector.
- Audit MCP servers before installing them. Only install MCP servers from sources you trust, and review their code if possible. The MCPoison attack relies on developers installing malicious MCP configurations.
- Scan after every major feature you build with Cursor. Each prompt-generated feature is an independent security decision that may contradict what Cursor built in a previous session. Run a scan after adding authentication, database access, payment processing, or API integrations.
- Review secrets handling in every PR. Search your codebase for NEXT_PUBLIC_-prefixed variables and verify each one is safe to expose. The anon key and the publishable Stripe key are fine; anything else is probably not.
- Never trust RLS generated by AI without testing it. Open your browser console and test queries with the anon key. If you can see other users' data, your RLS is broken regardless of what Cursor told you it configured.
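The browser-console RLS check can be run without any client library by querying Supabase's REST endpoint directly. In this sketch the project URL, anon key, and table name are placeholders, and fetchFn is just the global fetch, made injectable so the logic is testable offline:

```typescript
// RLS probe sketch: request rows from a Supabase table using only the anon key.
type FetchLike = (
  url: string,
  init?: { headers?: Record<string, string> }
) => Promise<{ json(): Promise<unknown> }>;

async function probeRls(
  projectUrl: string,
  anonKey: string,
  table: string,
  fetchFn: FetchLike
): Promise<{ leaked: boolean; rowCount: number }> {
  const res = await fetchFn(`${projectUrl}/rest/v1/${table}?select=*`, {
    headers: { apikey: anonKey, Authorization: `Bearer ${anonKey}` },
  });
  const rows = await res.json();
  const rowCount = Array.isArray(rows) ? rows.length : 0;
  // Any rows visible to an anonymous caller mean RLS is not protecting the table.
  return { leaked: rowCount > 0, rowCount };
}
```

In a browser console you would call something like `probeRls("https://YOUR-PROJECT.supabase.co", "YOUR_ANON_KEY", "profiles", fetch)` and treat any returned rows as a failed check.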
What Makes Scanning Cursor Apps Different
Traditional security scanners were built to test human-written enterprise applications. They check headers, SSL configuration, and known CVE patterns. These checks are necessary but insufficient for AI-generated code.
Cursor apps need a scanner that understands AI-specific vulnerability patterns: secret exposure in JavaScript bundles, RLS misconfigurations, unprotected route patterns, and missing server-side validation. VibeArmor runs 120 checks organized into a hackability-first tier system specifically designed for apps built with Cursor, Lovable, Bolt, and v0.
Our scanner has solved 104 out of 104 XBOW benchmark scenarios, covering every exploit type that affects AI-generated code: XSS with 15 filter bypass variants, SQL injection with WAF evasion, SSRF chains, command injection, authentication bypasses, file upload exploits, and SSTI. When we say your Cursor app has a vulnerability, we have proved it can be exploited.
Frequently Asked Questions
Is Cursor less secure than VS Code?
Cursor inherits VS Code's architecture, so they share the same base security model. The additional risk comes from two sources: Cursor's AI agent features create new attack surface (MCP poisoning, prompt injection), and Cursor ships with Workspace Trust disabled by default. With proper settings and current versions, the tool itself is reasonably secure. The code it generates is the larger concern.
Should I stop using Cursor because of these vulnerabilities?
No. Cursor is the most productive AI coding tool available. The correct response is to use it with security practices: update regularly, enable Workspace Trust, audit MCP servers, and scan the code it produces. Stopping Cursor would be like stopping use of a power saw because it can cut you. Use safety equipment instead.
How do I know if my Cursor-built app is vulnerable right now?
Run a free scan. In 3 minutes you will know your hackability grade, see every finding prioritized by severity, and get fix prompts you can paste directly into Cursor to remediate the issues. Most Cursor apps score D or F on their first scan. That is fixable in an afternoon — see our step-by-step guide.
Are other AI code editors (Windsurf, GitHub Copilot) safer?
The code generation vulnerability patterns are remarkably consistent across all AI coding tools. Copilot, Windsurf, and Cursor all produce the same 7 vulnerability types. The tool-level CVEs are specific to each editor, but the generated code risks are universal. Test the output, regardless of which tool generated it.
Related reading
- The 7 Most Common Vulnerabilities in AI-Generated Code
- Vibe Coding Security Checklist: 15 Things to Check Before You Ship
- How to Secure Your AI-Built App: A Step-by-Step Guide for Non-Security Engineers
- Scan Your Cursor-Built App Free
- Benchmark Scores — 104/104 XBOW scenarios solved
Scan your app free
Paste a URL, get a letter grade and Cursor-ready fixes in 3 minutes. No signup required.
Start Free Scan