Secure Vibe Coding

Ship fast without getting hacked

The practical security guide for developers using AI to write code

The Problem With Vibe Coding

AI coding tools — ChatGPT, Claude, Cursor, Copilot — are incredible for shipping fast. But they have a critical blind spot: they optimize for code that works, not code that is secure.

LLMs are trained on millions of Stack Overflow answers, tutorials, and blog posts — most of which were written to demonstrate concepts, not harden production systems. The result: AI routinely generates code with SQL injection vulnerabilities, hardcoded secrets, broken authentication, and missing authorization checks — all of which will pass your tests and "work" until they are exploited.

This guide covers the 10 most common vulnerability patterns AI produces, with before/after examples and the rules you need to catch them before they ship.

1. Secrets & Environment Variables

Most common AI mistake · Highest blast radius

When you ask AI to write code that calls an API, it almost always puts the key directly in the code as an example. It is easy to accept that code, test it, see it working, and commit it — with the key still in place. Automated bots scan GitHub continuously and will find and abuse exposed keys within minutes.

What AI gives you

const openai = new OpenAI({
  apiKey: "sk-proj-abc123xyz..."
});

const stripe = new Stripe(
  "sk_live_51Abc123..."
);

What you should ship

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

const stripe = new Stripe(
  process.env.STRIPE_SECRET_KEY!
);

Rule: If a string in your code starts with sk-, pk_, AKIA, or looks like a password — it should never be in source code. Use .env and add it to .gitignore before your first commit. Enable GitHub secret scanning on every repo.
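One habit that makes this failure mode visible early: read required keys through a small helper that crashes at startup when a variable is missing, instead of failing at the first API call. A minimal sketch — the variable names are examples, not a required convention:

```typescript
// Fail fast: a missing secret should stop the process at boot,
// not surface as a confusing 401 from a third-party API later.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example usage at module load time:
// const openaiKey = requireEnv("OPENAI_API_KEY");
```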

2. SQL Injection — Parameterized Queries

Decades old · Still generated by AI daily

SQL injection is one of the oldest vulnerabilities in web development. It is also one of the most commonly generated by AI. The pattern is simple: AI builds queries by concatenating strings with user-provided values. An attacker can terminate your query early and inject their own SQL, reading or deleting your entire database.

Vulnerable — AI output

// User inputs: ' OR '1'='1
const query = `SELECT * FROM users
  WHERE email = '${email}'
  AND password = '${password}'`;

db.query(query); // Full DB exposed

Safe — parameterized

// Values passed separately, never
// interpreted as SQL
const query = `SELECT * FROM users
  WHERE email = $1
  AND password = $2`;

db.query(query, [email, password]);

Rule: Never build SQL strings with template literals (`${variable}`) or string concatenation (+ variable +). Always use parameterized queries ($1, $2 in PostgreSQL / ? in MySQL) or an ORM like Prisma or Drizzle that handles this for you automatically.
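To see why concatenation fails, it helps to build the query string by hand — no database needed. A demonstration sketch of the vulnerable pattern above:

```typescript
// Demonstration only: this is the UNSAFE pattern, shown so you can see
// exactly what string reaches the database.
function buildUnsafeQuery(email: string, password: string): string {
  return `SELECT * FROM users WHERE email = '${email}' AND password = '${password}'`;
}

// The classic payload closes the password string early and appends
// a condition that is always true:
const attack = buildUnsafeQuery("x", "' OR '1'='1");
// Resulting SQL:
// SELECT * FROM users WHERE email = 'x' AND password = '' OR '1'='1'
// ...which matches every row in the table.
```

With a parameterized query, the same payload arrives as an inert string value — the database never parses it as SQL.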

3. Authentication Done Right

Three common AI auth mistakes in one section

3a. Password Storage

AI frequently generates password storage using MD5, SHA-1, or even plain text. These are not acceptable for passwords. Use bcrypt (or Argon2) — algorithms specifically designed to be slow, making brute-force attacks computationally expensive.

Wrong

import crypto from "crypto";

// MD5 — crackable in seconds
const hash = crypto
  .createHash("md5")
  .update(password)
  .digest("hex");

Correct

import bcrypt from "bcrypt";

// bcrypt with cost factor 12
const hash = await bcrypt.hash(
  password, 12
);
const valid = await bcrypt.compare(
  input, hash
);
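If adding a dependency is not an option, Node's built-in crypto.scrypt is another deliberately expensive, memory-hard KDF that is acceptable for passwords. A minimal sketch — the 16-byte salt and 64-byte key length are illustrative defaults, not a standard this guide prescribes:

```typescript
import { scryptSync, randomBytes, timingSafeEqual } from "crypto";

// Hash with a per-user random salt; store salt and hash together.
function hashPassword(password: string): string {
  const salt = randomBytes(16);
  const hash = scryptSync(password, salt, 64);
  return `${salt.toString("hex")}:${hash.toString("hex")}`;
}

// Recompute with the stored salt and compare in constant time.
function verifyPassword(password: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  const hash = scryptSync(password, Buffer.from(saltHex, "hex"), 64);
  return timingSafeEqual(hash, Buffer.from(hashHex, "hex"));
}
```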

3b. JWT Storage

AI almost always stores JWTs in localStorage. This makes them accessible to any JavaScript on the page — meaning an XSS vulnerability (covered next) immediately compromises every user session. Store tokens in httpOnly cookies instead.

Vulnerable

// XSS can steal this
localStorage.setItem(
  "token", jwt
);

Safe

// JS cannot read httpOnly cookies
res.cookie("token", jwt, {
  httpOnly: true,
  secure: true,
  sameSite: "strict",
  maxAge: 3600000
});

3c. Rate Limiting on Auth Endpoints

AI-generated login endpoints typically have no rate limiting. An attacker can try thousands of passwords per second. Add rate limiting to any endpoint that checks credentials.

import rateLimit from "express-rate-limit";

const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 10,                   // 10 attempts per window
  message: "Too many login attempts, try again later"
});

app.post("/login", loginLimiter, loginHandler);

Rule: Any endpoint that handles passwords, tokens, or sensitive actions needs rate limiting. Never store tokens in localStorage. Passwords must be hashed with bcrypt or Argon2 — never MD5, SHA-1, or plain text.
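At its core, the middleware above is a fixed-window counter per client. A minimal sketch of the idea — use the library in production, since it also handles shared stores, response headers, and proxy IPs:

```typescript
// Fixed-window rate limiter sketch: allow at most maxAttempts hits
// per key (e.g. an IP address) within each windowMs-long window.
function createLimiter(maxAttempts: number, windowMs: number) {
  const hits = new Map<string, { count: number; resetAt: number }>();

  return function allow(key: string, now: number = Date.now()): boolean {
    const entry = hits.get(key);
    // New key, or the previous window has expired: start a fresh window.
    if (!entry || now >= entry.resetAt) {
      hits.set(key, { count: 1, resetAt: now + windowMs });
      return true;
    }
    entry.count += 1;
    return entry.count <= maxAttempts;
  };
}
```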

4. Input Validation — Server-Side, Always

Frontend validation is UX · Backend validation is security

AI often generates validation only on the frontend — a React form that checks if an email looks valid before submitting. That is useful for user experience, but it provides zero security: an attacker can use curl or Postman to call your API directly, skipping your frontend entirely. All validation must be repeated on the server.

Frontend-only (not secure)

// React — bypassed with curl
if (!email.includes("@")) {
  setError("Invalid email");
  return;
}
// API has no server-side check

Server-side with Zod

import { z } from "zod";

const schema = z.object({
  email: z.string().email(),
  age: z.number().min(18).max(120),
  name: z.string().max(100)
});

const data = schema.parse(req.body);
// Throws if invalid — before any logic

Rule: Treat every API request as if it came from an attacker with no frontend. Validate and sanitize all inputs on the server before doing anything with them. Zod (TypeScript), Joi (Node.js), Pydantic (Python), and FluentValidation (.NET) all make this easy.
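To make concrete exactly what the schema above enforces, here is a hand-rolled equivalent with no dependencies — the field names mirror the Zod example, and in real code a schema library is still the better choice:

```typescript
type SignupInput = { email: string; age: number; name: string };

// Dependency-free equivalent of the Zod schema: reject anything that
// is not the exact shape we expect, before any business logic runs.
function validateSignup(body: unknown): SignupInput {
  if (typeof body !== "object" || body === null) {
    throw new Error("Invalid body");
  }
  const { email, age, name } = body as Record<string, unknown>;

  if (typeof email !== "string" || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    throw new Error("Invalid email");
  }
  if (typeof age !== "number" || age < 18 || age > 120) {
    throw new Error("Invalid age");
  }
  if (typeof name !== "string" || name.length === 0 || name.length > 100) {
    throw new Error("Invalid name");
  }
  return { email, age, name };
}
```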

5. Authorization vs Authentication (IDOR)

The most missed vulnerability in vibe-coded apps

Authentication answers "who are you?" — are you logged in? Authorization answers "are you allowed to do this?" — should you be able to access this specific record?

AI almost always implements authentication but forgets authorization. The result is called an Insecure Direct Object Reference (IDOR) — any logged-in user can read or modify any other user's data just by changing an ID in the URL.

Vulnerable — no ownership check

// GET /api/orders/1234
// Any logged-in user can get
// any order by changing the ID
app.get("/api/orders/:id",
  requireAuth,
  async (req, res) => {
    const order = await db.orders
      .findById(req.params.id);
    res.json(order); // ← no check!
  }
);

Safe — ownership enforced

app.get("/api/orders/:id",
  requireAuth,
  async (req, res) => {
    const order = await db.orders
      .findById(req.params.id);

    // Always verify ownership
    if (order.userId !== req.user.id) {
      return res.status(403).json(
        { error: "Forbidden" }
      );
    }
    res.json(order);
  }
);

Rule: Every time you fetch a record by ID from an API, ask: "could the logged-in user also fetch someone else's record with a different ID?" If yes, add an ownership check. This applies to orders, profiles, files, messages, invoices — anything user-owned.
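The ownership check can also be factored into a small reusable guard so it is harder to forget on the next endpoint. A sketch — the record shape and the thrown-error-with-status convention are assumptions, not a fixed API:

```typescript
// Any record with a userId column can be guarded the same way.
interface Owned {
  userId: string;
}

// Returns the record if it exists and belongs to userId;
// otherwise throws an error carrying the appropriate HTTP status.
function assertOwnership<T extends Owned>(record: T | null, userId: string): T {
  if (!record) {
    throw Object.assign(new Error("Not found"), { status: 404 });
  }
  if (record.userId !== userId) {
    throw Object.assign(new Error("Forbidden"), { status: 403 });
  }
  return record;
}

// In a route handler (sketch):
// const order = assertOwnership(await db.orders.findById(id), req.user.id);
```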

6. XSS — Safe Output in the Browser

Lets attackers run their JavaScript in your users' browsers

Cross-Site Scripting (XSS) happens when user-controlled content is rendered as HTML without escaping. An attacker stores a malicious script in your database (via a form field), and when another user views it, the script runs in their browser — stealing cookies, sessions, and data.

AI commonly generates innerHTML assignments and React's dangerouslySetInnerHTML without sanitization.

Vulnerable

// Vanilla JS
div.innerHTML = userContent;

// React
<div dangerouslySetInnerHTML={{
  __html: userContent
}} />

Safe

// Vanilla JS — textContent is safe
div.textContent = userContent;

// React — JSX escapes by default
<div>{userContent}</div>

// If you MUST render HTML, sanitize
import DOMPurify from "dompurify";
<div dangerouslySetInnerHTML={{
  __html: DOMPurify.sanitize(html)
}} />

Rule: Never put user-controlled data into innerHTML or dangerouslySetInnerHTML without running it through DOMPurify first. React's JSX curly braces {value} are safe — they escape automatically. Also add a Content-Security-Policy header as a second line of defense.
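Escaping is what JSX does for you under the hood: the HTML metacharacters are replaced with entities before the browser can interpret them. A minimal sketch of that transformation — in practice, rely on your framework's built-in escaping rather than rolling your own:

```typescript
// Replace the five characters that let text break out into markup.
// Ampersand must be replaced first, or it would re-encode the entities.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```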

7. API Security — CORS & Data Exposure

Two mistakes that quietly expose your data

7a. Wildcard CORS

AI-generated CORS configuration almost universally uses origin: "*", which tells browsers that any website on the internet is allowed to call your API and read the responses.

Dangerous

app.use(cors({ origin: "*" }));
// Any site can call your API

Correct

app.use(cors({
  origin: [
    "https://yourdomain.com",
    "https://app.yourdomain.com"
  ],
  credentials: true
}));

7b. Over-Returning Data

AI tends to return entire database rows from API endpoints. If your users table has a password_hash, stripe_customer_id, or internal_notes column, those fields will be exposed to the client. Always select only the fields you intend to return.

Leaks sensitive fields

const user = await db.users
  .findById(id);
res.json(user);
// Returns password_hash, tokens...

Only return what's needed

const user = await db.users.findById(
  id,
  { select: { id: true, name: true,
              email: true, avatarUrl: true,
              createdAt: true } }
);
res.json(user);

Rule: Whitelist CORS origins explicitly. Never use * for authenticated APIs. Explicitly select only the fields your API needs to return — never send a full database row.
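When your data layer hands you full rows anyway, a small whitelist helper makes the "select only what you return" rule easy to apply at the response boundary. A sketch:

```typescript
// Copy only the named fields off a row before sending it to the client.
// Anything not in the whitelist (password_hash, tokens, internal notes)
// simply never reaches the response.
function pick<T extends object, K extends keyof T>(row: T, keys: K[]): Pick<T, K> {
  const out = {} as Pick<T, K>;
  for (const key of keys) {
    out[key] = row[key];
  }
  return out;
}

// Usage sketch:
// res.json(pick(user, ["id", "name", "email", "avatarUrl", "createdAt"]));
```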

8. File Upload Security

Missing validation can lead to remote code execution

AI-generated file upload handlers typically accept any file type, trust the filename provided by the client, and store files in a publicly accessible directory. An attacker can upload a PHP script, a reverse shell, or an oversized file that crashes your server.

Secure file upload checklist

import { randomUUID } from "crypto";

const ALLOWED_TYPES = ["image/jpeg", "image/png", "image/webp"];
const MAX_SIZE = 5 * 1024 * 1024; // 5MB

async function handleUpload(file: File) {
  // 1. Check MIME type (not just extension)
  if (!ALLOWED_TYPES.includes(file.type)) {
    throw new Error("File type not allowed");
  }

  // 2. Enforce size limit
  if (file.size > MAX_SIZE) {
    throw new Error("File too large");
  }

  // 3. Generate a random filename — never trust client-provided name
  const ext = file.type.split("/")[1];
  const safeName = `${randomUUID()}.${ext}`;

  // 4. Store outside the web root or in object storage (S3/R2)
  // Never store uploads in /public where they can be executed
  await storageClient.put(`uploads/${safeName}`, file);
}

Rule: Validate MIME type, enforce a size limit, generate a random server-side filename, and store uploads in object storage (S3, R2, Supabase Storage) — never in your public web directory.
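Client-supplied MIME types can be spoofed, so checking the file's magic bytes adds a server-side signal the client cannot fake. A sketch for the three image types in the checklist above — the signatures are the standard ones, though a full WebP check would also verify "WEBP" at byte offset 8, omitted here for brevity:

```typescript
// Well-known file signatures ("magic bytes") at the start of the file.
const SIGNATURES: Record<string, number[]> = {
  "image/jpeg": [0xff, 0xd8, 0xff],
  "image/png": [0x89, 0x50, 0x4e, 0x47],
  "image/webp": [0x52, 0x49, 0x46, 0x46], // "RIFF" container header
};

// True only if the file's leading bytes match the claimed MIME type.
function matchesSignature(bytes: Uint8Array, mimeType: string): boolean {
  const sig = SIGNATURES[mimeType];
  if (!sig) return false;
  return sig.every((byte, i) => bytes[i] === byte);
}
```

This check belongs alongside, not instead of, the MIME type and size checks in the handler above.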

9. Dependency Hygiene

AI recommends packages as of its training cutoff — not today

LLMs have a training cutoff date. When AI recommends a package version, it may be recommending a version that has known vulnerabilities discovered after that cutoff. You may also inherit transitive vulnerabilities from packages your packages depend on.

# Run this before every deployment
npm audit

# Fix automatically where safe
npm audit fix

# See what's outdated
npm outdated

# Check a specific package's CVE history
# https://security.snyk.io/package/npm/<package-name>

Rule: Run npm audit (or pip-audit / cargo audit) before shipping. Enable Dependabot on GitHub for automated vulnerability alerts. Commit your lock file (package-lock.json) — it pins exact versions so production matches what you tested.

10. How to Prompt AI for Secure Code

Better prompts = fewer vulnerabilities out of the gate

AI does not produce secure code by default — but it will if you ask. Adding security context to your prompts dramatically reduces the vulnerabilities in the output. These patterns consistently produce better results:

Be explicit about security requirements

"Write a login endpoint in Express. Use bcrypt for password comparison,
rate limit to 10 attempts per 15 minutes, store the JWT in an httpOnly
cookie, and validate all inputs with Zod before processing."

Ask for a security review

"Review this code for security vulnerabilities. Check specifically for
SQL injection, missing authorization checks, XSS, hardcoded secrets,
and missing input validation. Show me the fixed version."
[paste your code]

Ask it to think adversarially

"You are a security engineer doing a code review. What are three ways
an attacker could exploit this endpoint? Then fix all three."
[paste your code]

Specify the security library upfront

"Write this using Zod for validation, parameterized queries with
Prisma, and no hardcoded values — all secrets from process.env"

Pre-Ship Security Checklist

Before pushing to production, run through this list. Each item maps to a vulnerability category above.

  • Secrets: no hardcoded keys — everything in environment variables, .env in .gitignore
  • SQL: every query parameterized or via an ORM
  • Auth: passwords hashed with bcrypt/Argon2, tokens in httpOnly cookies, rate limiting on credential endpoints
  • Input: all validation repeated server-side
  • Authorization: ownership checks on every record fetched by ID
  • XSS: user content escaped or sanitized before rendering
  • API: CORS origins whitelisted, responses return only the fields the client needs
  • Uploads: type and size validated, random filenames, stored outside the web root
  • Dependencies: npm audit clean, lock file committed
  • Review: AI-generated code security-reviewed before merge

Security is a habit, not a pass/fail

You do not need to be a security expert to ship secure software. The vulnerabilities above cover the vast majority of real-world breaches in developer-built applications. Build the habit of running the checklist before every deployment, asking AI to review code for vulnerabilities, and treating user input as untrusted by default — and you will be ahead of most production applications in the wild.

Related Tools & Resources

Security Scanning

  • GitHub Secret Scanning
  • TruffleHog (secret detection)
  • Snyk (dependency CVEs)
  • OWASP ZAP (API scanning)

Further Reading

  • OWASP Top 10 (2021)
  • PortSwigger Web Security Academy
  • Troy Hunt — Hack Yourself First
  • Have I Been Pwned API

Frequently Asked Questions

What is vibe coding and why is it a security risk?

Vibe coding means using AI assistants like Cursor, GitHub Copilot, or ChatGPT to generate large amounts of code quickly without fully reviewing each line. The security risk is that AI models produce statistically common code patterns, which often include classic vulnerabilities: SQL injection via string concatenation, hardcoded API keys, missing authentication checks, and no input validation. Shipping unreviewed AI-generated code to production is the fastest way to introduce exploitable bugs.

What are the most common security vulnerabilities in AI-generated code?

The most common vulnerabilities in AI-generated code are: SQL injection (building queries with string concatenation instead of parameterized queries), XSS (inserting user content directly into HTML), hardcoded secrets (API keys or passwords written directly in code), broken authentication (missing token verification or role checks), IDOR (accessing resources by ID without ownership checks), and insecure file uploads (no validation of file type or size).

How do I prevent secrets and API keys from being exposed in AI-generated code?

Never hardcode secrets in source files. Use environment variables (.env files for local dev, platform secrets for production). For frontend code, only expose variables prefixed with your framework's public prefix (NEXT_PUBLIC_ in Next.js, VITE_ in Vite) — everything else stays server-side only. Audit your git history with tools like TruffleHog or git-secrets before pushing to public repos.

Should I review all code that AI tools like Cursor or GitHub Copilot generate?

Yes, always review AI-generated code before merging, especially for: database queries (look for string concatenation), authentication logic (verify every protected route checks tokens), user input handling (confirm validation and sanitization), and file/URL operations. A fast review with a security checklist takes minutes and catches the majority of AI-introduced vulnerabilities.

What is the most important security check before shipping AI-generated code?

The single most impactful check is scanning for hardcoded secrets (API keys, database passwords, tokens). These are immediately exploitable the moment code is pushed to a public repository. After that, verify that all database queries use parameterized statements, all user-facing endpoints require authentication, and user input is validated server-side — not just in the browser.