Comprehensive security guide for building and deploying AI applications safely
AI applications introduce unique security challenges that traditional software doesn't face. A single compromised API key can result in:
In January 2024, a developer accidentally committed an OpenAI API key to a public GitHub repository. Within 4 hours, the key was discovered by automated scrapers and used to generate $87,000 in API charges. This scenario is completely preventable.
# β NEVER DO THIS
const apiKey = "sk-proj-abc123..."; // Hardcoded = security breach
# β
DO THIS - Environment Variables
const apiKey = process.env.OPENAI_API_KEY;
# β
BEST - Secrets Manager (Production)
import { SecretsManager } from '@aws-sdk/client-secrets-manager';
const secret = await secretsManager.getSecretValue({
SecretId: 'prod/openai/api-key'
});Prompt injection is to LLMs what SQL injection is to databases: a critical vulnerability that lets attackers manipulate system behavior through malicious input. Unlike SQL injection, there's no perfect defenseβonly layered mitigation strategies.
System Prompt:
Attacker Input:
Vulnerable Response:
Strip dangerous patterns before they reach the model:
Use XML-style delimiters to separate instructions from user input:
Scan model responses for leaked sensitive information:
Never give LLMs direct database access or system execution capabilities. Use function calling with strict parameter validation and approval workflows for sensitive operations.
Never expose LLM APIs directly to clients. Route through a backend gateway with:
Assume all components are potentially compromised:
If you're fine-tuning models on user-generated data, implement safeguards against poisoning attacks where malicious actors inject harmful training examples.
Despite best efforts, security incidents will occur. Prepare a response plan before you need it.