Quick Comparison
| | helmet | cors | express-rate-limit |
|---|---|---|---|
| Weekly downloads | ~3M | ~15M | ~3M |
| Attack vectors blocked | XSS, clickjacking, MIME sniffing, MITM | Unauthorized cross-origin requests, data leakage | Brute force, DoS, credential stuffing |
| Required for all APIs? | Yes | Usually yes | Yes |
| Config complexity | Low | Medium | Low–Medium |
| Distributed support | N/A | N/A | Yes (Redis store) |
| TypeScript | Yes | Yes | Yes |
TL;DR
These three packages are not alternatives — they solve different security problems and you should use all three together. helmet sets HTTP security headers (CSP, HSTS, X-Frame-Options, etc.) that protect against XSS, clickjacking, and MIME sniffing attacks. cors configures Cross-Origin Resource Sharing — controls which domains can call your API from a browser. express-rate-limit limits how many requests a client can make — protects against brute force attacks, DoS, and API abuse. In 2026: install all three as standard security baseline for any Express API.
Key Takeaways
- helmet: ~3M weekly downloads — sets 15+ HTTP security headers, one-liner protection
- cors: ~15M weekly downloads — CORS preflight + actual request header management
- express-rate-limit: ~3M weekly downloads — IP-based rate limiting with flexible stores
- These solve different attack vectors: headers vs cross-origin vs request volume
- helmet is a near-zero-config must-have: app.use(helmet())
- CORS misconfigurations are a top API security issue — be explicit about allowed origins
Why These Three Are the Security Baseline
helmet, cors, and express-rate-limit have become the de facto security baseline for Express APIs — they're included in Express generator templates, recommended in OWASP's Node.js security guidelines, and referenced in virtually every Express production-readiness checklist. What makes this particular combination effective is that the three packages are complementary without overlap: each addresses a different attack vector at a different layer of the HTTP stack.
helmet works at the response header level, telling browsers how to treat your content. These headers are instructions to the browser: don't execute inline scripts, don't load this page in an iframe, only connect over HTTPS. They're passive defenses that limit what an attacker can do if they manage to inject content into your responses. cors works at the cross-origin request level, controlling which websites are allowed to call your API from JavaScript running in a browser. Without explicit CORS configuration, browsers apply a restrictive same-origin policy by default — cors relaxes that policy in a controlled, explicit way rather than opening it arbitrarily. express-rate-limit works at the request volume level, throttling how frequently any single IP address can call your endpoints regardless of what headers or credentials they present.
Skipping any one of them leaves a specific exploitable gap. A server without helmet is vulnerable to XSS attacks that inject scripts into pages, clickjacking attacks that embed your app in a malicious iframe, and MIME sniffing attacks that trick browsers into executing non-script files as scripts. A server without an explicit CORS configuration relies on the browser's default policy, which may be overly permissive depending on your Express version and configuration. A server without rate limiting is open to credential stuffing (trying thousands of password combinations against login endpoints), brute force attacks, and API abuse by bots that hammer endpoints. The three packages together close three distinct attack surfaces with minimal configuration overhead.
The Minimal Secure Express App
import express from "express"
import helmet from "helmet"
import cors from "cors"
import rateLimit from "express-rate-limit"
const app = express()
// 1. Security headers — block XSS, clickjacking, MIME sniffing:
app.use(helmet())
// 2. CORS — only allow your frontend domain:
app.use(cors({
  origin: process.env.FRONTEND_URL ?? "https://www.pkgpulse.com",
  methods: ["GET", "POST", "PUT", "DELETE"],
  credentials: true,
}))
// 3. Rate limiting — 100 requests per 15 minutes per IP:
app.use(rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  standardHeaders: true,
  legacyHeaders: false,
}))
app.get("/api/packages", (req, res) => {
  res.json({ packages: [] })
})
helmet
helmet — HTTP security headers:
What headers helmet sets
import helmet from "helmet"
// Default helmet() enables all headers below:
app.use(helmet())
// Individual headers (for fine-grained control):
// Content-Security-Policy — limits sources for scripts, styles, images:
app.use(helmet.contentSecurityPolicy({
  directives: {
    defaultSrc: ["'self'"],
    scriptSrc: ["'self'", "'unsafe-inline'"], // Allow inline scripts (common for SPA builds)
    styleSrc: ["'self'", "https://fonts.googleapis.com"],
    imgSrc: ["'self'", "data:", "https://cdn.pkgpulse.com"],
    connectSrc: ["'self'", "https://api.pkgpulse.com"],
    fontSrc: ["'self'", "https://fonts.gstatic.com"],
    objectSrc: ["'none'"],
    upgradeInsecureRequests: [], // Auto-upgrade HTTP subresource requests to HTTPS
  },
}))
// HTTP Strict Transport Security — force HTTPS:
app.use(helmet.hsts({
  maxAge: 31536000, // 1 year in seconds
  includeSubDomains: true,
  preload: true,
}))
// X-Frame-Options — prevent clickjacking:
app.use(helmet.frameguard({ action: "deny" }))
// X-Content-Type-Options — prevent MIME sniffing:
app.use(helmet.noSniff())
// Referrer-Policy:
app.use(helmet.referrerPolicy({ policy: "strict-origin-when-cross-origin" }))
// X-Permitted-Cross-Domain-Policies — restrict Adobe Flash/Acrobat cross-domain loading:
app.use(helmet.permittedCrossDomainPolicies())
// Note: helmet does not set a Permissions-Policy (formerly Feature-Policy) header — add that one yourself if you need it.
// X-Powered-By is removed by helmet (hides "Express"):
// No more: X-Powered-By: Express
Customize for Next.js or SPAs
// If serving a React SPA with inline scripts/styles, CSP needs adjustment:
app.use(
  helmet({
    contentSecurityPolicy: {
      directives: {
        ...helmet.contentSecurityPolicy.getDefaultDirectives(),
        "script-src": ["'self'", "'unsafe-inline'", "'unsafe-eval'"], // Some dev servers (e.g., CRA) need unsafe-eval; production builds usually don't
        "style-src": ["'self'", "'unsafe-inline'"],
        "img-src": ["'self'", "data:", "blob:", "https:"],
      },
    },
    // Disable cross-origin embedder policy if loading cross-origin resources:
    crossOriginEmbedderPolicy: false,
  })
)
CSP nonce-based approach
For server-rendered apps where 'unsafe-inline' is too permissive, a nonce-based Content Security Policy is the right answer. You generate a cryptographically random nonce per request and include it in both the CSP header and every inline script tag. Only scripts bearing the correct nonce execute — injected scripts from XSS attacks don't have access to the nonce and are blocked.
import crypto from "crypto"
app.use((req, res, next) => {
  // Generate a fresh nonce for every response:
  res.locals.cspNonce = crypto.randomBytes(16).toString("base64")
  next()
})
// helmet's CSP directives accept functions that are evaluated per response,
// so the header can embed the nonce generated above:
app.use(
  helmet.contentSecurityPolicy({
    directives: {
      defaultSrc: ["'self'"],
      scriptSrc: ["'self'", (req, res) => `'nonce-${res.locals.cspNonce}'`],
      styleSrc: ["'self'", (req, res) => `'nonce-${res.locals.cspNonce}'`],
    },
  })
)
For Next.js, the same pattern applies via middleware — generate a nonce in middleware.ts, set the Content-Security-Policy header (carrying the nonce) on the response there, and forward the nonce (for example, via a request header) so server-rendered script tags can include it. This is the approach documented by the Next.js team, and alongside hash-based CSPs it is the practical way to run a strict CSP without 'unsafe-inline' in a Next.js app.
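The per-request header construction can be isolated into a small helper. A sketch — buildNonceCsp and its directive list are illustrative, not part of helmet or Next.js:

```typescript
import { randomBytes } from "crypto"

// Hypothetical helper: build a fresh nonce plus the matching CSP header value.
function buildNonceCsp(): { nonce: string; header: string } {
  const nonce = randomBytes(16).toString("base64")
  const header = [
    "default-src 'self'",
    `script-src 'self' 'nonce-${nonce}'`,
    `style-src 'self' 'nonce-${nonce}'`,
  ].join("; ")
  return { nonce, header }
}
```

Every inline script tag the server renders must carry the returned nonce; injected scripts without it are blocked by the browser.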
What happens if you don't use helmet
Without helmet, every Express response ships without the security headers browsers look for. The X-Powered-By: Express header is included by default, advertising your server technology to attackers and making targeted exploitation slightly easier. Pages can be embedded in iframes by third-party sites, enabling clickjacking attacks where an attacker overlays your UI with an invisible malicious layer. Inline script injection from XSS vulnerabilities can execute unconstrained because there's no CSP to block it. None of these risks are theoretical — they're actively exploited in the wild. The cost of adding app.use(helmet()) is one line and zero milliseconds of meaningful latency. The cost of not adding it is a permanently larger attack surface.
What does helmet protect against?
XSS (Cross-Site Scripting):
→ Content-Security-Policy restricts which scripts can execute
→ If attacker injects <script>..., CSP blocks it from running
Clickjacking:
→ X-Frame-Options: DENY prevents your page from being embedded in an iframe
→ Attacker can't overlay an invisible iframe over legitimate UI
MIME sniffing:
→ X-Content-Type-Options: nosniff prevents browsers from guessing file types
→ Attacker can't trick browser into executing a text file as JavaScript
Man-in-the-middle:
→ HSTS forces all connections over HTTPS, even if user types http://
→ Cookies can't be intercepted over plain HTTP
cors
cors — Cross-Origin Resource Sharing:
Basic CORS setup
import cors from "cors"
// Allow all origins (dangerous — only for public APIs):
app.use(cors())
// Allow specific origin:
app.use(cors({
  origin: "https://www.pkgpulse.com",
}))
// Allow multiple origins:
const allowedOrigins = [
  "https://www.pkgpulse.com",
  "https://app.pkgpulse.com",
  ...(process.env.NODE_ENV === "development" ? ["http://localhost:3000"] : []),
]
app.use(cors({
  origin: (origin, callback) => {
    // Allow requests with no origin (like mobile apps, Postman, curl):
    if (!origin) return callback(null, true)
    if (allowedOrigins.includes(origin)) {
      callback(null, true)
    } else {
      callback(new Error(`CORS: ${origin} not allowed`))
    }
  },
  methods: ["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS"],
  allowedHeaders: ["Content-Type", "Authorization", "X-Request-ID"],
  credentials: true, // Allow cookies + Authorization headers
  maxAge: 86400, // Cache preflight for 24 hours
}))
Dynamic origin validation with environment-specific allowlists
In practice, most APIs run across multiple environments — local development, staging, preview deployments, and production — and the set of allowed origins differs in each. A clean pattern is to drive the CORS origin allowlist from environment variables so you don't need to touch code when adding a new staging URL or a Vercel preview domain.
// Parse CORS_ORIGINS env var as a comma-separated list:
const corsOrigins = new Set(
  (process.env.CORS_ORIGINS ?? "https://www.pkgpulse.com")
    .split(",")
    .map((o) => o.trim())
    .filter(Boolean)
)
app.use(cors({
  origin: (origin, callback) => {
    if (!origin) return callback(null, true)
    if (corsOrigins.has(origin)) return callback(null, true)
    // Allow Vercel preview deployments matching your project pattern:
    if (/^https:\/\/pkgpulse-[a-z0-9-]+-team\.vercel\.app$/.test(origin)) {
      return callback(null, true)
    }
    callback(new Error(`CORS: ${origin} not in allowlist`))
  },
  credentials: true,
  maxAge: 86400,
}))
Set CORS_ORIGINS=https://www.pkgpulse.com,https://app.pkgpulse.com in production, and CORS_ORIGINS=http://localhost:3000,http://localhost:3001 in development. The Vercel preview pattern handles branch deployments without needing manual updates.
CORS in microservices
In a microservices architecture, CORS policy management gets more complex. If your API is composed of multiple services behind a single gateway (Nginx, AWS API Gateway, Cloudflare Workers), you typically configure CORS at the gateway level and skip it in individual services — services only receive pre-validated internal traffic. But if services are called directly from the browser (common in micro-frontend setups), each service needs its own CORS configuration.
The most maintainable approach is to extract CORS configuration into a shared package or environment variable so all services use the same allowlist. Inconsistency between service CORS configs is a common source of "it works in production but not staging" bugs, because the service that happened to have the stricter config is the one that blocks the request.
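As a sketch of that shared piece, it can be as small as a factory for the origin callback the cors package accepts. makeOriginChecker is a hypothetical helper, published as an internal package and imported by every service so they all validate against one allowlist:

```typescript
type OriginCallback = (err: Error | null, allow?: boolean) => void

// Hypothetical shared helper: one allowlist, parsed once, used by every service.
function makeOriginChecker(originsCsv: string) {
  const allowlist = new Set(originsCsv.split(",").map((o) => o.trim()).filter(Boolean))
  return (origin: string | undefined, callback: OriginCallback): void => {
    // Non-browser clients (curl, server-to-server) send no Origin header:
    if (!origin || allowlist.has(origin)) return callback(null, true)
    callback(new Error(`CORS: ${origin} not in shared allowlist`))
  }
}
```

Each service then wires it in with `app.use(cors({ origin: makeOriginChecker(process.env.CORS_ORIGINS ?? ""), credentials: true }))`, so changing the allowlist means changing one env var, not N configs.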
Per-route CORS
import cors from "cors"
const publicCors = cors({
  origin: "*", // Any origin
  methods: ["GET"],
})
const privateCors = cors({
  origin: "https://app.pkgpulse.com",
  credentials: true,
})
// Public API — any origin can read:
app.get("/api/packages/public", publicCors, (req, res) => {
  res.json({ packages: [] })
})
// Private API — only app.pkgpulse.com with credentials:
app.post("/api/packages", privateCors, (req, res) => {
  // ...
})
CORS preflight explained
CORS preflight for non-simple requests (POST with JSON, PUT, DELETE, custom headers):
Browser sends OPTIONS first:
OPTIONS /api/packages HTTP/1.1
Origin: https://www.pkgpulse.com
Access-Control-Request-Method: POST
Access-Control-Request-Headers: Content-Type, Authorization
Server responds with CORS headers:
Access-Control-Allow-Origin: https://www.pkgpulse.com
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Access-Control-Allow-Headers: Content-Type, Authorization
Access-Control-Max-Age: 86400 ← Cache this for 24h, don't ask again
Then browser sends the actual request.
The cors package handles both the OPTIONS response AND adding headers to actual requests.
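The preflight half of that job can be approximated in a few lines. This is a simplified model of the decision, not the cors package's actual source:

```typescript
// Return the CORS headers to send for a preflight, or null (no headers) when the
// origin is not allowed — the absence of headers is what makes the browser block the call.
function preflightHeaders(
  origin: string,
  allowedOrigins: Set<string>
): Record<string, string> | null {
  if (!allowedOrigins.has(origin)) return null
  return {
    "Access-Control-Allow-Origin": origin,
    "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
    "Access-Control-Max-Age": "86400", // Let the browser cache this result for 24h
  }
}
```

Note that the server never "rejects" a preflight in the HTTP sense: it simply omits the allow headers, and the browser enforces the block on its side.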
Common CORS mistakes
// ❌ MISTAKE 1: Reflecting the Origin header without validation:
app.use((req, res, next) => {
  res.setHeader("Access-Control-Allow-Origin", req.headers.origin) // Allows ANY origin!
  next()
})
// ❌ MISTAKE 2: Wildcard + credentials (browsers reject this):
app.use(cors({
  origin: "*",
  credentials: true, // Browsers reject: can't combine * with credentials
}))
// Browser error: The value of the 'Access-Control-Allow-Origin' header in the response
// must not be the wildcard '*' when the request's credentials mode is 'include'.
// ✅ CORRECT: Explicit origin + credentials:
app.use(cors({
  origin: "https://www.pkgpulse.com",
  credentials: true,
}))
Another common mistake is omitting maxAge or setting it too short. Browsers send a preflight OPTIONS request before every non-simple cross-origin call unless you tell them to cache the result. Without maxAge: 86400, every API call from the browser generates two HTTP requests — the preflight and the actual call — doubling your server's request volume and adding 100–200ms of latency on every request. Setting maxAge to 24 hours eliminates this overhead for the vast majority of calls at no cost.
express-rate-limit
express-rate-limit — IP-based request rate limiting:
Basic rate limiting
import rateLimit from "express-rate-limit"
// Global rate limit:
const globalLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15-minute window
  max: 100, // Max 100 requests per window per IP
  message: { error: "Too many requests, please try again later." },
  standardHeaders: true, // RateLimit-Limit, RateLimit-Remaining, RateLimit-Reset headers
  legacyHeaders: false, // Disable X-RateLimit-* headers
})
app.use(globalLimiter)
Sliding window vs fixed window
express-rate-limit defaults to a fixed window algorithm: it counts requests in discrete time windows (e.g., 0:00–0:15, 0:15–0:30). At the boundary between windows, the counter resets to zero, which means a client can send max requests at 0:14:59 and another max requests at 0:15:01 — effectively doubling the allowed rate at window boundaries.
A sliding window algorithm tracks each request's timestamp individually and counts only requests within the rolling past N minutes. This prevents the boundary-doubling problem but requires more memory (or a sorted set in Redis). Note that express-rate-limit uses a fixed window regardless of which store you plug in; a true sliding window requires a different library (such as rate-limiter-flexible) or custom Redis logic. For most APIs, fixed window is fine — boundary-doubling is a theoretical concern that rarely affects real traffic patterns. For auth endpoints where even a brief burst of attempts is dangerous, reach for a sliding-window limiter backed by Redis.
// Fixed window (default) — fine for general API routes:
const apiLimiter = rateLimit({
  windowMs: 60 * 1000,
  max: 60,
  standardHeaders: true,
  legacyHeaders: false,
})
// Tighter limit for burst-sensitive endpoints:
const authLimiter = rateLimit({
  windowMs: 5 * 60 * 1000, // 5-minute window
  max: 10,
  skipSuccessfulRequests: true,
})
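The boundary-doubling difference can be made concrete with a minimal in-memory sliding-window counter. This is an illustrative sketch of the algorithm, not what express-rate-limit (or any particular Redis store) implements:

```typescript
// Keeps a timestamp per request and counts only those inside the rolling window,
// so a burst straddling a window boundary cannot double the allowed rate.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>()
  constructor(private windowMs: number, private limit: number) {}

  allow(key: string, now = Date.now()): boolean {
    const cutoff = now - this.windowMs
    // Drop timestamps that have slid out of the window:
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff)
    if (recent.length >= this.limit) {
      this.hits.set(key, recent)
      return false
    }
    recent.push(now)
    this.hits.set(key, recent)
    return true
  }
}
```

The memory cost is visible here: one timestamp per request per key, versus a single counter per key for a fixed window — which is why production sliding windows usually live in a Redis sorted set rather than process memory.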
Choosing the right windowMs
The windowMs / max pairing depends on your use case. General API routes can handle higher limits over longer windows (60 requests per minute is comfortable for most authenticated users). Auth routes need tight limits because even a short burst of attempts is dangerous. Password reset endpoints should have the tightest limits since they can be used for account enumeration. A common pattern: set the global limit generously, tighten it significantly for anything authentication-related.
app.post("/auth/login", authLimiter, loginHandler)
app.post("/auth/register", authLimiter, registerHandler)
app.post("/auth/forgot-password", rateLimit({ windowMs: 60 * 60 * 1000, max: 3 }), forgotPasswordHandler)
const apiLimiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 60, // 60 requests/minute
})
app.use("/api/", apiLimiter)
What happens when you scale horizontally without Redis
The default in-memory store for express-rate-limit stores counters in the Node.js process's heap. This works fine for a single-instance deployment. When you scale to multiple instances behind a load balancer, each instance maintains its own independent counter. A client hitting instance A 60 times and instance B 60 times has effectively made 120 requests while each instance thinks it's seen only 60. The rate limit becomes meaningless.
Any production deployment with more than one server instance needs the Redis store. The rate-limit-redis package bridges express-rate-limit to a Redis connection, giving all instances access to shared counters. This adds a Redis round-trip to every rate-limited request (typically 1–2ms), which is negligible compared to the cost of actually processing the request.
import rateLimit from "express-rate-limit"
import RedisStore from "rate-limit-redis"
import { createClient } from "redis"
const redisClient = createClient({ url: process.env.REDIS_URL })
await redisClient.connect()
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  standardHeaders: true,
  legacyHeaders: false,
  store: new RedisStore({
    sendCommand: (...args: string[]) => redisClient.sendCommand(args),
  }),
  keyGenerator: (req) => req.ip ?? "unknown",
})
app.use(limiter)
Custom response and skip logic
import rateLimit from "express-rate-limit"
const limiter = rateLimit({
  windowMs: 60 * 1000,
  max: 60,
  // Skip rate limiting for authenticated users with API keys:
  skip: (req) => {
    const apiKey = req.headers["x-api-key"]
    return apiKey === process.env.INTERNAL_API_KEY
  },
  // Custom response when limit exceeded:
  handler: (req, res) => {
    const { limit, remaining, resetTime } = req.rateLimit
    res.status(429).json({
      error: "Rate limit exceeded",
      // resetTime is a Date — convert to seconds until the window resets:
      retryAfter: resetTime ? Math.max(0, Math.ceil((resetTime.getTime() - Date.now()) / 1000)) : undefined,
      limit,
      remaining,
    })
  },
})
Full Production Middleware Stack
import express from "express"
import helmet from "helmet"
import cors from "cors"
import rateLimit from "express-rate-limit"
import RedisStore from "rate-limit-redis"
import { createClient } from "redis"
import compression from "compression"
const app = express()
const redisClient = createClient({ url: process.env.REDIS_URL })
await redisClient.connect()
// Trust proxy (Nginx, Cloudflare):
app.set("trust proxy", 1)
// 1. Compression (before security headers):
app.use(compression())
// 2. Security headers:
app.use(
  helmet({
    contentSecurityPolicy: {
      directives: {
        defaultSrc: ["'self'"],
        scriptSrc: ["'self'"],
        styleSrc: ["'self'"],
        imgSrc: ["'self'", "data:", "https:"],
        connectSrc: ["'self'"],
        upgradeInsecureRequests: [],
      },
    },
  })
)
// 3. CORS:
app.use(
  cors({
    origin: (origin, cb) => {
      const allowed = new Set([
        "https://www.pkgpulse.com",
        "https://app.pkgpulse.com",
        ...(process.env.NODE_ENV !== "production" ? ["http://localhost:3000"] : []),
      ])
      cb(null, !origin || allowed.has(origin))
    },
    credentials: true,
    methods: ["GET", "POST", "PUT", "DELETE", "OPTIONS"],
    allowedHeaders: ["Content-Type", "Authorization"],
    maxAge: 86400,
  })
)
// 4. Global rate limit:
app.use(
  rateLimit({
    windowMs: 15 * 60 * 1000,
    max: 200,
    standardHeaders: true,
    legacyHeaders: false,
    store: new RedisStore({ sendCommand: (...a: string[]) => redisClient.sendCommand(a) }),
  })
)
// 5. Body parsing:
app.use(express.json({ limit: "10mb" }))
app.use(express.urlencoded({ extended: true, limit: "10mb" }))
// 6. Routes:
app.use("/api", apiRoutes)
Observability: Logging Rate Limit Hits
Rate limit hits are security signals, not just traffic management. A sudden spike in 429 responses from a single IP is a sign of an ongoing brute force attempt. A distributed burst from many IPs could indicate a coordinated attack or a leaked credential being tested at scale. If you aren't logging rate limit hits, you're flying blind.
The handler callback in express-rate-limit fires on every blocked request, making it the right place to emit a structured log or increment a metric:
import rateLimit from "express-rate-limit"
import { logger } from "./logger" // Your structured logger (pino, winston, etc.)
const authLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 10,
  handler: (req, res) => {
    // Log the rate limit hit with context:
    logger.warn({
      event: "rate_limit_exceeded",
      ip: req.ip,
      path: req.path,
      userAgent: req.headers["user-agent"],
      timestamp: new Date().toISOString(),
    })
    res.status(429).json({
      error: "Too many attempts. Please wait before trying again.",
    })
  },
})
Sending these events to a log aggregation platform (Datadog, Grafana Loki, AWS CloudWatch) lets you set alerts: "if rate_limit_exceeded events from any single IP exceed 50 in 5 minutes, page on-call." This turns your rate limiter from a passive defense into an active early-warning system.
Testing Your Security Middleware
Security middleware is only useful if it actually works as configured. Supertest makes it straightforward to write integration tests that verify your headers and rate limit behavior without a running server:
import request from "supertest"
import app from "./app"
describe("security headers", () => {
  it("includes helmet headers", async () => {
    const res = await request(app).get("/api/packages")
    expect(res.headers["x-content-type-options"]).toBe("nosniff")
    // helmet's default frameguard sends SAMEORIGIN; expect "DENY" only if you configured it:
    expect(res.headers["x-frame-options"]).toBe("SAMEORIGIN")
    expect(res.headers["strict-transport-security"]).toMatch(/max-age=/)
    expect(res.headers["x-powered-by"]).toBeUndefined()
  })

  it("rejects disallowed CORS origins", async () => {
    const res = await request(app)
      .get("/api/packages")
      .set("Origin", "https://evil.com")
    // Depending on config, the header is absent or pinned to an allowed origin — never the attacker's:
    expect(res.headers["access-control-allow-origin"]).not.toBe("https://evil.com")
  })
})

describe("rate limiting", () => {
  it("returns 429 after exceeding limit", async () => {
    // Make requests up to the limit:
    for (let i = 0; i < 10; i++) {
      await request(app).post("/auth/login").send({ email: "x", password: "y" })
    }
    // The next request should be rate-limited:
    const res = await request(app)
      .post("/auth/login")
      .send({ email: "x", password: "y" })
    expect(res.status).toBe(429)
    expect(res.headers["ratelimit-remaining"]).toBe("0")
  })
})
Security Headers Audit Tools
After deploying helmet, verify that your headers are configured correctly using these free tools:
securityheaders.com — paste your domain and get a letter grade (A+ through F) based on which security headers are present and correctly configured. It checks CSP, HSTS, X-Frame-Options, Referrer-Policy, Permissions-Policy, and newer headers like Cross-Origin-Opener-Policy. This is the fastest way to find misconfigured or missing headers in production.
Mozilla Observatory — observatory.mozilla.org — broader than securityheaders.com. In addition to headers, it checks HTTPS configuration, cookie security flags, subresource integrity, and redirects. Produces a scored report with specific remediation recommendations. A good baseline target is a B+ or higher.
Running either tool after your initial helmet deployment will often surface a CSP directive that's too broad, an HSTS maxAge that's too short, or a missing Permissions-Policy header. These tools are free, run in a browser, and require no setup — there's no reason not to check your headers before going live.
Common Configuration Mistakes to Avoid
Several helmet, cors, and express-rate-limit patterns look reasonable but introduce real security or reliability problems:
Setting origin: "*" combined with credentials: true in cors doesn't just fail silently — the browser actively blocks the credentialed request and shows a confusing error in the console. Always pair credential support with an explicit origin list.
Using 'unsafe-inline' and 'unsafe-eval' in the helmet CSP script-src directive defeats most of what CSP protects against. If your SPA requires these because of how it was built, the right fix is to migrate to a nonce-based CSP (for inline scripts) or eliminate inline evaluation (for eval). The path of least resistance is to ship a broken CSP that doesn't block much — resist it.
Running express-rate-limit behind a reverse proxy (Cloudflare, Nginx) without configuring Express's app.set("trust proxy", ...) causes the rate limiter to see the proxy's IP address instead of the client's IP. You'd be rate-limiting all users collectively instead of individually. Set app.set("trust proxy", 1) for a single reverse proxy layer, and log the key the limiter actually uses (for example, inside a custom keyGenerator) during initial deployment to confirm it sees real client IPs.
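The hop-counting behavior can be sketched as a pure function. This is a simplified model of what Express does when trust proxy is 1, not its actual implementation:

```typescript
// With trust proxy = 1, Express trusts exactly one hop: the rightmost
// X-Forwarded-For entry (the one appended by your own proxy) is the client IP.
function clientIpBehindOneProxy(xForwardedFor: string, socketAddress: string): string {
  const hops = xForwardedFor.split(",").map((s) => s.trim()).filter(Boolean)
  // No forwarded header means a direct connection — fall back to the socket peer:
  return hops.length > 0 ? hops[hops.length - 1] : socketAddress
}
```

This is also why trust proxy matters for security: with it misconfigured (trusting too many hops), a client can spoof earlier X-Forwarded-For entries and rotate its apparent IP to dodge the rate limiter.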
Feature Comparison
| Feature | helmet | cors | express-rate-limit |
|---|---|---|---|
| Attack type | XSS, clickjacking, MIME sniffing | Cross-origin abuse, data leakage | Brute force, DoS |
| Config complexity | Low | Medium | Low-Medium |
| Required for all APIs? | Yes | Usually | Yes |
| Distributed support | N/A | N/A | Yes (Redis store) |
| Per-route config | Yes | Yes | Yes |
| TypeScript | Yes | Yes | Yes |
| Weekly downloads | ~3M | ~15M | ~3M |
When to Use Each
Use all three — they protect against different attack types. This is not an either-or choice.
helmet is non-negotiable: Add it to every Express app. The default helmet() takes one line and prevents a wide range of attacks with zero downside.
cors requires thought: Be explicit about your allowed origins. Never use origin: "*" for APIs that handle authentication. Always list your exact frontend domain(s).
express-rate-limit prevents abuse: At minimum, apply stricter limits to auth routes (login, register, password reset). Apply a global limit to all API routes. Use Redis store when running multiple instances.
CORS Misconfigurations: The Most Common Security Bug
CORS is the most commonly misconfigured of the three middleware packages, and the mistakes range from overly restrictive (breaking legitimate cross-origin requests) to severely permissive (allowing any website to make authenticated requests on behalf of your users).
The most dangerous misconfiguration is dynamically reflecting the Origin header without validation: setting Access-Control-Allow-Origin to whatever origin the request came from. This pattern is sometimes used by developers who want to "just make CORS work" without thinking through the implications. It defeats the entire purpose of CORS — any website can now make authenticated requests to your API using a logged-in user's credentials via their browser.
The second most common mistake is using origin: "*" combined with credentials: true. Browsers actually reject this combination and will refuse to send the credentialed request, but developers sometimes encounter this restriction and work around it in ways that introduce different vulnerabilities. The correct solution is always an explicit origin allowlist with credentials: true.
The third common mistake is setting maxAge too short or omitting it entirely — browsers then fall back to a very short default preflight cache (the Fetch spec default is 5 seconds). Without a long maxAge, browsers send a preflight OPTIONS request before every non-simple cross-origin call. This adds 100–200ms of latency to every API call from browsers and doubles the request volume on your server.
Methodology
Download data from npm registry (weekly average, February 2026). Feature comparison based on helmet v8.x, cors v2.x, and express-rate-limit v7.x.
Compare security and middleware packages on PkgPulse →
See also: supertest vs fastify inject vs hono testing, h3 vs polka vs koa lightweight HTTP frameworks, and Express vs Koa.