Cloudflare Workers vs Lambda@Edge vs Deno Deploy 2026
Cloudflare Workers runs your code in 300+ locations with zero cold starts — V8 isolates start in under 1ms. Lambda@Edge runs in ~20 CloudFront edge locations with 100-1000ms cold starts for container-based functions. Deno Deploy runs in 35+ regions with Deno 2's improved npm compatibility. Edge computing in 2026 is past the hype phase — these are production platforms with real tradeoffs.
TL;DR
Cloudflare Workers for the fastest edge execution globally — zero cold starts, 300+ PoPs, excellent developer tooling (Wrangler), and a full platform ecosystem (KV, D1, R2, Durable Objects). Lambda@Edge when you're deeply invested in AWS and need to manipulate CloudFront distributions — not truly edge in the compute sense. Deno Deploy for Deno-first development with excellent TypeScript tooling, sub-second deployments, and Web Standards compliance. For most new edge computing projects, Cloudflare Workers is the default choice.
Key Takeaways
- Cloudflare Workers: 300+ global PoPs, zero cold starts (<1ms), V8 isolates, 10ms CPU limit (free) / 50ms (paid default)
- Lambda@Edge: ~20 CloudFront locations, 100-1000ms cold starts, Node.js or Python, 5s limit for viewer triggers (30s for origin triggers)
- Deno Deploy: 35+ regions, near-zero cold starts (Deno V8 isolates), Deno 2 npm compatibility
- Workers free tier: 100K requests/day, 10ms CPU time per request
- Lambda@Edge pricing: $0.60/million requests + execution duration
- Deno Deploy free tier: 100K requests/day, 50ms CPU
- Workers and Deno Deploy use Web Standards (Fetch, Request, Response, URL) as the API; Lambda@Edge hands you CloudFront event objects instead
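Because Workers and Deno Deploy share the Web Standards surface, a handler written against `Request`/`Response` is portable between them, with only the deployment glue (`export default { fetch }` vs `Deno.serve`) changing. A minimal sketch, runnable in Node 18+, Deno, or the Workers runtime:

```typescript
// A portable Web Standards handler: `(Request) => Promise<Response>`
// is the shared contract on Workers and Deno Deploy.
async function handle(request: Request): Promise<Response> {
  const url = new URL(request.url);
  if (url.pathname === "/hello") {
    return Response.json({
      greeting: "hello",
      name: url.searchParams.get("name") ?? "world",
    });
  }
  return new Response("Not found", { status: 404 });
}

// Exercise it directly — Request/Response are globals in Node 18+,
// Deno, and the Workers runtime, so no platform SDK is needed to test.
const ok = await handle(new Request("https://example.com/hello?name=edge"));
const body = await ok.json();
const miss = await handle(new Request("https://example.com/nope"));
```

Testing handlers this way, with plain `Request` objects and no emulator, is a practical side benefit of the Web Standards model.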
The Edge Computing Landscape
"Edge" means different things:
CDN edge (Lambda@Edge): Code runs on CDN nodes, but often still with containers and cold start overhead. Good for cache manipulation and simple header transformations.
True edge compute (Cloudflare Workers, Deno Deploy): V8 isolates that spin up in microseconds, no container overhead, run at hundreds of locations globally.
The V8 isolate model is the key architectural difference. Isolates share a V8 engine instance, so spinning up a new execution context costs less than 1ms, compared to container-based functions, which take hundreds of milliseconds to seconds.
Cloudflare Workers
Runtime: V8 isolates (Workers runtime)
Locations: 300+ global Points of Presence
Free tier: 100K requests/day, 10ms CPU, 128 MB memory
Paid: $5/month + $0.50 per million requests
What Workers Can Do
```javascript
// Basic Worker — runs at 300+ edge locations
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    if (url.pathname === '/api/users') {
      const users = await env.DB.prepare(
        'SELECT * FROM users LIMIT 100'
      ).all();
      return Response.json(users.results);
    }
    return new Response('Not found', { status: 404 });
  },
};
```
Hono on Workers
```typescript
// index.ts — Hono is the standard framework for Workers
import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { logger } from 'hono/logger';

const app = new Hono<{ Bindings: Env }>();

app.use('*', cors());
app.use('*', logger());

app.get('/api/users/:id', async (c) => {
  const id = c.req.param('id');
  const user = await c.env.DB
    .prepare('SELECT * FROM users WHERE id = ?')
    .bind(id)
    .first();
  if (!user) return c.json({ error: 'Not found' }, 404);
  return c.json(user);
});

app.post('/api/users', async (c) => {
  const body = await c.req.json();
  const { meta } = await c.env.DB
    .prepare('INSERT INTO users (name, email) VALUES (?, ?)')
    .bind(body.name, body.email)
    .run();
  return c.json({ id: meta.last_row_id }, 201);
});

export default app;
```
Workers Platform Ecosystem
```typescript
// env.d.ts — Cloudflare platform bindings
interface Env {
  // D1: SQLite database at the edge
  DB: D1Database;
  // KV: key-value store at the edge
  CACHE: KVNamespace;
  // R2: S3-compatible object storage
  BUCKET: R2Bucket;
  // Durable Objects: stateful coordination at the edge
  COUNTER: DurableObjectNamespace;
  // Workers AI: run inference at the edge
  AI: Ai;
}
```
Workers isn't just a runtime — it's a platform:
- D1: SQLite database running at the edge (global replication)
- KV: Global key-value storage with millisecond reads
- R2: S3-compatible blob storage with no egress fees
- Durable Objects: Stateful coordination at the edge
- AI: Run models (LLaMA, Whisper, SDXL) at the edge without external API calls
- Queues: Message queuing between Workers
- Cron Triggers: Scheduled Workers execution
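A common way these pieces combine is cache-aside over KV: read from the edge-local KV namespace first, and only fall through to D1 or the origin on a miss. The sketch below is hedged: `KVLike` mirrors only the subset of the real `KVNamespace` binding used here (`get`/`put` with a TTL), and the in-memory mock exists solely so the pattern can run outside the Workers runtime.

```typescript
// Cache-aside over a Workers KV binding (sketch).
// KVLike is an assumption: a minimal slice of the real KVNamespace API.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

async function cached<T>(
  kv: KVLike,
  key: string,
  ttlSeconds: number,
  load: () => Promise<T>,
): Promise<T> {
  const hit = await kv.get(key);
  if (hit !== null) return JSON.parse(hit); // edge-local read, millisecond latency
  const value = await load();               // miss: fall through to D1/origin
  await kv.put(key, JSON.stringify(value), { expirationTtl: ttlSeconds });
  return value;
}

// In-memory stand-in so the pattern is runnable outside the Workers runtime.
const store = new Map<string, string>();
const mockKV: KVLike = {
  async get(k) { return store.get(k) ?? null; },
  async put(k, v) { store.set(k, v); },
};

let loads = 0;
const loadUser = async () => { loads++; return { id: 1, name: "Ada" }; };
const first = await cached(mockKV, "users:1", 60, loadUser);
const second = await cached(mockKV, "users:1", 60, loadUser); // served from cache
```

In a real Worker, `mockKV` would be replaced by the `env.CACHE` binding declared in the interface above.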
Wrangler: Local Development
```bash
# Wrangler accurately emulates the entire platform locally
npm install -D wrangler
npx wrangler dev      # Local server with D1, KV, R2 emulation

# Deploy to production
npx wrangler deploy

# Tail real-time logs
npx wrangler tail
```
Workers Limits
| Limit | Free | Paid |
|---|---|---|
| CPU time per request | 10ms | 50ms by default, configurable up to 30s |
| Memory | 128 MB | 128 MB |
| Request size | 100 MB | 500 MB |
| Requests per day | 100K | Unlimited ($0.50/M) |
| Script size | 1 MB | 10 MB |
Workers Limitations
- 50ms CPU limit (not wall time — I/O doesn't count)
- No persistent local filesystem
- No native Node.js APIs (must use Web Standards equivalents)
- Workers can't run long computational tasks
Lambda@Edge
Runtime: Node.js 20, Python 3.12 (AWS Lambda)
Locations: ~20 CloudFront edge locations
Pricing: $0.60/million requests + $0.00005001/GB-second
Architecture
Lambda@Edge attaches Lambda functions to CloudFront distributions at four event types:
```javascript
// Viewer Request: runs on every request, before the cache check
// Origin Request: runs on cache miss, before hitting the origin
// Origin Response: runs after the origin responds, before caching
// Viewer Response: runs on every response to the viewer

exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  const headers = request.headers;

  // Reject requests without an Authorization header
  if (!headers.authorization) {
    return {
      status: '403',
      statusDescription: 'Forbidden',
      body: 'Unauthorized',
    };
  }
  return request; // Pass through to CloudFront
};
```
What Lambda@Edge Is Actually Good For
```javascript
// Rewriting URLs for A/B testing
// (a production A/B test would also set a cookie so users stick to one variant)
exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  if (Math.random() < 0.5) {
    request.uri = '/variant-b' + request.uri;
  }
  return request;
};
```

```javascript
// Adding security headers on Viewer Response
exports.handler = async (event) => {
  const response = event.Records[0].cf.response;
  response.headers['strict-transport-security'] = [{
    key: 'Strict-Transport-Security',
    value: 'max-age=31536000; includeSubDomains; preload',
  }];
  response.headers['x-content-type-options'] = [{
    key: 'X-Content-Type-Options',
    value: 'nosniff',
  }];
  return response;
};
```
Lambda@Edge Limitations
- ~20 locations (not truly global edge — more like "regional edge")
- Cold starts: Container-based, 100ms-1s cold start
- Complexity: Must deploy through AWS CloudFormation or SAM
- No persistent state: Each Lambda is stateless
- Region restriction: functions must be deployed in us-east-1; CloudFront replicates them to edge locations
- Limited runtimes: only Node.js and Python are supported (no containers or custom runtimes)
Lambda@Edge is best for CloudFront-specific manipulation — not for building full APIs.
Deno Deploy
Runtime: Deno 2 (V8 isolates)
Locations: 35+ global regions
Free tier: 100K requests/day, 50ms CPU
Paid: $20/month (Pro)
Creator: Deno Land Inc.
What Sets Deno Deploy Apart
Deno 2 brings improved npm compatibility — the major historical limitation of Deno. Most npm packages now work:
```typescript
// Deno Deploy — TypeScript is native, no config
import { Hono } from "npm:hono"; // npm packages work
// (https:// URL imports and jsr: imports work too)

const app = new Hono();

app.get("/", (c) => c.text("Hello from Deno Deploy!"));

app.get("/api/users", async (c) => {
  // Fetch from an external API (Web Standards fetch)
  const response = await fetch("https://jsonplaceholder.typicode.com/users");
  const users = await response.json();
  return c.json(users);
});

Deno.serve(app.fetch);
```
Deployment Speed
Deno Deploy's headline feature: sub-second deployments.
```bash
# Install the Deno Deploy CLI
deno install -A --global jsr:@deno/deployctl

# Deploy (< 1 second to propagate globally)
deployctl deploy --project=my-project main.ts
# Deployed in 0.3 seconds

# Typical propagation times for comparison:
# Cloudflare Workers: ~5-30s
# Vercel Edge:        ~30-60s
# Lambda@Edge:        ~90-300s
```
Deno 2: npm Compatibility
```typescript
// Deno 2 npm compatibility
import express from "npm:express";                 // Works
import { z } from "npm:zod";                       // Works
import { PrismaClient } from "npm:@prisma/client"; // Works (with caveats)

// Standard library (published on JSR)
import { delay } from "jsr:@std/async/delay";

// Or use package.json + an import map:
import { Hono } from "hono"; // Node.js-style bare specifier
```
Limitations
- Smaller ecosystem than Cloudflare Workers (fewer built-in storage options)
- Deno compatibility issues with some complex npm packages
- Less community adoption and fewer tutorials than Workers
- No equivalent to Cloudflare's D1, R2, KV as first-party products
Platform Comparison
| Feature | Cloudflare Workers | Lambda@Edge | Deno Deploy |
|---|---|---|---|
| Global PoPs | 300+ | ~20 | 35+ |
| Cold starts | Zero (<1ms) | 100-1000ms | Near-zero |
| Free requests/day | 100K | None (excluded from the AWS free tier) | 100K |
| CPU limit | 50ms paid, 10ms free | 5 seconds | 50ms |
| Built-in storage | D1, KV, R2, DO | S3, DynamoDB (manual) | Deno KV |
| Framework | Hono (standard) | Node.js frameworks | Hono, Fresh |
| TypeScript | Via wrangler/esbuild | Via a build step (tsc/esbuild) | Native |
| npm compat | ~95% | Full (Node.js) | ~90% (Deno 2) |
| Deployment speed | ~10-30s | ~90-300s | <1s |
Choosing Your Edge Platform
Choose Cloudflare Workers if:
- Maximum global coverage (300+ locations) matters
- You need the full platform: D1, KV, R2, Durable Objects, AI
- Zero cold starts are required
- You're building production APIs, not just CDN manipulation
Choose Lambda@Edge if:
- You're heavily invested in AWS and using CloudFront
- You need CloudFront-specific manipulation (URL rewrites, header injection)
- You want full Node.js compatibility at the CDN layer
- Your use case is cache control and CDN manipulation, not full applications
Choose Deno Deploy if:
- You're using Deno for development
- Sub-second global deployments are important (CI/CD)
- TypeScript-native development without build steps is preferred
- You want Web Standards compliance with good npm compatibility
Pricing at Scale and Cost Modeling
Edge computing pricing has three components: request count, CPU execution time, and egress bandwidth. Cloudflare Workers' pricing is dominated by request count ($0.50 per million requests on the paid plan) with CPU time playing a secondary role for the vast majority of workloads — the 50ms CPU limit per request means most handlers complete well within the included CPU budget. For a service processing 50 million requests per month, the Workers cost is approximately $25/month plus D1, KV, and R2 usage. This is exceptionally competitive with equivalent Lambda + API Gateway configurations in AWS, which typically cost 5-10x more at the same request volume due to API Gateway's per-request pricing.
Lambda@Edge pricing adds CloudFront's pricing on top of Lambda execution costs. The $0.60/million request fee is for Lambda invocations, but CloudFront itself charges separately for data transfer, cache invalidations, and HTTP requests to the origin. A typical architecture with Lambda@Edge for authentication and header manipulation might see CloudFront charges doubling the visible Lambda cost. Teams that already have CloudFront deployments for other reasons absorb this cost more naturally than teams adopting it purely for edge compute.
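The arithmetic above can be made explicit with a back-of-envelope model using the rates quoted in this article. One assumption to verify against current Cloudflare pricing: the $5 Workers paid plan is modeled as including 10M requests/month. Storage, execution duration, and CloudFront data transfer are deliberately excluded, so the Lambda@Edge figure is a floor, not a total.

```typescript
// Request-cost model using the per-million rates quoted in the article.
// ASSUMPTION: the $5 Workers plan includes 10M requests/month.
function workersMonthlyCost(requestsMillions: number): number {
  const includedMillions = 10; // assumed allowance in the $5 base plan
  return 5 + Math.max(0, requestsMillions - includedMillions) * 0.5;
}

// Lambda@Edge invocation fee only — GB-second duration charges and
// CloudFront data transfer are billed on top and excluded here.
function lambdaEdgeRequestCost(requestsMillions: number): number {
  return requestsMillions * 0.6;
}

const workers50M = workersMonthlyCost(50);     // 5 + 40 * 0.5 = 25
const lambda50M = lambdaEdgeRequestCost(50);   // 50 * 0.6 = 30, before duration/CDN
```

At 50M requests/month the model reproduces the ~$25 Workers figure cited above, and shows Lambda@Edge already costing more on invocations alone before CloudFront charges are added.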
Local Development Experience and Testing Workflows
The local development story matters significantly for productivity. Cloudflare Workers' Wrangler CLI provides the most complete local emulation with wrangler dev: it simulates the entire Workers runtime including D1 SQLite databases, KV namespace reads and writes, R2 bucket operations, and Durable Object coordination. Local D1 uses SQLite under the hood, which means your SQL queries behave identically locally and in production. The miniflare package (which Wrangler uses internally) is also available as a standalone library for integration testing, letting you programmatically create Workers contexts and assert on responses in your test suite.
Lambda@Edge's local development experience is the weakest of the three. AWS SAM (Serverless Application Model) can simulate CloudFront + Lambda@Edge locally, but it requires Docker and a complex YAML configuration just to invoke a function. The CloudFront event structure — with its Records[0].cf.request shape — must be constructed manually in test fixtures. Most teams end up testing Lambda@Edge functions by deploying to a staging CloudFront distribution, which adds a 3-10 minute feedback loop. Unit testing individual functions by mocking the event structure works but requires maintaining accurate mock objects as AWS updates the event shape.
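Mocking that event structure is mechanical in practice. Below is a sketch of a unit test for a viewer-request auth handler in the style shown earlier, with a hand-built fixture for the `Records[0].cf.request` shape. The `viewerRequestEvent` helper and the type names are illustrative, not AWS APIs; only the event shape itself comes from the CloudFront documentation.

```typescript
// Hand-rolled types for the documented CloudFront viewer-request event shape.
type CfHeaders = Record<string, { key?: string; value: string }[]>;
interface CfRequestEvent {
  Records: { cf: { request: { uri: string; headers: CfHeaders } } }[];
}

// The auth-check handler under test (same logic as the earlier example).
const handler = async (event: CfRequestEvent) => {
  const request = event.Records[0].cf.request;
  if (!request.headers.authorization) {
    return { status: "403", statusDescription: "Forbidden", body: "Unauthorized" };
  }
  return request; // pass through to CloudFront
};

// Fixture factory — keep this in sync with the event shape AWS documents.
function viewerRequestEvent(headers: CfHeaders): CfRequestEvent {
  return { Records: [{ cf: { request: { uri: "/", headers } } }] };
}

const denied = await handler(viewerRequestEvent({}));
const allowed = await handler(viewerRequestEvent({
  authorization: [{ key: "Authorization", value: "Bearer token" }],
}));
```

This runs entirely offline, which shortens the feedback loop from minutes (staging CloudFront deploys) to milliseconds, at the cost of maintaining the fixture as AWS evolves the event shape.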
Deno Deploy's local development uses the deployctl CLI in development mode or the standard deno run command. Because Deno's runtime is identical between local and production (both use the same V8 isolate model with Deno's standard library), the local development experience closely mirrors production behavior. The main gap is Deno KV: while available locally using a file-backed SQLite store, the production Deno KV uses a distributed Foundation DB backend with different consistency characteristics. For applications that rely heavily on KV read/write ordering guarantees, testing must account for this difference.
Compare edge platform libraries on PkgPulse.
See also: Cloudflare Workers vs Vercel Edge vs Lambda 2026, Hono vs itty-router: Edge-First API Frameworks Compared, and Best npm Packages for Edge Runtimes in 2026.
See the live comparison
View cloudflare workers vs. lambda edge vs deno deploy on PkgPulse →