Cloudflare Workers vs Lambda@Edge vs Deno Deploy: Edge Computing 2026
Cloudflare Workers runs your code in 300+ locations with zero cold starts — V8 isolates start in under 1ms. Lambda@Edge runs in ~20 CloudFront edge locations with 100-1000ms cold starts for container-based functions. Deno Deploy runs in 35+ regions with Deno 2's improved npm compatibility. Edge computing in 2026 is past the hype phase — these are production platforms with real tradeoffs.
TL;DR
Cloudflare Workers for the fastest edge execution globally — zero cold starts, 300+ PoPs, excellent developer tooling (Wrangler), and a full platform ecosystem (KV, D1, R2, Durable Objects). Lambda@Edge when you're deeply invested in AWS and need to manipulate CloudFront distributions — not truly edge in the compute sense. Deno Deploy for Deno-first development with excellent TypeScript tooling, sub-second deployments, and Web Standards compliance. For most new edge computing projects, Cloudflare Workers is the default choice.
Key Takeaways
- Cloudflare Workers: 300+ global PoPs, zero cold starts (<1ms), V8 isolates, 10ms CPU limit (free), 50ms default (paid)
- Lambda@Edge: ~20 CloudFront locations, 100-1000ms cold starts, Node.js or Python, 5s timeout on viewer triggers (30s on origin triggers)
- Deno Deploy: 35+ regions, near-zero cold starts (Deno V8 isolates), Deno 2 npm compatibility
- Workers free tier: 100K requests/day, 10ms CPU time per request
- Lambda@Edge pricing: $0.60/million requests + execution duration
- Deno Deploy free tier: 100K requests/day, 50ms CPU
- All three: Web Standards (Fetch, Request, Response, URL) as the API
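Because all three platforms expose the same Web Standard primitives, request-handling logic can be written once against `Request`/`Response` and wrapped per platform. A minimal sketch (the route and handler names here are illustrative, not from any platform's API):

```typescript
// A platform-agnostic handler using only Web Standard APIs.
// The same function can be exported for Workers or Deno Deploy,
// or adapted to Lambda@Edge's event format.
async function handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);
  if (url.pathname === "/health") {
    return Response.json({ ok: true });
  }
  return new Response("Not found", { status: 404 });
}
```

Keeping handlers in this shape is what makes code portable across edge runtimes: only the thin outer wrapper is platform-specific.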
The Edge Computing Landscape
"Edge" means different things:
CDN edge (Lambda@Edge): Code runs on CDN nodes, but often still with containers and cold start overhead. Good for cache manipulation and simple header transformations.
True edge compute (Cloudflare Workers, Deno Deploy): V8 isolates that spin up in microseconds, no container overhead, run at hundreds of locations globally.
The V8 isolate model is the key architectural difference. Isolates share a V8 engine instance, so spinning up a new execution context costs less than 1ms — compared to containers which need seconds.
Cloudflare Workers
Runtime: V8 isolates (Workers runtime)
Locations: 300+ global Points of Presence
Free tier: 100K requests/day, 10ms CPU, 128 MB memory
Paid: $5/month + $0.50 per million requests
What Workers Can Do
```js
// Basic Worker — runs at 300+ edge locations
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    if (url.pathname === '/api/users') {
      const users = await env.DB.prepare(
        'SELECT * FROM users LIMIT 100'
      ).all();
      return Response.json(users.results);
    }
    return new Response('Not found', { status: 404 });
  },
};
```
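The `env.DB` binding above is declared in Wrangler's config file, not in code. A hypothetical `wrangler.toml` wiring a D1 database to the `DB` binding might look like this (the name, path, and ID are placeholders):

```toml
name = "my-worker"
main = "src/index.js"
compatibility_date = "2026-01-01"

[[d1_databases]]
binding = "DB"          # exposed as env.DB inside the Worker
database_name = "my-db"
database_id = "<your-d1-database-id>"
```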
Hono on Workers
```ts
// index.ts — Hono is the standard framework for Workers
import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { logger } from 'hono/logger';

const app = new Hono<{ Bindings: Env }>();

app.use('*', cors());
app.use('*', logger());

app.get('/api/users/:id', async (c) => {
  const id = c.req.param('id');
  const user = await c.env.DB
    .prepare('SELECT * FROM users WHERE id = ?')
    .bind(id)
    .first();
  if (!user) return c.json({ error: 'Not found' }, 404);
  return c.json(user);
});

app.post('/api/users', async (c) => {
  const body = await c.req.json();
  const { meta } = await c.env.DB
    .prepare('INSERT INTO users (name, email) VALUES (?, ?)')
    .bind(body.name, body.email)
    .run();
  return c.json({ id: meta.last_row_id }, 201);
});

export default app;
```
Workers Platform Ecosystem
```ts
// env.d.ts — Cloudflare platform bindings
interface Env {
  // D1: SQLite database at the edge
  DB: D1Database;
  // KV: key-value store at the edge
  CACHE: KVNamespace;
  // R2: S3-compatible object storage
  BUCKET: R2Bucket;
  // Durable Objects: stateful at the edge
  COUNTER: DurableObjectNamespace;
  // AI: run inference at the edge
  AI: Ai;
}
```
Workers isn't just a runtime — it's a platform:
- D1: SQLite database running at the edge (global replication)
- KV: Global key-value storage with millisecond reads
- R2: S3-compatible blob storage with no egress fees
- Durable Objects: Stateful coordination at the edge
- AI: Run models (LLaMA, Whisper, SDXL) at the edge without external API calls
- Queues: Message queuing between Workers
- Cron Triggers: Scheduled Workers execution
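To make the KV bullet concrete, here's a sketch of a read-through cache in the shape Workers KV exposes (`get`/`put` with an `expirationTtl` option). The `KVLike` interface and function name are our own illustrations; in a real Worker you would pass the `CACHE` binding directly:

```typescript
// Minimal interface matching the subset of Workers KV used here.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

// Read-through cache: check KV first, fall back to the loader, cache the result.
async function cachedFetch(
  kv: KVLike,
  key: string,
  loader: () => Promise<string>,
  ttlSeconds = 60,
): Promise<string> {
  const hit = await kv.get(key);
  if (hit !== null) return hit;
  const value = await loader();
  await kv.put(key, value, { expirationTtl: ttlSeconds });
  return value;
}
```

Because KV reads are eventually consistent, this pattern suits data that tolerates brief staleness (config, rendered fragments), not transactional state.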
Wrangler: Local Development
```sh
# Wrangler accurately emulates the entire platform locally
npm install -D wrangler
npx wrangler dev      # Local server with D1, KV, R2 emulation

# Deploy to production
npx wrangler deploy

# Tail real-time logs
npx wrangler tail
```
Workers Limits
| Limit | Free | Paid |
|---|---|---|
| CPU time per request | 10ms | 50ms (default), configurable up to 30s |
| Memory | 128 MB | 128 MB |
| Request size | 100 MB | 500 MB |
| Requests per day | 100K | Unlimited ($0.50/M) |
| Script size | 1 MB | 10 MB |
Workers Limitations
- Strict CPU time limits (10ms free, 50ms paid default); this counts compute only, so I/O waits don't count against it
- No persistent local filesystem
- No native Node.js APIs (must use Web Standards equivalents)
- Workers can't run long computational tasks
Lambda@Edge
Runtime: Node.js 20, Python 3.12 (AWS Lambda)
Locations: ~20 CloudFront edge locations
Pricing: $0.60/million requests + $0.00005001/GB-second
Architecture
Lambda@Edge attaches Lambda functions to CloudFront distributions at four event types:
```js
// Viewer Request: runs on every request, before the cache check
// Origin Request: runs on cache miss, before hitting the origin
// Origin Response: runs after the origin responds, before caching
// Viewer Response: runs on every response to the viewer
exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  const headers = request.headers;

  // Reject requests without an Authorization header
  if (!headers.authorization) {
    return {
      status: '403',
      statusDescription: 'Forbidden',
      body: 'Forbidden',
    };
  }

  return request; // Pass through to CloudFront
};
```
What Lambda@Edge Is Actually Good For
```js
// Rewriting URLs for A/B testing (viewer request trigger)
exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  if (Math.random() < 0.5) {
    request.uri = '/variant-b' + request.uri;
  }
  return request;
};
```

```js
// Adding security headers (origin response trigger)
exports.handler = async (event) => {
  const response = event.Records[0].cf.response;
  response.headers['strict-transport-security'] = [{
    key: 'Strict-Transport-Security',
    value: 'max-age=31536000; includeSubDomains; preload',
  }];
  response.headers['x-content-type-options'] = [{
    key: 'X-Content-Type-Options',
    value: 'nosniff',
  }];
  return response;
};
```
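Note the header shape in these handlers: CloudFront events represent headers as a map from lowercase name to an array of `{ key, value }` objects, not as a standard `Headers` object. A small illustrative helper (the type and function names are ours, not AWS's) can bridge the two:

```typescript
// CloudFront event headers: lowercase-name -> [{ key, value }] arrays.
type CfHeaders = Record<string, { key?: string; value: string }[]>;

// Convert CloudFront-style headers into a standard Web Headers object.
function cfHeadersToWebHeaders(cf: CfHeaders): Headers {
  const headers = new Headers();
  for (const [name, entries] of Object.entries(cf)) {
    for (const entry of entries) {
      headers.append(name, entry.value);
    }
  }
  return headers;
}
```

Going the other direction (Headers back to the CloudFront shape) is needed when you modify a response, since CloudFront expects the array form on the way out.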
Lambda@Edge Limitations
- ~20 locations (not truly global edge — more like "regional edge")
- Cold starts: Container-based, 100ms-1s cold start
- Complexity: Must deploy through AWS CloudFormation or SAM
- No persistent state: Each Lambda is stateless
- Region restriction: Functions must be in us-east-1 and replicated
- Limited runtimes: only Node.js and Python (no container images or custom runtimes)
Lambda@Edge is best for CloudFront-specific manipulation — not for building full APIs.
Deno Deploy
Runtime: Deno 2 (V8 isolates)
Locations: 35+ global regions
Free tier: 100K requests/day, 50ms CPU
Paid: $20/month (Pro)
Creator: Deno Land Inc.
What Sets Deno Deploy Apart
Deno 2 brings improved npm compatibility — the major historical limitation of Deno. Most npm packages now work:
```ts
// Deno Deploy — TypeScript is native, no config
import { Hono } from "npm:hono"; // npm packages work

const app = new Hono();

app.get("/", (c) => c.text("Hello from Deno Deploy!"));

app.get("/api/users", async (c) => {
  // Fetch from an external API (Web Standards)
  const response = await fetch("https://jsonplaceholder.typicode.com/users");
  const users = await response.json();
  return c.json(users);
});

Deno.serve(app.fetch);
```
Deployment Speed
Deno Deploy's headline feature: sub-second deployments.
```sh
# Install the Deno Deploy CLI (deployctl)
deno install -gArf jsr:@deno/deployctl

# Deploy (< 1 second to propagate globally)
deployctl deploy --project=my-project main.ts
# Deployed in 0.3 seconds

# Compare to other platforms:
#   Cloudflare Workers: ~5-30s
#   Vercel Edge:        ~30-60s
#   Lambda@Edge:        ~90-300s
```
Deno 2: npm Compatibility
```ts
// Deno 2 npm compatibility
import express from "npm:express";                 // Works!
import { z } from "npm:zod";                       // Works!
import { PrismaClient } from "npm:@prisma/client"; // Works (with caveats)

// Standard library (distributed via JSR)
import { delay } from "jsr:@std/async/delay";

// Or use package.json + import map:
import { Hono } from "hono"; // Node.js-style import
```
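The bare `"hono"` import in the last line only resolves when a `deno.json` import map (or a `package.json`) maps the specifier. A hypothetical `deno.json` making that import work:

```json
{
  "imports": {
    "hono": "npm:hono@^4"
  }
}
```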
Limitations
- Smaller ecosystem than Cloudflare Workers (fewer built-in storage options)
- Deno compatibility issues with some complex npm packages
- Less community adoption and fewer tutorials than Workers
- Deno KV is the only first-party storage product; there are no equivalents to Cloudflare's D1 or R2
Platform Comparison
| Feature | Cloudflare Workers | Lambda@Edge | Deno Deploy |
|---|---|---|---|
| Global PoPs | 300+ | ~20 | 35+ |
| Cold starts | Zero (<1ms) | 100-1000ms | Near-zero |
| Free requests/day | 100K | None (Lambda free tier excludes Lambda@Edge) | 100K |
| CPU limit | 50ms paid, 10ms free | 5s viewer / 30s origin (wall-clock timeout) | 50ms |
| Built-in storage | D1, KV, R2, DO | S3, DynamoDB (manual) | Deno KV |
| Framework | Hono (standard) | Node.js frameworks | Hono, Fresh |
| TypeScript | Via wrangler/esbuild | Build step (tsc/esbuild) | Native |
| npm compat | ~95% | Full (Node.js) | ~90% (Deno 2) |
| Deployment speed | ~5-30s | ~90-300s | <1s |
Choosing Your Edge Platform
Choose Cloudflare Workers if:
- Maximum global coverage (300+ locations) matters
- You need the full platform: D1, KV, R2, Durable Objects, AI
- Zero cold starts are required
- You're building production APIs, not just CDN manipulation
Choose Lambda@Edge if:
- You're heavily invested in AWS and using CloudFront
- You need CloudFront-specific manipulation (URL rewrites, header injection)
- You want full Node.js compatibility at the CDN layer
- Your use case is cache control and CDN manipulation, not full applications
Choose Deno Deploy if:
- You're using Deno for development
- Sub-second global deployments are important (CI/CD)
- TypeScript-native development without build steps is preferred
- You want Web Standards compliance with good npm compatibility
Compare edge platform libraries on PkgPulse.
View Cloudflare Workers vs. Lambda@Edge vs. Deno Deploy on PkgPulse →