Cloudflare Durable Objects vs Upstash vs Turso: Edge Databases 2026
TL;DR
Edge databases run close to users, eliminating the round-trip penalty of calling a centralized database from serverless and edge function architectures. Cloudflare Durable Objects are stateful objects that run inside Cloudflare's network: each object is a single-instance, strongly consistent actor with attached SQLite storage, ideal for coordination and real-time state. Upstash provides serverless Redis and Kafka over HTTP, works in Cloudflare Workers and Vercel Edge Functions, and fits caching, rate limiting, and pub/sub. Turso brings libSQL (a fork of SQLite) to the edge with per-database branching, embedded replicas, and multi-region sync. For globally consistent stateful coordination, pick Durable Objects; for edge caching and key-value, Upstash; for relational SQLite at the edge with replicas, Turso.
Key Takeaways
- Durable Objects guarantee single-writer consistency — no conflicts, no distributed locks needed
- Upstash Redis delivers <1ms P99 latency from Cloudflare Workers via REST API (no TCP sockets)
- Turso supports 10,000+ databases per account — database-per-tenant architecture at low cost
- Durable Objects have zero cold starts — always warm within the Cloudflare PoP
- Upstash Kafka enables durable pub/sub from edge environments where persistent connections aren't allowed
- Turso embedded replicas let you ship SQLite data inside your app process — zero network latency reads
- All three work in Cloudflare Workers — the comparison is about consistency model and data shape
The Edge Database Problem
Traditional databases assume your compute is nearby:
Traditional: Browser → CDN Edge → Origin Server → Database (centralized)
↑ round-trip adds 50-200ms
Edge compute: Browser → Edge Function (at CDN PoP near user)
↓ database must also be nearby
or you pay a latency penalty calling a central DB from the edge
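To make the penalty concrete, here is a back-of-the-envelope model (illustrative numbers, not a benchmark): a request that runs several sequential queries pays the database round trip once per query, so the distance to the database dominates.

```typescript
// Rough model of request latency: one hop to the edge function, then one
// database round trip per sequential query. Numbers are illustrative.
function totalLatencyMs(
  edgeHopMs: number,     // browser -> edge function
  dbRoundTripMs: number, // edge function -> database and back
  queries: number        // sequential queries in the request
): number {
  return edgeHopMs + dbRoundTripMs * queries;
}

// 3 sequential queries against a central DB 80ms away:
console.log(totalLatencyMs(10, 80, 3)); // 250
// The same request against a database ~1ms from the edge function:
console.log(totalLatencyMs(10, 1, 3)); // 13
```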
Three different solutions to this problem:
- Durable Objects: Colocate stateful compute with data on Cloudflare's network
- Upstash: Low-latency Redis/Kafka accessible via HTTP from any edge runtime
- Turso: Replicated SQLite database distributed close to edge compute
Cloudflare Durable Objects: Consistent Stateful Edge
Durable Objects are single-instance JavaScript/TypeScript classes running on Cloudflare Workers. Each Object has a unique ID, its own SQLite storage, and guaranteed single-threaded execution — making distributed coordination trivially correct.
When Durable Objects Make Sense
Problem: WebSocket chat room — many users connected to different edge PoPs
Without Durable Objects: Need Redis pub/sub + coordination layer
With Durable Objects: One Durable Object per room = single source of truth
Installation & Setup
# wrangler.toml
name = "my-app"
main = "src/index.ts"
compatibility_date = "2024-09-23"
[[durable_objects.bindings]]
name = "CHAT_ROOM"
class_name = "ChatRoom"
[[migrations]]
tag = "v1"
new_classes = ["ChatRoom"]
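The examples below reference an `Env` type for the Worker's bindings. With `@cloudflare/workers-types` installed, a matching declaration might look like this (the binding names must match wrangler.toml; `RATE_LIMITER` is for the rate limiter example later in this section):

```typescript
// Bindings declared in wrangler.toml surface on the Env object at runtime.
// DurableObjectNamespace is provided by @cloudflare/workers-types.
interface Env {
  CHAT_ROOM: DurableObjectNamespace;
  RATE_LIMITER: DurableObjectNamespace;
}
```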
Durable Object with SQLite (Hibernatable WebSocket)
// src/chat-room.ts
import { DurableObject } from "cloudflare:workers";
export class ChatRoom extends DurableObject {
private sessions: Map<WebSocket, { userId: string }> = new Map();
constructor(ctx: DurableObjectState, env: Env) {
super(ctx, env);
// SQLite storage — persists across object hibernation
this.ctx.storage.sql.exec(`
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL,
content TEXT NOT NULL,
created_at INTEGER DEFAULT (unixepoch())
);
CREATE INDEX IF NOT EXISTS idx_created ON messages(created_at);
`);
}
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url);
if (url.pathname === "/ws") {
// Upgrade to WebSocket
const upgradeHeader = request.headers.get("Upgrade");
if (upgradeHeader !== "websocket") {
return new Response("Expected WebSocket", { status: 426 });
}
const { 0: client, 1: server } = new WebSocketPair();
const userId = url.searchParams.get("userId") ?? "anonymous";
this.ctx.acceptWebSocket(server);
// Persist userId on the socket so it survives hibernation
server.serializeAttachment({ userId });
this.sessions.set(server, { userId });
// Send recent message history
const history = this.ctx.storage.sql
.exec("SELECT * FROM messages ORDER BY created_at DESC LIMIT 50")
.toArray()
.reverse();
server.send(JSON.stringify({ type: "history", messages: history }));
return new Response(null, { status: 101, webSocket: client });
}
if (url.pathname === "/messages" && request.method === "GET") {
const messages = this.ctx.storage.sql
.exec("SELECT * FROM messages ORDER BY created_at DESC LIMIT 100")
.toArray();
return Response.json(messages);
}
return new Response("Not found", { status: 404 });
}
webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
// After hibernation this.sessions is empty, so fall back to the attachment
const attached = ws.deserializeAttachment() as { userId?: string } | null;
const userId = this.sessions.get(ws)?.userId ?? attached?.userId ?? "unknown";
const data = JSON.parse(message as string);
// Write to SQLite (always consistent — single-threaded DO)
this.ctx.storage.sql.exec(
"INSERT INTO messages (user_id, content) VALUES (?, ?)",
userId,
data.content
);
// Broadcast to all connected clients in this room
const broadcast = JSON.stringify({
type: "message",
userId,
content: data.content,
timestamp: Date.now(),
});
// Use the runtime's socket list, which also includes sockets restored
// after hibernation (this.sessions only tracks pre-hibernation ones)
for (const client of this.ctx.getWebSockets()) {
if (client.readyState === WebSocket.OPEN) {
client.send(broadcast);
}
}
}
webSocketClose(ws: WebSocket) {
this.sessions.delete(ws);
}
}
Worker That Routes to Durable Objects
// src/index.ts
export { ChatRoom } from "./chat-room";
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
// Route /room/:id to a Durable Object instance
const match = url.pathname.match(/^\/room\/([^/]+)/);
if (match) {
const roomId = match[1];
// Get or create the Durable Object for this room ID
const id = env.CHAT_ROOM.idFromName(roomId);
const stub = env.CHAT_ROOM.get(id);
return stub.fetch(request);
}
return new Response("Not found", { status: 404 });
},
};
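The routing regex is easy to get subtly wrong; factored into a small helper (illustrative, not part of the Workers API) it can be exercised in isolation:

```typescript
// Extract the room ID from paths like /room/general or /room/general/ws.
// Returns null for anything that doesn't match.
function matchRoomId(pathname: string): string | null {
  const match = pathname.match(/^\/room\/([^/]+)/);
  return match ? match[1] : null;
}

console.log(matchRoomId("/room/general"));    // "general"
console.log(matchRoomId("/room/general/ws")); // "general"
console.log(matchRoomId("/other"));           // null
```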
Durable Object Rate Limiter
// A common pattern: per-user rate limiting with Durable Objects
import { DurableObject } from "cloudflare:workers";
export class RateLimiter extends DurableObject {
async checkLimit(requests: number, windowMs: number): Promise<boolean> {
// Ensure the tracking table exists (cheap no-op after the first call)
this.ctx.storage.sql.exec(
"CREATE TABLE IF NOT EXISTS requests (timestamp INTEGER NOT NULL)"
);
const now = Date.now();
const windowStart = now - windowMs;
// SQLite for per-object rate tracking
this.ctx.storage.sql.exec(
"DELETE FROM requests WHERE timestamp < ?",
windowStart
);
const count = this.ctx.storage.sql
.exec("SELECT COUNT(*) as count FROM requests")
.one().count as number;
if (count >= requests) {
return false; // Rate limited
}
this.ctx.storage.sql.exec(
"INSERT INTO requests (timestamp) VALUES (?)",
now
);
return true;
}
}
// In your Worker:
async function isRateLimited(userId: string, env: Env): Promise<boolean> {
const id = env.RATE_LIMITER.idFromName(userId);
const limiter = env.RATE_LIMITER.get(id);
const allowed = await limiter.checkLimit(100, 60_000); // 100 req/min
return !allowed;
}
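The SQL above implements a sliding window over stored timestamps. The same decision logic, written as a pure function for illustration (the Durable Object version persists the list in SQLite rather than in memory):

```typescript
// Sliding-window check: drop timestamps outside the window, then admit the
// request only if fewer than `limit` remain. Mirrors the SQL in checkLimit().
function slidingWindowAllow(
  timestamps: number[],
  now: number,
  limit: number,
  windowMs: number
): { allowed: boolean; timestamps: number[] } {
  const kept = timestamps.filter((t) => t >= now - windowMs);
  if (kept.length >= limit) {
    return { allowed: false, timestamps: kept };
  }
  return { allowed: true, timestamps: [...kept, now] };
}

// Limit of 3 per 1000ms: the 4th request inside the window is rejected
let state: number[] = [];
for (const t of [0, 100, 200, 300]) {
  const r = slidingWindowAllow(state, t, 3, 1000);
  state = r.timestamps;
  console.log(t, r.allowed); // 0 true, 100 true, 200 true, 300 false
}
```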
Upstash: Serverless Redis and Kafka at the Edge
Upstash provides Redis and Kafka via HTTP REST APIs — designed from the ground up for serverless and edge environments where persistent TCP connections don't work.
Installation
npm install @upstash/redis
npm install @upstash/kafka # If using Kafka
npm install @upstash/ratelimit # Rate limiting helper
Redis in Cloudflare Workers
import { Redis } from "@upstash/redis/cloudflare";
export default {
async fetch(request: Request, env: Env): Promise<Response> {
// Uses env variables for URL + token (set in wrangler.toml)
const redis = Redis.fromEnv(env);
const url = new URL(request.url);
const key = url.searchParams.get("key");
if (!key) return new Response("Missing key", { status: 400 });
// Standard Redis commands via HTTP (no TCP socket needed)
const value = await redis.get(key);
return Response.json({ key, value });
},
};
# wrangler.toml
[vars]
UPSTASH_REDIS_REST_URL = "https://your-redis.upstash.io"
# In real projects keep the token out of [vars]:
#   wrangler secret put UPSTASH_REDIS_REST_TOKEN
UPSTASH_REDIS_REST_TOKEN = "your-token"
Caching Pattern
import { Redis } from "@upstash/redis/cloudflare";
type Env = {
UPSTASH_REDIS_REST_URL: string;
UPSTASH_REDIS_REST_TOKEN: string;
};
async function getCachedData<T>(
redis: Redis,
key: string,
fetcher: () => Promise<T>,
ttlSeconds = 300
): Promise<T> {
// Check cache
const cached = await redis.get<T>(key);
if (cached !== null) return cached;
// Cache miss — fetch and store
const data = await fetcher();
await redis.setex(key, ttlSeconds, data);
return data;
}
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const redis = Redis.fromEnv(env);
const url = new URL(request.url);
const productId = url.searchParams.get("id");
if (!productId) return new Response("Missing id", { status: 400 });
const product = await getCachedData(
redis,
`product:${productId}`,
() => fetchProductFromDB(productId), // fetchProductFromDB: your origin lookup
600 // 10 min TTL
);
return Response.json(product);
},
};
Rate Limiting with @upstash/ratelimit
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis/cloudflare";
type Env = {
UPSTASH_REDIS_REST_URL: string;
UPSTASH_REDIS_REST_TOKEN: string;
};
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const redis = Redis.fromEnv(env);
// Sliding window rate limit: 10 requests per 10 seconds per IP
const ratelimit = new Ratelimit({
redis,
limiter: Ratelimit.slidingWindow(10, "10 s"),
analytics: true, // Track usage in Upstash console
});
const ip = request.headers.get("CF-Connecting-IP") ?? "unknown";
const { success, limit, remaining, reset } = await ratelimit.limit(ip);
if (!success) {
return new Response("Too Many Requests", {
status: 429,
headers: {
"X-RateLimit-Limit": String(limit),
"X-RateLimit-Remaining": String(remaining),
"X-RateLimit-Reset": String(reset),
},
});
}
// Continue to application logic
return new Response("OK");
},
};
Upstash Kafka for Edge Pub/Sub
import { Kafka } from "@upstash/kafka";
const kafka = new Kafka({
url: process.env.UPSTASH_KAFKA_REST_URL!,
username: process.env.UPSTASH_KAFKA_REST_USERNAME!,
password: process.env.UPSTASH_KAFKA_REST_PASSWORD!,
});
// Produce — works in Vercel Edge Functions, Cloudflare Workers
const producer = kafka.producer();
await producer.produce("events", { userId, action: "purchase", amount: 99 });
// Consume — long-polling via HTTP
const consumer = kafka.consumer();
const messages = await consumer.consume({
consumerGroupId: "analytics-consumer",
instanceId: "instance-1",
topics: ["events"],
autoOffsetReset: "latest",
});
for (const message of messages) {
console.log(message.value); // Parsed JSON
}
Turso: Replicated SQLite at the Edge
Turso is a database service built on libSQL (a fork of SQLite) with global replication, per-database branching, and embedded replicas. It integrates with Drizzle ORM and other tooling that speaks the SQLite dialect.
Installation
npm install @libsql/client
# Or with Drizzle
npm install drizzle-orm @libsql/client
Basic Setup
import { createClient } from "@libsql/client";
const client = createClient({
url: process.env.TURSO_DATABASE_URL!,
authToken: process.env.TURSO_AUTH_TOKEN!,
});
// Execute SQL
const result = await client.execute(
"SELECT * FROM users WHERE email = ?",
["user@example.com"]
);
// Batch transactions
await client.batch([
{ sql: "INSERT INTO users (name, email) VALUES (?, ?)", args: ["Alice", "alice@example.com"] },
{ sql: "INSERT INTO profiles (user_id, bio) VALUES (?, ?)", args: [1, "Hello!"] },
], "write");
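HTTP calls from edge functions occasionally hit transient network errors, so a small retry wrapper around any async operation is a common companion (an illustrative helper, not part of @libsql/client; use it only around idempotent calls such as reads):

```typescript
// Retry an async operation with exponential backoff: 50ms, 100ms, 200ms, ...
async function withRetry<T>(
  op: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 50
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      // Back off before the next attempt
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

Usage might look like `await withRetry(() => client.execute({ sql: "SELECT 1" }))`.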
Turso with Drizzle ORM
import { drizzle } from "drizzle-orm/libsql";
import { createClient } from "@libsql/client";
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";
import { eq } from "drizzle-orm";
// Schema
export const users = sqliteTable("users", {
id: integer("id").primaryKey({ autoIncrement: true }),
name: text("name").notNull(),
email: text("email").notNull().unique(),
createdAt: integer("created_at", { mode: "timestamp" })
.$defaultFn(() => new Date()),
});
export const posts = sqliteTable("posts", {
id: integer("id").primaryKey({ autoIncrement: true }),
title: text("title").notNull(),
content: text("content").notNull(),
authorId: integer("author_id").references(() => users.id),
publishedAt: integer("published_at", { mode: "timestamp" }),
});
// Database setup
const tursoClient = createClient({
url: process.env.TURSO_DATABASE_URL!,
authToken: process.env.TURSO_AUTH_TOKEN!,
});
const db = drizzle(tursoClient, { schema: { users, posts } });
// Queries
const userPosts = await db
.select({
title: posts.title,
authorName: users.name,
})
.from(posts)
.leftJoin(users, eq(posts.authorId, users.id))
.where(eq(posts.authorId, 1));
Embedded Replicas (Zero-Latency Reads)
import { createClient } from "@libsql/client";
// Embedded replica: sync from Turso primary, read from local SQLite file
const client = createClient({
url: "file:./local.db", // Local SQLite file
syncUrl: process.env.TURSO_DATABASE_URL!,
authToken: process.env.TURSO_AUTH_TOKEN!,
syncInterval: 60, // Sync every 60 seconds
});
// Initial sync
await client.sync();
// Reads hit local SQLite — zero network latency
const users = await client.execute("SELECT * FROM users LIMIT 10");
// Writes go to the Turso primary and are pulled back on the next sync
await client.execute({ sql: "INSERT INTO users (name) VALUES (?)", args: ["Alice"] });
Database-Per-Tenant Pattern
import { createClient } from "@libsql/client";
// Turso supports 10,000+ databases per account — one per customer
async function getTenantDB(tenantId: string) {
// Each tenant has their own isolated database
// (hostname simplified here; real Turso URLs include your org slug)
return createClient({
url: `libsql://${tenantId}.turso.io`,
authToken: await getTenantToken(tenantId), // getTenantToken: your own token lookup
});
}
// Turso CLI: create databases programmatically
// turso db create tenant-acme --group production
// turso db create tenant-bigco --group production
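Because the tenant ID is interpolated into a hostname, validate it before building the URL; a strict slug check (illustrative helper) keeps a malicious ID from redirecting the connection:

```typescript
// Allow only DNS-safe slugs: lowercase alphanumerics and hyphens,
// no leading/trailing hyphen, at most 32 characters.
function isValidTenantSlug(id: string): boolean {
  return /^[a-z0-9](?:[a-z0-9-]{0,30}[a-z0-9])?$/.test(id);
}

function tenantDbUrl(tenantId: string): string {
  if (!isValidTenantSlug(tenantId)) {
    throw new Error(`invalid tenant id: ${tenantId}`);
  }
  return `libsql://${tenantId}.turso.io`;
}
```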
Feature Comparison
| Feature | Durable Objects | Upstash Redis | Turso |
|---|---|---|---|
| Data model | SQLite + KV | Redis data structures | SQLite (relational) |
| Consistency | ✅ Strong (single-writer) | Eventual (multi-region) | ✅ Strong (primary) |
| Latency | <1ms (Cloudflare PoP) | <1ms (HTTP REST) | 1-10ms (nearest replica) |
| Persistent connections | WebSocket native | ❌ HTTP only | ✅ |
| SQL support | ✅ SQLite | ❌ | ✅ Full SQLite |
| Cloudflare Workers | ✅ Native | ✅ | ✅ |
| Vercel Edge | ❌ | ✅ | ✅ |
| Multi-region reads | Single PoP per object | ✅ Global | ✅ Replica groups |
| Pub/Sub | WebSocket broadcast | ✅ | ❌ |
| Rate limiting | ✅ Via DO pattern | ✅ @upstash/ratelimit | ❌ |
| Branching | ❌ | ❌ | ✅ Per-database |
| Free tier | Included in Workers | 10,000 req/day | 500 databases |
| Pricing model | Per request + storage | Per request | Per row read/write |
When to Use Each
Choose Durable Objects if:
- You're building on Cloudflare Workers and need stateful coordination (chat rooms, game state, collaborative editing)
- Strong consistency is required without distributed locking complexity
- Real-time WebSocket applications where all connections to a room must share state
- Rate limiting or per-user state with guaranteed consistency
Choose Upstash if:
- You need Redis semantics (lists, sorted sets, pub/sub, streams) at the edge
- You're on Vercel Edge, Cloudflare Workers, or any HTTP-capable edge runtime
- Caching, session storage, or rate limiting without managing Redis infrastructure
- You need Kafka for durable event streaming from edge functions
Choose Turso if:
- You need a full SQLite relational database accessible from edge functions
- Multi-tenant SaaS architecture — one database per customer with branching for preview environments
- You want embedded replicas for zero-latency reads in long-running processes
- You're already using Drizzle + SQLite and want to scale beyond a single file
Methodology
Data sourced from official Cloudflare Workers documentation (Durable Objects), Upstash blog and pricing pages, Turso documentation and benchmark results (as of February 2026), and community reports from the Cloudflare Discord and Hacker News. Pricing verified against each provider's pricing page. Latency figures from official benchmark reports.
Related: pgvector vs Qdrant vs Weaviate for AI-focused database choices, or SST v3 vs Serverless Framework vs AWS CDK for serverless infrastructure tooling.