
Valkey vs KeyDB vs Dragonfly: Redis Alternatives in 2026

PkgPulse Team


TL;DR

Redis's license change from BSD to SSPL in March 2024 triggered a fork explosion. Valkey is the Linux Foundation-backed fork — the most direct Redis replacement, already in production at AWS (ElastiCache Serverless), Google Cloud Memorystore, and Akamai. KeyDB (acquired by Snap) is the multi-threaded variant — it speaks the Redis protocol but runs on all CPU cores, making it 2–5x faster than Redis for throughput-heavy workloads. Dragonfly is the full rewrite — built in C++ with a modern architecture (fibers, SIMD, shared-nothing), claiming up to 25x the throughput of Redis on the same hardware. For a drop-in Redis replacement: Valkey. For maximum throughput on existing hardware: Dragonfly. For multi-threaded Redis with multi-master replication: KeyDB.

Key Takeaways

  • Valkey is now the default on major cloud providers — AWS ElastiCache, Google Memorystore, and Akamai all switched in 2024-2025
  • Dragonfly claims 25x Redis throughput — 3.8M ops/sec vs ~150k for Redis on the same 8-core machine
  • All three are 100% Redis protocol compatible — ioredis, node-redis, and all Redis clients work without changes
  • Valkey GitHub stars: ~20k — the fastest-growing Redis fork (donated to Linux Foundation)
  • KeyDB supports FLASH storage — hot/warm/cold tiering for datasets larger than RAM
  • Dragonfly uses 80% less memory than Redis for the same dataset (compression + modern memory management)
  • Migration is a swap — change the Redis URL in your connection string; no code changes needed

The Redis License Crisis

In March 2024, Redis Ltd. changed Redis's license from BSD to Server Side Public License (SSPL) — meaning cloud providers can no longer offer Redis as a managed service without a commercial agreement. This triggered:

  1. Valkey — Linux Foundation fork, backed by AWS, Google, Oracle, Snap, and others
  2. KeyDB — Already existed as a Redis fork by Snap, gained renewed attention
  3. Dragonfly — Already existed as a Redis rewrite, accelerated adoption

The major cloud providers have now defaulted to Valkey:

  • AWS ElastiCache Serverless → Valkey (automatic migration)
  • Google Cloud Memorystore → Valkey available
  • Akamai Linode Managed Databases → Valkey

Valkey: The Official Linux Foundation Fork

Valkey is a fork of Redis 7.2.4, the last release published under the BSD license before the change. The Linux Foundation maintains it with contributions from AWS, Google, Oracle, Snap, Ericsson, and others. If you use Redis today, Valkey behaves identically.

Docker Setup

# Valkey — drop-in Redis replacement
docker run -d --name valkey \
  -p 6379:6379 \
  valkey/valkey:latest

# With persistence
docker run -d --name valkey \
  -p 6379:6379 \
  -v valkey-data:/data \
  valkey/valkey:latest \
  valkey-server --save 60 1 --loglevel warning

Node.js Client (Zero Changes from Redis)

// ioredis — works with Valkey, zero changes
import Redis from "ioredis";

const client = new Redis({
  host: "localhost",
  port: 6379,
  // That's it — same connection string as Redis
});

// All Redis commands work identically
await client.set("user:1:name", "Alice", "EX", 3600);
const name = await client.get("user:1:name");

// Hash operations
await client.hset("user:1", { email: "alice@example.com", plan: "pro" });
const user = await client.hgetall("user:1");

// Sorted sets
await client.zadd("leaderboard", 1500, "alice", 1200, "bob", 900, "charlie");
const top3 = await client.zrevrange("leaderboard", 0, 2, "WITHSCORES");

// Pub/Sub
const sub = new Redis();
sub.subscribe("notifications");
sub.on("message", (channel, message) => {
  console.log(`[${channel}] ${message}`);
});

const pub = new Redis();
pub.publish("notifications", JSON.stringify({ event: "order_created", orderId: 123 }));

Valkey-Specific: Multi-Exec Improvements

// Valkey 8.0+ added improvements to MULTI/EXEC reliability
const pipeline = client.multi();
pipeline.set("key1", "value1");
pipeline.incr("counter");
pipeline.expire("key1", 60);

const results = await pipeline.exec();
// results: [[null, 'OK'], [null, 1], [null, 1]]
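As the comment shows, ioredis returns each pipeline reply as an `[error, result]` pair rather than throwing. A small helper to unwrap that shape (the helper name is ours, not part of ioredis):

```javascript
// Unwrap ioredis pipeline.exec() replies: each entry is an
// [error, result] pair. Throw on the first error, otherwise
// return just the results.
function unwrapExec(replies) {
  return replies.map(([err, result]) => {
    if (err) throw err;
    return result;
  });
}

// Example with the shape exec() returns:
const replies = [[null, "OK"], [null, 1], [null, 1]];
console.log(unwrapExec(replies)); // [ 'OK', 1, 1 ]
```

This keeps transaction handling consistent with the rest of your async code: one try/catch around `unwrapExec(await pipeline.exec())` instead of checking every pair by hand.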

High Availability with Valkey Sentinel

# docker-compose.yml — Valkey with Sentinel HA
version: "3.8"

services:
  valkey-primary:
    image: valkey/valkey:latest
    ports:
      - "6379:6379"
    command: valkey-server --save 60 1

  valkey-replica:
    image: valkey/valkey:latest
    command: valkey-server --replicaof valkey-primary 6379

  sentinel:
    image: valkey/valkey:latest
    command: >
      valkey-sentinel /etc/valkey/sentinel.conf
    volumes:
      - ./sentinel.conf:/etc/valkey/sentinel.conf

// Connect to Sentinel cluster
const client = new Redis({
  sentinels: [
    { host: "sentinel-1", port: 26379 },
    { host: "sentinel-2", port: 26379 },
    { host: "sentinel-3", port: 26379 },
  ],
  name: "mymaster",  // Sentinel master name
});
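During a Sentinel failover there is a window where connections drop and ioredis reconnects using its `retryStrategy` option. A minimal exponential backoff sketch — the 50ms base, 2s cap, and 20-attempt limit are assumptions to tune for your failover window:

```javascript
// ioredis calls retryStrategy with the attempt number and expects
// a delay in milliseconds back, or null to stop reconnecting.
function retryStrategy(times) {
  if (times > 20) return null;            // give up after 20 attempts
  return Math.min(50 * 2 ** times, 2000); // exponential backoff, capped at 2s
}

// Pass it alongside the sentinels list:
// const client = new Redis({ sentinels, name: "mymaster", retryStrategy });

console.log(retryStrategy(1));  // 100
console.log(retryStrategy(10)); // 2000
```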

KeyDB: Multi-Threaded Redis

KeyDB runs Redis protocol but uses a multi-threaded event loop instead of Redis's single-threaded design. On 8 cores, KeyDB processes requests on all cores simultaneously — 2-5x throughput improvement for CPU-bound workloads.
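Because KeyDB distributes work across its threads per connection, a single client connection can become the bottleneck. One way to let all server threads do work is to spread commands over a few connections; a minimal round-robin sketch (only the pure rotation logic is shown, the pool size is an assumption):

```javascript
// Rotate through a fixed set of connections so load is spread
// across KeyDB's server threads.
class RoundRobinPool {
  constructor(connections) {
    this.connections = connections;
    this.index = 0;
  }
  next() {
    const conn = this.connections[this.index];
    this.index = (this.index + 1) % this.connections.length;
    return conn;
  }
}

// Usage with ioredis (commented out — needs a running KeyDB):
// const pool = new RoundRobinPool([new Redis(), new Redis(), new Redis()]);
// await pool.next().get("some-key");

const pool = new RoundRobinPool(["a", "b", "c"]);
console.log(pool.next(), pool.next(), pool.next(), pool.next()); // a b c a
```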

Docker Setup

docker run -d --name keydb \
  -p 6379:6379 \
  eqalpha/keydb:latest \
  keydb-server --server-threads 4  # Use 4 threads

# Check thread usage
docker exec -it keydb keydb-cli INFO server | grep threads

Multi-Threading Configuration

# keydb.conf — key performance settings
server-threads 4        # Number of threads (match CPU cores)
server-thread-affinity yes  # Pin threads to CPU cores

# FLASH tiering (unique to KeyDB)
storage-provider flash /path/to/flash
db-s3-object mybucket   # S3-backed cold storage

# Active replication (unique to KeyDB)
active-replica yes      # Active-active multi-master
replica-read-only no    # Replicas can accept writes

Active Replication (Multi-Master)

// KeyDB's unique active replication — write to any node
const primary = new Redis({ host: "keydb-1", port: 6379 });
const replica = new Redis({ host: "keydb-2", port: 6379 });

// Both accept writes and sync bidirectionally
await primary.set("key", "from-primary");
await replica.set("key2", "from-replica");

// Both nodes have both keys after sync
const val1 = await replica.get("key");    // "from-primary"
const val2 = await primary.get("key2");  // "from-replica"
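The two-node example above assumes each instance is started as an active replica of the other. A docker-compose sketch of that pairing, using KeyDB's `--active-replica` and `--replicaof` flags (service names and the host port mapping are assumptions):

```yaml
# Two KeyDB nodes replicating to each other — both accept writes.
version: "3.8"

services:
  keydb-1:
    image: eqalpha/keydb:latest
    command: keydb-server --active-replica yes --replicaof keydb-2 6379
    ports:
      - "6379:6379"

  keydb-2:
    image: eqalpha/keydb:latest
    command: keydb-server --active-replica yes --replicaof keydb-1 6379
    ports:
      - "6380:6379"
```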

FLASH Tiering for Large Datasets

// KeyDB FLASH — hot data in RAM, cold data on NVMe/SSD
// No code changes needed — transparent to client

// Configure in keydb.conf:
// storage-provider flash /mnt/nvme/keydb
// db-s3-object s3-bucket-name  (optional cold tier)

// Data is automatically tiered based on access patterns
// Hot keys → RAM
// Warm keys → NVMe FLASH
// Cold keys → S3 (optional)

// From Node.js perspective, it's identical to Redis
await client.set("hot-data", "frequently accessed");   // Stays in RAM
await client.set("cold-data", "rarely accessed data"); // May move to FLASH

Dragonfly: The Full Rewrite

Dragonfly is a ground-up rewrite of Redis in C++ using modern concurrency primitives (fibers/coroutines) and a shared-nothing architecture. Each CPU core has its own memory shard, eliminating lock contention. The result is claimed 25x better throughput and 80% less memory than Redis.
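The core idea of shared-nothing is that a key deterministically hashes to one shard, and only that shard's thread ever touches it, so no locks are needed. An illustrative sketch — this is a simple 31-multiplier string hash, NOT Dragonfly's actual hash function:

```javascript
// Map a key to one of numShards single-owner shards.
function shardFor(key, numShards) {
  let hash = 0;
  for (const ch of key) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // 31-multiplier hash, kept in uint32 range
  }
  return hash % numShards;
}

// The same key always lands on the same shard:
console.log(shardFor("user:1:name", 8) === shardFor("user:1:name", 8)); // true
```

Multi-key commands that span shards (MSET, transactions, Lua) are the hard part of this design; Dragonfly coordinates them with its internal transaction framework rather than a global lock.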

Docker Setup

docker run -d --name dragonfly \
  -p 6379:6379 \
  -v dragonfly-data:/data \
  docker.dragonflydb.io/dragonflydb/dragonfly:latest

# With memory limits and threads
docker run -d --name dragonfly \
  -p 6379:6379 \
  --ulimit memlock=-1 \
  docker.dragonflydb.io/dragonflydb/dragonfly:latest \
  --maxmemory 4gb \
  --threads 8

Node.js Client — Identical API

// Dragonfly is 100% Redis-compatible — same client code
import Redis from "ioredis";

const client = new Redis({
  host: "localhost",
  port: 6379,
});

// All Redis data structures work
await client.set("session:abc", JSON.stringify({ userId: 123 }), "EX", 1800);

// Sorted sets — one of Dragonfly's fastest operations
await client.zadd("scores", 9500, "alice", 8200, "bob");
const scores = await client.zrange("scores", 0, -1, "WITHSCORES");

// Streams (Dragonfly implements Redis Streams)
await client.xadd("events", "*", "type", "click", "page", "/home");
const events = await client.xrange("events", "-", "+", "COUNT", 10);

Dragonfly-Specific: Lua Scripts Performance

// Dragonfly executes Lua scripts across shards efficiently
// Useful for atomic multi-key operations

const luaScript = `
  local current = redis.call('GET', KEYS[1])
  if current == false then
    redis.call('SET', KEYS[1], ARGV[1])
    return ARGV[1]
  end
  return current
`;

// defineCommand with evalsha pattern
client.defineCommand("getOrSet", {
  numberOfKeys: 1,
  lua: luaScript,
});

// @ts-ignore — custom command
const value = await client.getOrSet("mykey", "default-value");

Performance Benchmarks

Benchmark: 100% GET operations, 8-byte key, 64-byte value
Hardware: 8-core AWS c6g.2xlarge, 16 GB RAM

Operations per second (ops/sec):
Redis 7.2:           ~150,000 ops/sec   (single-threaded)
Valkey 8.0:          ~160,000 ops/sec   (+7% over Redis)
KeyDB 6.3:           ~450,000 ops/sec   (4 threads, 3x Redis)
Dragonfly 1.x:       ~3,800,000 ops/sec (25x Redis claim)

Memory for 10M small keys (5 byte value):
Redis:               ~700 MB
Valkey:              ~700 MB  (same codebase)
KeyDB:               ~700 MB  (same codebase)
Dragonfly:           ~140 MB  (80% reduction)

P99 latency at 100k req/sec:
Redis:               ~0.5ms
Valkey:              ~0.5ms
KeyDB:               ~0.3ms
Dragonfly:           ~0.1ms

Note: Benchmarks vary significantly by workload type.
Dragonfly's advantage is most pronounced at high concurrency.
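When you re-run these benchmarks against your own workload, report tail latency from raw samples rather than averages, since a mean hides exactly the outliers that p99 captures. A nearest-rank percentile sketch (the toy sample values are made up):

```javascript
// Nearest-rank percentile: sort the samples and take the value at
// the ceil(p% * n)-th position.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const latenciesMs = [0.2, 0.3, 0.3, 0.4, 5.1]; // toy samples
console.log(percentile(latenciesMs, 99)); // 5.1 (the tail outlier)
console.log(percentile(latenciesMs, 50)); // 0.3
```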

Feature Comparison

| Feature                | Valkey           | KeyDB          | Dragonfly          |
| ---------------------- | ---------------- | -------------- | ------------------ |
| Redis protocol         | ✅ 100%          | ✅ 100%        | ✅ 100%            |
| Redis client compat    | ✅               | ✅             | ✅                 |
| Multi-threading        | ✅ (v8.0+)       | ✅ Native      | ✅ Sharded         |
| Active replication     | ❌               | ✅ Multi-master | ❌                 |
| FLASH tiering          | ❌               | ✅             | ❌                 |
| Memory efficiency      | Baseline         | Baseline       | ✅ ~80% less       |
| Redis Modules API      | ✅               | Partial        | ❌                 |
| Cluster support        | ✅               | ✅             | ✅ (emulated)      |
| Sentinel support       | ✅               | ✅             | ❌                 |
| Persistence (RDB/AOF)  | ✅               | ✅             | Snapshots (RDB-compatible) |
| License                | BSD-3            | BSD-3          | BSL 1.1            |
| Backed by              | Linux Foundation | Snap           | Dragonfly DB Inc.  |
| Cloud managed          | AWS, GCP, Akamai | Self-hosted    | Dragonfly Cloud    |
| GitHub stars           | ~20k             | ~25k           | ~29k               |

When to Use Each

Choose Valkey if:

  • You're running on AWS ElastiCache, Google Cloud Memorystore, or Akamai — it's already the default
  • You want the most conservative Redis replacement with no surprises
  • Redis Modules (RedisSearch, RedisJSON, RedisGraph) compatibility matters
  • Your primary goal is license compliance, not performance

Choose KeyDB if:

  • You need active-active multi-master replication (unique feature)
  • Your dataset exceeds RAM and NVMe tiering via FLASH is attractive
  • CPU is your bottleneck on a multi-core machine and multi-threading helps
  • You're on a single server and want maximum throughput without cluster complexity

Choose Dragonfly if:

  • Memory cost is the bottleneck (80% reduction can save significant cloud spend)
  • You need extreme throughput at low latency on a single node
  • You're doing a greenfield deployment and want modern architecture
  • Redis Modules are not required (Dragonfly doesn't support the Modules API)

Migration Path

// All three are drop-in replacements — change the connection URL

// Before (Redis)
const redis = new Redis({ host: "redis.internal", port: 6379 });

// After (Valkey/KeyDB/Dragonfly — same config)
const redis = new Redis({ host: "valkey.internal", port: 6379 });

// Environment variable approach (recommended)
const redis = new Redis({ host: process.env.REDIS_HOST, port: 6379 });
// → Change REDIS_HOST from "redis.internal" to "valkey.internal"
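Many deployments expose a single `REDIS_URL` instead of separate host/port variables; Node's built-in `URL` class parses the `redis://` scheme directly. A small sketch (the 6379 fallback and helper name are assumptions of this example):

```javascript
// Parse a redis:// connection URL into ioredis options.
function parseRedisUrl(url) {
  const u = new URL(url);
  return {
    host: u.hostname,
    port: u.port ? Number(u.port) : 6379, // default port if omitted
    password: u.password || undefined,
  };
}

console.log(parseRedisUrl("redis://valkey.internal:6379"));
// { host: 'valkey.internal', port: 6379, password: undefined }
```

Then `new Redis(parseRedisUrl(process.env.REDIS_URL))` works unchanged whether the URL points at Redis, Valkey, KeyDB, or Dragonfly.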

Methodology

Data sourced from GitHub repositories (star counts as of February 2026), official benchmarks (Dragonfly DB benchmark suite, KeyDB benchmarks), cloud provider announcements (AWS ElastiCache, Google Cloud Memorystore), and community performance reports on Hacker News and r/redis. Performance numbers are from official vendor benchmarks and should be verified for your specific workload.


Related: ioredis vs node-redis vs Upstash for Redis client comparison, or BullMQ vs Bee-Queue vs pg-boss for job queue implementations on top of Redis.
