lru-cache vs node-cache vs quick-lru: In-Memory Caching in Node.js (2026)
TL;DR
lru-cache is the standard choice — battle-tested, TypeScript-native, excellent size/TTL controls, and used by npm and thousands of packages as a dependency. node-cache is straightforward if you need TTL-first semantics without LRU eviction complexity. quick-lru is the minimalist option — zero dependencies, tiny, perfect for embedding in libraries. For most Node.js applications, lru-cache v10+ is the default recommendation.
Key Takeaways
- lru-cache: ~200M weekly downloads (transitive) — npm's own cache, enterprise-grade, TypeScript-native
- node-cache: ~1.5M weekly downloads — TTL-first design, simple get/set/delete API, events
- quick-lru: ~100M weekly downloads (transitive) — zero deps, ESM-native, great for library authors
- In-process cache is best for: computed values, parsed configs, hot data with low cardinality
- Use Redis instead when: data must persist restarts, multiple processes share state, cache > 1GB
- lru-cache v7 was a ground-up rewrite; v10 ships much-improved TypeScript types over v7/v8
Download Trends
| Package | Weekly Downloads | Approach | TTL | Events |
|---|---|---|---|---|
| lru-cache | ~200M (transitive) | LRU + size/TTL | ✅ | ✅ |
| quick-lru | ~100M (transitive) | LRU (+ optional TTL) | ✅ (v7+) | ⚠️ (onEviction) |
| node-cache | ~1.5M | TTL-first | ✅ | ✅ |
Note: lru-cache and quick-lru are transitively downloaded by thousands of packages — weekly numbers reflect the full dependency tree.
When to Use In-Process Cache vs Redis
In-Process Cache (lru-cache, node-cache):
- ✅ Single process / single server
- ✅ Data fits in memory (< 100MB typical)
- ✅ Cache miss is cheap (fast computation or small DB query)
- ✅ Data doesn't need to survive restarts
- ✅ Sub-millisecond access required
- ❌ Multiple server instances (cache is not shared)

Redis:
- ✅ Multiple processes or servers
- ✅ Cache needs to survive restarts
- ✅ Large cache (> available RAM)
- ✅ Shared rate limiting, sessions
- ❌ Adds network latency (~1-5ms per operation)
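To make the in-process option concrete: the entire mechanism can be sketched in a few lines with a plain Map and a TTL check. This is a simplified sketch, not any library's implementation; the injected `now` function is there purely so expiry is testable without waiting.

```typescript
// Minimal in-process TTL cache: a Map of value + expiry timestamp.
// `now` is injected so expiry can be exercised without real waiting.
type Entry<V> = { value: V; expiresAt: number };

function createTtlCache<V>(ttlMs: number, now: () => number = Date.now) {
  const store = new Map<string, Entry<V>>();
  return {
    set(key: string, value: V): void {
      store.set(key, { value, expiresAt: now() + ttlMs });
    },
    get(key: string): V | undefined {
      const entry = store.get(key);
      if (!entry) return undefined;
      if (entry.expiresAt <= now()) {
        store.delete(key); // lazily evict expired entries on read
        return undefined;
      }
      return entry.value;
    },
  };
}
```

Everything the checklist above attributes to in-process caches follows from this shape: a read is a Map lookup (sub-millisecond, no network), and the Map lives and dies with the process.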
lru-cache
lru-cache is the most feature-complete option — used internally by npm itself for package manifest caching.
Basic Usage
```typescript
import { LRUCache } from "lru-cache"

// Size-based cache (limits total memory, not item count):
const cache = new LRUCache<string, string>({
  max: 500, // Max 500 items
  maxSize: 50_000_000, // Max 50MB total
  sizeCalculation: (value) => Buffer.byteLength(value), // Size per item
  ttl: 1000 * 60 * 5, // 5 minute TTL
  allowStale: false, // Return undefined if expired (default)
  updateAgeOnGet: false, // Don't refresh TTL on read
  updateAgeOnHas: false,
})

// Store package data:
cache.set("react", JSON.stringify(reactData))

// Retrieve:
const data = cache.get("react") // string | undefined
if (data) {
  return JSON.parse(data)
}
```
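How `maxSize` and `sizeCalculation` interact can be illustrated without the library: keep a running byte total and evict old entries until the new item fits. This is a simplified sketch of the idea only; real lru-cache evicts in least-recently-used order and handles many more edge cases.

```typescript
// Sketch of byte-budgeted insertion: evict in insertion order until
// the running total plus the new entry fits under the byte budget.
function createSizeBoundedCache(maxSizeBytes: number) {
  const store = new Map<string, string>();
  let totalBytes = 0;

  const sizeOf = (value: string) => Buffer.byteLength(value);

  return {
    set(key: string, value: string): void {
      const size = sizeOf(value);
      // Map iteration order is insertion order, so the first key is oldest.
      while (totalBytes + size > maxSizeBytes && store.size > 0) {
        const oldestKey = store.keys().next().value as string;
        totalBytes -= sizeOf(store.get(oldestKey)!);
        store.delete(oldestKey);
      }
      store.set(key, value);
      totalBytes += size;
    },
    get: (key: string) => store.get(key),
    get totalBytes() { return totalBytes; },
  };
}
```

The key design point this illustrates: with a byte budget, a single large value can evict many small ones, which is why lru-cache tracks a per-item size rather than just a count.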
Typed Cache with Objects
```typescript
interface PackageData {
  name: string
  weeklyDownloads: number
  version: string
  fetchedAt: number
}

const packageCache = new LRUCache<string, PackageData>({
  max: 1000,
  ttl: 1000 * 60 * 10, // 10 minutes
  // No sizeCalculation needed for object count limit
})

// Fetch-or-cache pattern:
async function getPackage(name: string): Promise<PackageData> {
  const cached = packageCache.get(name)
  if (cached) return cached
  const data = await fetchFromNpm(name)
  packageCache.set(name, data)
  return data
}
```
Advanced: TTL Per Item
```typescript
// Override TTL for specific items:
const cache = new LRUCache<string, PackageData>({
  max: 1000,
  ttl: 1000 * 60 * 5, // Default: 5 minutes
})

// High-traffic packages get shorter TTL (update more frequently):
cache.set("react", reactData, { ttl: 1000 * 60 }) // 1 minute

// Rarely changing packages get longer TTL:
cache.set("is-odd", isOddData, { ttl: 1000 * 60 * 60 }) // 1 hour
```
Fetch Pattern (Built-in Async Loading)
lru-cache v10 has a built-in fetchMethod that handles concurrent fetches — prevents the thundering herd problem:
```typescript
const cache = new LRUCache<string, PackageData>({
  max: 500,
  ttl: 1000 * 60 * 5,
  allowStale: true, // Return stale data while revalidating
  // Runs once per key even with concurrent requests:
  fetchMethod: async (name, staleValue, { signal }) => {
    const response = await fetch(`https://registry.npmjs.org/${name}`, { signal })
    if (!response.ok) throw new Error(`npm fetch failed: ${response.status}`)
    return response.json()
  },
})

// Multiple concurrent calls with same key = single fetch:
const [a, b, c] = await Promise.all([
  cache.fetch("react"), // → one HTTP request
  cache.fetch("react"), // → waits for same request
  cache.fetch("react"), // → waits for same request
])
```
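The deduplication that `fetchMethod` provides can be sketched independently of the library as an in-flight promise map: concurrent callers for the same key share one pending promise. A simplified sketch only; real lru-cache also handles aborts, stale values, and TTLs.

```typescript
// Thundering-herd protection: at most one in-flight load per key.
const inFlight = new Map<string, Promise<unknown>>();

function dedupedFetch<T>(key: string, load: (key: string) => Promise<T>): Promise<T> {
  const pending = inFlight.get(key);
  if (pending) return pending as Promise<T>; // join the existing request
  const promise = load(key).finally(() => inFlight.delete(key));
  inFlight.set(key, promise);
  return promise;
}
```

Two concurrent `dedupedFetch("react", loader)` calls return the very same promise object, so the underlying loader runs once per key per flight.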
Cache Statistics
```typescript
// Inspect cache state:
console.log(cache.size) // Current item count
console.log(cache.calculatedSize) // Total size (if sizeCalculation set)

// Iterate over entries:
for (const [key, value] of cache.entries()) {
  console.log(key, value)
}

// Get remaining TTL:
const ttl = cache.getRemainingTTL("react") // ms remaining; 0 if not in cache, Infinity if no TTL

// Peek without affecting LRU order:
const value = cache.peek("react")

// Invalidate:
cache.delete("react")
cache.clear()
```
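One gap worth noting: lru-cache reports size but does not count hits and misses for you. If you want a hit rate, a thin wrapper over any `get`-style cache is enough. This is a hypothetical helper sketched here, not part of the lru-cache API.

```typescript
// Wrap any Map-like `get` to count hits and misses.
function withStats<V>(cache: { get(key: string): V | undefined }) {
  let hits = 0;
  let misses = 0;
  return {
    get(key: string): V | undefined {
      const value = cache.get(key);
      if (value === undefined) misses++;
      else hits++;
      return value;
    },
    stats: () => ({ hits, misses, hitRate: hits / Math.max(1, hits + misses) }),
  };
}
```

Because the wrapper only depends on the `get` signature, the same helper works over lru-cache, quick-lru, or a plain Map.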
node-cache
node-cache has a simpler API focused on TTL semantics — closer to a traditional key-value cache interface.
```typescript
import NodeCache from "node-cache"

const cache = new NodeCache({
  stdTTL: 300, // Default TTL: 5 minutes (in seconds, not ms)
  checkperiod: 600, // Run cleanup every 10 minutes
  useClones: true, // Clone objects on get/set (prevents mutation bugs)
  deleteOnExpire: true,
  maxKeys: -1, // No limit by default
})

// Set with default TTL:
cache.set("react", reactData)

// Set with custom TTL:
cache.set("react", reactData, 3600) // 1 hour

// Get (returns undefined if expired or missing):
const data = cache.get<PackageData>("react")

// Get multiple:
const results = cache.mget<PackageData>(["react", "vue", "angular"])
// Returns only the keys that were found, e.g. { react: ..., vue: ... }

// Check existence without getting:
const exists = cache.has("react")

// Get expiry info:
const ttl = cache.getTtl("react") // Unix timestamp (ms) when the key expires

// Delete:
cache.del("react")
cache.del(["react", "vue"]) // Batch delete

// Flush:
cache.flushAll()

// Stats:
const stats = cache.getStats()
// { hits: 100, misses: 20, keys: 50, ksize: 5000, vsize: 100000 }
```
node-cache events:
```typescript
// Event-driven cache management:
cache.on("set", (key, value) => {
  console.log(`Cached: ${key}`)
})

cache.on("del", (key, value) => {
  console.log(`Evicted: ${key}`)
})

cache.on("expired", (key, value) => {
  // Trigger background refresh:
  refreshInBackground(key)
})

cache.on("flush", () => {
  console.log("Cache cleared")
})
```
node-cache limitations:
- No built-in LRU eviction — items expire by TTL only, not by access frequency
- Memory can grow unbounded without `maxKeys` or manual eviction
- Clone overhead can be significant for large objects (disable with `useClones: false`)
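node-cache's `checkperiod` cleanup is essentially a periodic sweep over all entries. The idea can be sketched with a manually invoked sweep and an injected clock; this is a simplified illustration of the mechanism, not node-cache's internals.

```typescript
// TTL-first store with an explicit sweep, mimicking checkperiod-style cleanup.
type Timed<V> = { value: V; expiresAt: number };

function createSweptCache<V>(now: () => number = Date.now) {
  const store = new Map<string, Timed<V>>();
  return {
    set(key: string, value: V, ttlMs: number): void {
      store.set(key, { value, expiresAt: now() + ttlMs });
    },
    get(key: string): V | undefined {
      const e = store.get(key);
      return e && e.expiresAt > now() ? e.value : undefined;
    },
    // What a checkperiod timer does: drop every expired entry.
    sweep(): number {
      let removed = 0;
      for (const [key, e] of store) {
        if (e.expiresAt <= now()) {
          store.delete(key);
          removed++;
        }
      }
      return removed;
    },
    get size() { return store.size; },
  };
}
```

The sweep is what keeps memory in check in a TTL-only design: without it (or a `maxKeys` bound), entries that expire but are never read again would linger indefinitely.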
quick-lru
quick-lru (by Sindre Sorhus) is zero-dependency, ESM-native, and designed for embedding in libraries.
```typescript
import QuickLRU from "quick-lru"

// LRU by item count, with optional TTL via maxAge:
const cache = new QuickLRU<string, PackageData>({
  maxSize: 1000, // Max items (evicts least-recently-used when full)
  maxAge: 300_000, // Optional TTL in milliseconds (v7+)
  onEviction: (key, value) => {
    console.log(`Evicted: ${key}`)
  },
})

// Standard Map-like API:
cache.set("react", reactData)
const data = cache.get("react") // PackageData | undefined
const exists = cache.has("react")
cache.delete("react")
cache.clear()

// Size:
console.log(cache.size)

// Iteration (LRU order):
for (const [key, value] of cache) {
  console.log(key)
}
```
quick-lru in a library:
```typescript
// lru-cache adds ~100KB to your users' install; quick-lru is ~3KB,
// a meaningful difference when you're shipping a library:
import QuickLRU from "quick-lru"

// Cache resolved package names internally:
const resolvedCache = new QuickLRU<string, string>({ maxSize: 100 })

export async function resolvePackage(name: string): Promise<string> {
  if (resolvedCache.has(name)) return resolvedCache.get(name)!
  const resolved = await computeResolution(name)
  resolvedCache.set(name, resolved)
  return resolved
}
```
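quick-lru stays tiny partly because of its eviction strategy: it cites the hashlru algorithm as inspiration, a generational two-map scheme where a full "recent" map is demoted wholesale instead of evicting entries one at a time. A simplified sketch of that idea (not quick-lru's actual source):

```typescript
// Generational LRU: two maps. When `recent` fills, it becomes `old`
// and a fresh `recent` starts; the previous `old` is discarded whole.
function createGenerationalLru<V>(maxSize: number) {
  let recent = new Map<string, V>();
  let old = new Map<string, V>();

  const promote = (key: string, value: V) => {
    recent.set(key, value);
    if (recent.size >= maxSize) {
      old = recent; // whole-generation eviction: O(1), no per-entry work
      recent = new Map();
    }
  };

  return {
    set(key: string, value: V): void {
      if (recent.has(key)) recent.set(key, value);
      else promote(key, value);
    },
    get(key: string): V | undefined {
      if (recent.has(key)) return recent.get(key);
      if (old.has(key)) {
        const value = old.get(key)!;
        old.delete(key);
        promote(key, value); // touched entries survive the next rotation
        return value;
      }
      return undefined;
    },
  };
}
```

The trade-off: eviction is approximate (untouched entries from the old generation disappear together, and up to roughly 2× maxSize entries can be live briefly), but every operation stays constant-time with no linked-list bookkeeping.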
Feature Comparison
| Feature | lru-cache | node-cache | quick-lru |
|---|---|---|---|
| LRU eviction | ✅ | ❌ | ✅ |
| TTL support | ✅ | ✅ | ✅ (v7+) |
| Per-item TTL | ✅ | ✅ | ❌ |
| Size-based limit | ✅ (bytes) | ⚠️ (count) | ⚠️ (count) |
| Async fetchMethod | ✅ | ❌ | ❌ |
| Events | ⚠️ (dispose callbacks) | ✅ | ⚠️ (onEviction) |
| TypeScript | ✅ Native | ✅ | ✅ ESM |
| Zero dependencies | ❌ | ❌ | ✅ |
| ESM native | ✅ (dual ESM/CJS) | ❌ (CJS) | ✅ (ESM-only) |
| Bundle size | ~100KB | ~50KB | ~3KB |
When to Use Each
Choose lru-cache if:
- You need the most feature-complete in-memory cache
- Thundering herd protection matters (built-in `fetchMethod` deduplication)
- Memory-size-based limits are required (not just item count)
- You want per-item TTL overrides
Choose node-cache if:
- TTL semantics are your primary concern (not LRU)
- Cache events for monitoring are important
- The API's intuitive get/set/expire model fits your mental model
- You don't need LRU eviction
Choose quick-lru if:
- You're a library author and bundle size matters
- Zero-dependency is a requirement
- You need a simple LRU with optional TTL
- The clean ESM-native API is preferred
Methodology
Download data from npm registry (weekly average, February 2026). Feature comparison based on lru-cache v10.x, node-cache v5.x, and quick-lru v7.x.