piscina vs tinypool vs workerpool: Worker Thread Pools in Node.js (2026)
TL;DR
piscina is the high-performance worker pool built by the Node.js team — supports transferable objects, task cancellation via AbortController, memory-efficient communication, and automatic worker recycling. tinypool is the minimal fork of piscina — used internally by Vitest, smaller API surface, ESM-native, same core functionality. workerpool is the mature cross-environment pool — works in Node.js AND browser (via Web Workers), offers named function execution, and statistics. In 2026: piscina for production Node.js services, tinypool if you want the lightest option, workerpool if you need browser compatibility.
Key Takeaways
- piscina: ~3M weekly downloads — Node.js team, transferable objects, histogram metrics
- tinypool: ~8M weekly downloads — fork of piscina, powers Vitest, ESM-first, minimal
- workerpool: ~3M weekly downloads — cross-environment (Node + browser), mature, named functions
- All three manage a pool of worker threads — submit tasks, get results, avoid main thread blocking
- Worker threads share memory via SharedArrayBuffer and transfer ownership via Transferable
- tinypool's download count is high because Vitest depends on it
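The shared-memory point above can be illustrated without a pool. A minimal sketch, shown single-threaded for brevity: `SharedArrayBuffer` is the memory region, and `Atomics` provides race-free reads and writes. In real code the buffer would be passed to a worker via `workerData` or `postMessage`.

```typescript
// SharedArrayBuffer is memory that multiple threads can view at once;
// Atomics reads/writes it without data races. Single-threaded here for
// illustration — in practice the buffer is shared with a worker thread.
const shared = new SharedArrayBuffer(4)   // 4 bytes of shared memory
const counter = new Int32Array(shared)    // viewed as one 32-bit integer

Atomics.add(counter, 0, 1)                // atomic increment (thread-safe)
Atomics.add(counter, 0, 1)

console.log(Atomics.load(counter, 0))     // → 2
```

Unlike regular postMessage payloads, a SharedArrayBuffer is not copied when sent to a worker: both threads see the same bytes, which is why atomic operations are needed.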
Why Worker Thread Pools?
Problem: Node.js main thread handles I/O AND computation
CPU-intensive tasks block the event loop:
Client A: POST /api/report → generating PDF (5 seconds) → event loop blocked
Client B: GET /api/health → waiting... waiting... timeout!
Solution: Worker thread pool
Main thread (event loop):
Client A: POST /api/report → submit to pool → continue handling other requests
Client B: GET /api/health → responds immediately ✅
Worker pool (background threads):
Worker 1: generating PDF...
Worker 2: resizing image...
Worker 3: parsing CSV...
Worker 4: idle (ready for next task)
Good candidates for worker threads:
- PDF generation
- Image processing (sharp, canvas)
- CSV/Excel parsing (large files)
- Crypto operations (bcrypt, scrypt)
- Data transformation (large JSON → aggregate)
- Code compilation/transpilation
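To see why these tasks belong off the main thread, a minimal sketch: a synchronous CPU loop prevents everything else (timers, I/O callbacks, incoming requests) from running until it returns. Here `busyWork` is a stand-in for any of the tasks listed above.

```typescript
// Stand-in for CPU-bound work (PDF generation, hashing, parsing):
function busyWork(ms: number): number {
  const end = Date.now() + ms
  let iterations = 0
  while (Date.now() < end) iterations++   // spins — never yields to the event loop
  return iterations
}

const start = Date.now()
setTimeout(() => {
  // Scheduled for 10 ms, but cannot fire until busyWork returns:
  console.log(`timer fired after ${Date.now() - start} ms`)
}, 10)

busyWork(100)                             // blocks the event loop for ~100 ms
const elapsed = Date.now() - start        // ≥ 100 — nothing else ran meanwhile
```

Moving `busyWork` into a pool worker lets the 10 ms timer (and every pending request) fire on schedule, which is exactly the trade the diagrams above describe.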
piscina
piscina — high-performance worker pool:
Basic setup
// worker.ts — the code that runs in each worker thread:
export default function processPackage(data: { name: string; downloads: number[] }) {
// CPU-intensive calculation in background thread:
const avg = data.downloads.reduce((a, b) => a + b, 0) / data.downloads.length
const stdDev = Math.sqrt(
data.downloads.reduce((sum, d) => sum + (d - avg) ** 2, 0) / data.downloads.length
)
return { name: data.name, average: avg, stdDev, trend: avg > 100000 ? "growing" : "stable" }
}
// main.ts — submit tasks to the pool:
import Piscina from "piscina"
import { resolve } from "node:path"
const pool = new Piscina({
filename: resolve(__dirname, "worker.js"),
maxThreads: 4, // Max 4 workers
minThreads: 2, // Keep 2 alive
idleTimeout: 30_000, // Kill idle workers after 30s
})
// Submit tasks:
const result = await pool.run({
name: "react",
downloads: [5_000_000, 5_100_000, 4_900_000, 5_200_000],
})
console.log(result) // { name: "react", average: 5050000, stdDev: ..., trend: "growing" }
Express integration
import express from "express"
import Piscina from "piscina"
import { resolve } from "node:path"
const pool = new Piscina({
filename: resolve(__dirname, "workers/report-generator.js"),
maxThreads: 4,
})
const app = express()
app.post("/api/reports/generate", async (req, res) => {
try {
// Offload CPU work to pool — event loop stays free:
const report = await pool.run({
packages: req.body.packages,
dateRange: req.body.dateRange,
format: "pdf",
})
res.setHeader("Content-Type", "application/pdf")
res.send(report)
} catch (error) {
res.status(500).json({ error: "Report generation failed" })
}
})
Task cancellation with AbortController
const controller = new AbortController()
// Cancel after 10 seconds:
const timeout = setTimeout(() => controller.abort(), 10_000)
try {
  const result = await pool.run(data, { signal: controller.signal })
  return result
} catch (error) {
  if (error.name === "AbortError") {
    console.log("Task cancelled — took too long")
  }
  throw error
} finally {
  clearTimeout(timeout) // always clean up the timer, even on failure
}
Transferable objects (zero-copy)
// worker.ts:
export default function processImage(buffer: ArrayBuffer) {
// Process the buffer (no copy was made — ownership transferred)
const view = new Uint8Array(buffer)
// ... process image data ...
return { width: 800, height: 600, size: view.length }
}
// main.ts:
const imageBuffer = await fs.readFile("image.png")
const arrayBuffer = imageBuffer.buffer.slice(
imageBuffer.byteOffset,
imageBuffer.byteOffset + imageBuffer.byteLength
)
// Transfer ownership (zero-copy — buffer moved, not copied):
const result = await pool.run(arrayBuffer, {
transferList: [arrayBuffer],
})
// arrayBuffer is now detached (empty) — ownership transferred to worker
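What "detached" means can be checked without spinning up a pool: Node's global `structuredClone` accepts the same kind of transfer list and applies the same move semantics as a worker `postMessage`, so this sketch stands in for the piscina call above.

```typescript
// Transferring moves ownership of the underlying memory; the source
// ArrayBuffer is left detached (byteLength 0).
const buf = new ArrayBuffer(8)
const moved = structuredClone(buf, { transfer: [buf] })

console.log(moved.byteLength) // 8 — the transferred value owns the memory
console.log(buf.byteLength)   // 0 — original is detached
```

Any later attempt to create a view over the detached buffer throws, which is the price of skipping the copy.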
Pool statistics
// piscina exposes histogram metrics:
console.log({
completed: pool.completed, // Total completed tasks
queueSize: pool.queueSize, // Tasks waiting in queue
utilization: pool.utilization, // 0-1 utilization ratio
runTime: pool.runTime, // Histogram of task run times
waitTime: pool.waitTime, // Histogram of queue wait times
})
tinypool
tinypool — minimal piscina fork:
Basic setup
import Tinypool from "tinypool"
const pool = new Tinypool({
filename: new URL("./worker.js", import.meta.url).href, // ESM-native
minThreads: 2,
maxThreads: 4,
idleTimeout: 30_000,
})
const result = await pool.run({ name: "react", downloads: [5_000_000] })
Worker (ESM)
// worker.ts (ESM):
export default function ({ name, downloads }: { name: string; downloads: number[] }) {
const avg = downloads.reduce((a, b) => a + b, 0) / downloads.length
return { name, average: avg }
}
Why Vitest uses tinypool
// Vitest runs each test file in a separate worker thread via tinypool:
// - Test files run in parallel across workers
// - Each worker has its own module cache (isolation)
// - Main thread coordinates results
// This is why:
// vitest --pool=threads → tinypool (default)
// vitest --pool=forks → child_process.fork
// vitest --pool=vmThreads → worker_threads + VM context
Differences from piscina
tinypool vs piscina:
✅ ESM-native (import.meta.url for worker path)
✅ Smaller bundle (~2 KB vs ~8 KB)
✅ Same core API (run, options, abort)
❌ No histogram metrics (pool.runTime, pool.waitTime)
⚠️ Partial transferable-object support (no piscina-style transferList helpers)
❌ Less configuration (simpler = fewer knobs)
workerpool
workerpool — cross-environment pool:
Named functions
// worker.ts — export named functions:
import workerpool from "workerpool"
function calculateScore(name: string, downloads: number[]): number {
const avg = downloads.reduce((a, b) => a + b, 0) / downloads.length
return Math.round(avg / 1000)
}
function generateReport(packages: string[]): string {
return packages.map(p => `Report for ${p}`).join("\n")
}
// Register functions by name:
workerpool.worker({
calculateScore,
generateReport,
})
// main.ts — call by name:
import workerpool from "workerpool"
const pool = workerpool.pool("./worker.js", {
maxWorkers: 4,
minWorkers: 2,
})
// Call specific named functions:
const score = await pool.exec("calculateScore", ["react", [5_000_000, 5_100_000]])
const report = await pool.exec("generateReport", [["react", "vue", "svelte"]])
// Pool statistics:
const stats = pool.stats()
// → { totalWorkers: 4, busyWorkers: 2, idleWorkers: 2, pendingTasks: 0, activeTasks: 2 }
// Terminate pool:
await pool.terminate()
Browser support (Web Workers)
// workerpool works in the browser using Web Workers:
import workerpool from "workerpool"
// Browser — uses Web Workers automatically:
const pool = workerpool.pool()
// Inline function execution (no separate worker file):
const result = await pool.exec((a: number, b: number) => a + b, [2, 3])
// → 5
// Or with a worker URL:
const pool2 = workerpool.pool("/workers/processor.js")
Timeout and cancellation
const pool = workerpool.pool("./worker.js")
// Progress events: the worker streams updates via workerpool.workerEmit:
try {
  const result = await pool.exec("generateReport", [largeDataset], {
    on: function (payload) {
      console.log("Progress:", payload) // Worker can send progress updates
    },
  })
} catch (error) {
  // Handle worker errors
}
// Timeout: the returned promise exposes .timeout(ms):
const report = await pool.exec("generateReport", [largeDataset]).timeout(10_000)
// Rejects with a TimeoutError if the task takes longer than 10 seconds
// Cancel via promise:
const promise = pool.exec("longRunningTask", [data])
promise.cancel() // Terminates the worker running this task
Feature Comparison
| Feature | piscina | tinypool | workerpool |
|---|---|---|---|
| Worker threads | ✅ | ✅ | ✅ |
| Web Workers (browser) | ❌ | ❌ | ✅ |
| ESM native | ✅ | ✅ (better) | ⚠️ |
| Named functions | ❌ | ❌ | ✅ |
| AbortController | ✅ | ✅ | ❌ (cancel()) |
| Transferable objects | ✅ | ⚠️ | ❌ |
| Histogram metrics | ✅ | ❌ | ❌ |
| Pool statistics | ✅ | ✅ | ✅ |
| Inline execution | ❌ | ❌ | ✅ |
| Worker recycling | ✅ | ✅ | ✅ |
| TypeScript | ✅ | ✅ | ✅ |
| Weekly downloads | ~3M | ~8M | ~3M |
Thread Pool Sizing
import { availableParallelism } from "node:os"
const cpus = availableParallelism() // e.g., 8
// CPU-bound tasks (image processing, crypto):
const pool = new Piscina({
maxThreads: cpus, // One thread per CPU core
minThreads: Math.floor(cpus / 2), // Keep half alive
})
// Mixed I/O + CPU tasks:
const pool = new Piscina({
maxThreads: cpus * 2, // More threads than cores (I/O waits)
minThreads: Math.floor(cpus / 2),
})
// Memory-constrained (each worker uses ~50 MB):
import { totalmem } from "node:os"
const totalMemoryMB = totalmem() / (1024 * 1024)
const maxByMemory = Math.floor(totalMemoryMB / 50)
const pool = new Piscina({
maxThreads: Math.min(cpus, maxByMemory),
})
When to Use Each
Choose piscina if:
- Production Node.js server offloading CPU work
- Need transferable objects for zero-copy data passing
- Want histogram metrics for monitoring task performance
- Built by the Node.js team — battle-tested
Choose tinypool if:
- Want the lightest worker pool option (~2 KB)
- ESM-first project
- Building tools similar to Vitest that need parallel execution
- Don't need histogram metrics or transferable objects
Choose workerpool if:
- Need browser support — Web Workers alongside Node.js worker threads
- Prefer named function execution (pool.exec("functionName", args))
- Want inline execution without separate worker files
- Cross-environment library or isomorphic application
Methodology
Download data from npm registry (weekly average, February 2026). Feature comparison based on piscina v4.x, tinypool v1.x, and workerpool v9.x.
Compare concurrency and worker thread packages on PkgPulse →