Web Workers vs Worker Threads vs SharedArrayBuffer: Concurrency in JavaScript (2026)
TL;DR
JavaScript is single-threaded — but you can escape the main thread for CPU-intensive work. Web Workers run scripts on background threads in the browser, keeping the UI responsive. Worker Threads (Node.js) run JavaScript in parallel threads for CPU-bound tasks. SharedArrayBuffer enables true shared memory between threads, avoiding message-passing copies. Use workers for heavy computation, not I/O (which is already async in JavaScript).
Key Takeaways
- JavaScript's event loop handles I/O concurrently without threads — workers are for CPU work
- Web Workers: Browser-only, isolated global scope, postMessage communication
- Worker Threads: Node.js parallel JavaScript execution, lighter-weight and faster to spawn than child_process
- SharedArrayBuffer: Shared memory between workers — use with Atomics for synchronization
- Common use cases: image processing, file parsing, cryptography, ML inference, search indexing
- Avoid for: database queries, HTTP requests — those are already async without threads
When Workers Are Actually Needed
// ❌ Workers NOT needed — already async, doesn't block main thread:
const data = await fetch("https://api.example.com/data") // I/O, non-blocking
const user = await db.user.findUnique({ where: { id } }) // I/O, non-blocking
const hash = await bcrypt.hash(password, 10) // CPU-bound, but offloaded to libuv's threadpool, so it doesn't block the event loop
// ✅ Workers ARE needed — CPU-bound, blocks main thread:
const result = parseCSV(tenMegabyteString) // CPU-bound
const compressed = zlib.gzipSync(largeBuffer) // CPU-bound (sync variant)
const hash = sha256OfLargeFile(fileBuffer) // CPU-bound
const matches = fuseSearch(largeIndex, query) // CPU-bound
Web Workers (Browser)
Web Workers run on separate OS threads in the browser:
// worker.ts (separate file compiled separately):
// No access to: DOM, window, document, localStorage
// Has access to: fetch, WebSockets, IndexedDB, Crypto API, setTimeout
self.onmessage = (event: MessageEvent) => {
  const { type, data } = event.data
  switch (type) {
    case "parseCSV": {
      // Heavy CPU work — won't freeze the UI:
      const rows = parseCSVData(data.content)
      self.postMessage({ type: "parsedCSV", rows, count: rows.length })
      break
    }
    case "calculateStats": {
      const stats = computeStatistics(data.numbers)
      self.postMessage({ type: "statsResult", stats })
      break
    }
  }
}
function parseCSVData(content: string) {
  // ... complex parsing that takes 500ms+
  return rows
}
// main.ts (browser):
const worker = new Worker(new URL("./worker.ts", import.meta.url), {
  type: "module", // ES modules in workers (modern browsers)
})
// Send work to the worker:
worker.postMessage({ type: "parseCSV", data: { content: csvString } })
// Receive results:
worker.onmessage = (event: MessageEvent) => {
  const { type, rows, count } = event.data
  if (type === "parsedCSV") {
    setRows(rows)
    console.log(`Parsed ${count} rows`)
  }
}
worker.onerror = (error) => {
  console.error("Worker error:", error)
}
// Terminate when done:
// worker.terminate()
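Raw onmessage handlers get unwieldy once several requests are in flight. A minimal promise-based sketch of the request/response pattern (the `request` helper and its id-correlation protocol are illustrative, not part of the Worker API); it works against any endpoint that exposes postMessage and addEventListener, such as a Worker or a MessagePort:

```javascript
// Each outgoing message carries an id; the Promise resolves when a reply
// with the same id comes back. Works with a Worker, or a MessagePort in tests.
let nextId = 0

function request(endpoint, payload) {
  const id = nextId++
  return new Promise((resolve) => {
    const onReply = (event) => {
      if (event.data && event.data.id === id) {
        endpoint.removeEventListener("message", onReply)
        resolve(event.data.result)
      }
    }
    endpoint.addEventListener("message", onReply)
    endpoint.postMessage({ id, payload })
  })
}
```

With a protocol like this, a call site reads like `const rows = await request(worker, { type: "parseCSV", content: csvString })` — which is essentially what libraries like Comlink automate.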
Transferable objects (zero-copy message passing):
// Default postMessage copies data — slow for large ArrayBuffers:
const largeBuffer = new ArrayBuffer(50 * 1024 * 1024) // 50MB
worker.postMessage(largeBuffer) // Copies 50MB — slow!
// Transfer ownership (no copy — instant):
worker.postMessage(largeBuffer, [largeBuffer])
// largeBuffer is now detached in the main thread: its byteLength reads 0, and constructing a view over it throws
// Worker receives it instantly and can use it:
self.onmessage = (e) => {
  const buffer = e.data // Full 50MB instantly transferred
  const view = new Uint8Array(buffer)
  // process...
  // Transfer back when done (the buffer detaches in the worker this time):
  self.postMessage(buffer, [buffer])
}
Worker Pool (comlink + concurrent work):
import * as Comlink from "comlink"
// worker.ts:
const api = {
  async processImage(imageData: ImageData, options: ProcessOptions): Promise<ImageData> {
    return applyFilters(imageData, options)
  },
  async calculateHash(data: ArrayBuffer): Promise<string> {
    return sha256(data)
  },
}
Comlink.expose(api)
export type WorkerAPI = typeof api
// main.ts — use workers like regular async functions:
import * as Comlink from "comlink"
import type { WorkerAPI } from "./worker"
const worker = Comlink.wrap<WorkerAPI>(
  new Worker(new URL("./worker.ts", import.meta.url), { type: "module" })
)
// Looks like a regular async call — runs in worker:
const processedImage = await worker.processImage(imageData, { brightness: 1.2 })
const hash = await worker.calculateHash(fileBuffer)
Worker Threads (Node.js)
worker_threads is Node.js's built-in threading module:
// heavy-task.js — runs in worker thread:
import { parentPort, workerData } from "worker_threads"
// workerData: data passed when creating the worker (read-only):
const { csvContent, options } = workerData
// Do the heavy work:
const result = parseAndAnalyzeCSV(csvContent, options)
// Send result back:
parentPort?.postMessage({ status: "done", result })
// main.ts — create and use worker threads:
import { Worker } from "worker_threads"
import { resolve } from "path"
function runWorker<T>(workerFile: string, data: unknown): Promise<T> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(workerFile, { workerData: data })
    worker.on("message", resolve)
    worker.on("error", reject)
    worker.on("exit", (code) => {
      if (code !== 0) reject(new Error(`Worker exited with code ${code}`))
    })
  })
}
// Usage:
const result = await runWorker<ParseResult>(
  resolve(__dirname, "./heavy-task.js"),
  { csvContent: fileContent, options: { delimiter: ",", header: true } }
)
Worker thread pool with Piscina:
import Piscina from "piscina"
import os from "os"
import { resolve } from "path"
// Create a pool of worker threads (reuses threads between tasks):
const pool = new Piscina({
  filename: resolve(__dirname, "./worker.js"),
  maxThreads: Math.max(1, os.cpus().length - 1),
  minThreads: 2,
  idleTimeout: 60000, // Destroy idle threads after 60s
})
// Submit tasks to the pool:
const results = await Promise.all(
  filePaths.map((filepath) =>
    pool.run({ filepath }, { name: "processFile" })
  )
)
// Worker file (worker.js):
const fs = require("fs/promises")
module.exports = {
  async processFile({ filepath }) {
    const content = await fs.readFile(filepath, "utf-8")
    return analyzeContent(content)
  },
}
Worker threads performance gains:
// Sequential (blocks for each file):
const sequentialResults = []
for (const file of largeFiles) {
  sequentialResults.push(await processFile(file)) // 4 files × 2s each = 8s total
}
// Parallel with worker pool (4 CPU cores):
const results = await Promise.all(
  largeFiles.map((file) => pool.run({ file })) // ~2s total (parallel)
)
SharedArrayBuffer
SharedArrayBuffer enables true shared memory — zero-copy, no message passing:
// Main thread creates shared buffer:
const sharedBuffer = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * 1000)
const sharedArray = new Int32Array(sharedBuffer)
// Initialize data:
for (let i = 0; i < 1000; i++) {
  sharedArray[i] = i
}
// Pass to workers — no copying:
worker1.postMessage({ sharedBuffer, range: [0, 499] })
worker2.postMessage({ sharedBuffer, range: [500, 999] })
// Workers read/write the SAME memory:
// worker.ts:
self.onmessage = (e) => {
  const { sharedBuffer, range } = e.data
  const shared = new Int32Array(sharedBuffer) // Points to same memory!
  const [start, end] = range
  for (let i = start; i <= end; i++) {
    shared[i] *= 2 // Write directly to shared memory
  }
  self.postMessage("done")
}
Atomics — safe concurrent access:
// Without Atomics: race condition (two workers may read-modify-write simultaneously)
// With Atomics: atomic (indivisible) operations
const counter = new Int32Array(new SharedArrayBuffer(4))
// Worker 1:
Atomics.add(counter, 0, 1) // Atomically increment — thread-safe
// Worker 2 (simultaneous):
Atomics.add(counter, 0, 1) // Also safe — no race condition
// Final value is always 2 (not 1 from a race)
console.log(Atomics.load(counter, 0)) // 2
// Wait/notify — thread synchronization (Atomics.wait is not allowed on the browser's main thread):
// Worker thread blocks while index 0 still equals 0:
Atomics.wait(sharedInt32, 0, 0)
// Main thread signals the worker:
Atomics.store(sharedInt32, 0, 1) // Write 1
Atomics.notify(sharedInt32, 0, 1) // Wake up to 1 waiting thread
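Atomics.compareExchange is enough to build a minimal lock on top of shared memory. A sketch, exercised here from a single thread to show the acquire/release semantics (a real contender would retry in a loop or Atomics.wait while the slot is held; the helper names are ours):

```javascript
const UNLOCKED = 0
const LOCKED = 1

// A minimal lock over one Int32 slot of shared memory.
// compareExchange writes LOCKED only if the slot currently holds UNLOCKED,
// and returns the value it found — so a return of UNLOCKED means "acquired".
function tryAcquire(lock) {
  return Atomics.compareExchange(lock, 0, UNLOCKED, LOCKED) === UNLOCKED
}

function release(lock) {
  Atomics.store(lock, 0, UNLOCKED)
  Atomics.notify(lock, 0, 1) // wake one waiter, if any
}

const lock = new Int32Array(new SharedArrayBuffer(4))
```

First `tryAcquire(lock)` returns true; a second call returns false until `release(lock)` runs — the indivisible read-modify-write is what prevents two threads from both seeing UNLOCKED.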
SharedArrayBuffer security requirement:
# SharedArrayBuffer requires cross-origin isolation.
# Both headers must be set on your server's responses:
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
# Verify in the browser console:
# console.log(crossOriginIsolated) // Must be true
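Setting the two headers is a one-liner in most Node servers; an illustrative helper (the function name is ours, not a standard API):

```javascript
// Apply the two cross-origin isolation headers to any response object
// exposing setHeader (Node's http.ServerResponse, Express res, etc.).
function applyIsolationHeaders(res) {
  res.setHeader("Cross-Origin-Opener-Policy", "same-origin")
  res.setHeader("Cross-Origin-Embedder-Policy", "require-corp")
}
```

In a plain Node server this would be called at the top of the request handler, e.g. `http.createServer((req, res) => { applyIsolationHeaders(res); ... })`; note that COEP also constrains every subresource the page embeds.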
Real-World Use Cases
Large File Processing (Worker Threads)
// Process 10GB CSV in parallel chunks:
const CHUNK_SIZE = 10 * 1024 * 1024 // 10MB chunks
const pool = new Piscina({ filename: "./csv-worker.js", maxThreads: 4 })
async function processLargeCSV(filepath: string) {
  const stat = await fs.stat(filepath)
  const chunks = Math.ceil(stat.size / CHUNK_SIZE)
  const results = await Promise.all(
    Array.from({ length: chunks }, (_, i) =>
      pool.run({ filepath, offset: i * CHUNK_SIZE, size: CHUNK_SIZE })
    )
  )
  return results.flat()
}
Image Processing (Web Workers)
// Process uploaded images without blocking UI:
const imageWorker = Comlink.wrap<ImageAPI>(
  new Worker(new URL("./image-worker.ts", import.meta.url), { type: "module" })
)
async function handleImageUpload(file: File) {
  const buffer = await file.arrayBuffer()
  // All processing happens off main thread:
  const processed = await imageWorker.resize(buffer, { width: 800, height: 600 })
  const webp = await imageWorker.convertToWebP(processed, { quality: 85 })
  const thumbnail = await imageWorker.resize(webp, { width: 150, height: 150 })
  return { original: webp, thumbnail }
}
When to Use Each
Use Web Workers (browser) for:
- Image/video processing in the browser
- Large CSV/JSON parsing
- Cryptographic operations (key generation, signing)
- ML model inference (TensorFlow.js, ONNX)
- Search index building (Fuse.js, FlexSearch)
- Keeping the UI responsive during heavy computation
Use Worker Threads (Node.js) for:
- CPU-bound data transformation pipelines
- Image processing on the server (Sharp in a pool)
- Cryptography at scale (bcrypt/Argon2 parallel hashing)
- Large file analysis (parsing, compression, diffing)
- AI/ML inference tasks
Use SharedArrayBuffer for:
- Zero-copy data sharing between multiple workers
- Lock-free data structures (ring buffers for real-time data)
- WASM modules that need shared memory
- High-performance numerical computing
Don't use workers for:
- Database queries (already async via event loop)
- HTTP requests (already async)
- File reads/writes (already async via libuv)
- Anything that's I/O-bound (workers don't help, just add overhead)
Methodology
Feature comparison based on Web Workers spec (WHATWG), Node.js worker_threads documentation (v22), and SharedArrayBuffer/Atomics ECMAScript specification. Performance examples are representative of typical workloads.