
Web Workers vs Worker Threads vs SharedArrayBuffer 2026

Compare Web Workers, Worker Threads, and SharedArrayBuffer for JavaScript concurrency. Event loop, memory sharing, COOP/COEP headers, and worker pool patterns.

·PkgPulse Team·

TL;DR

JavaScript is single-threaded — but you can escape the main thread when you need CPU-intensive work. Web Workers run scripts on background threads in the browser, keeping the UI responsive. Worker Threads (Node.js) run JavaScript in parallel threads for CPU-bound tasks. SharedArrayBuffer enables true shared memory between threads, replacing message-passing overhead with direct memory access. Use workers for heavy computation, not I/O — which is already async in JavaScript.

Key Takeaways

  • JavaScript's event loop handles I/O concurrently without threads — workers are for CPU work
  • Web Workers: Browser-only, isolated global scope, postMessage communication
  • Worker Threads: Node.js parallel JavaScript execution, faster than child_process
  • SharedArrayBuffer: Shared memory between workers — use with Atomics for synchronization
  • Common use cases: image processing, file parsing, cryptography, ML inference, search indexing
  • Avoid for: database queries, HTTP requests — those are already async without threads

Quick Comparison

| Feature | Web Workers | Worker Threads | SharedArrayBuffer |
| --- | --- | --- | --- |
| Environment | Browser only | Node.js only | Both (with COOP/COEP in browser) |
| Use case | UI offloading, browser CPU work | Server CPU-bound tasks | Zero-copy memory sharing |
| Communication | postMessage (copies data) | postMessage + workerData | Shared memory (no copy) |
| Isolation | Full (separate global scope) | Full (separate V8 context) | Partial (shared buffer, isolated JS) |
| Startup cost | ~5-30ms | ~5-20ms | Negligible (just a buffer) |
| Pool library | Comlink | Piscina | N/A (used with either) |
| DOM access | No | N/A | N/A |
| Module system | ES modules (modern browsers) | CommonJS + ESM | N/A |
| Debugging | Chrome DevTools worker panel | --inspect with worker port | Atomics.load inspection |
| Browser requirement | All modern browsers | N/A | COOP + COEP headers required |

The JavaScript Event Loop and Why Threads Exist

JavaScript was designed for the browser, where a single-threaded model made DOM manipulation safe by eliminating concurrent modification races. The event loop — the mechanism that lets JavaScript handle many concurrent operations with one thread — works by delegating waiting to the operating system. When you call fetch(), the event loop dispatches the HTTP request to the OS network stack and registers a callback. While the network request is in flight, the event loop picks up other tasks: UI events, timers, other pending callbacks. When the response arrives, the OS notifies the event loop, which queues the callback for execution on the next tick.

This model is extraordinarily efficient for I/O-heavy workloads. A Node.js web server with a single thread can handle thousands of simultaneous requests because most of each request's lifetime is I/O wait — the database is processing, the network is transporting bytes, the filesystem is reading blocks. The event loop's single thread is almost never the bottleneck when the work is fundamentally about waiting.

The limitation surfaces when JavaScript code needs to do genuine CPU computation: parsing a 50MB CSV file, applying a blur filter to a 4K image, running a machine learning inference model, compressing a video, or sorting millions of records with a complex comparator. These operations keep the V8 thread continuously busy executing JavaScript opcodes. While they run, the event loop is frozen — no new requests are dispatched, no timers fire, no UI events are processed. In a browser, this causes jank: the page stops responding to user input. In a Node.js server, it causes a stall visible to every concurrent user.
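The freeze is easy to observe. In this minimal Node.js sketch (busyWait is a hypothetical helper, purely for illustration), a timer due in 50ms cannot fire while synchronous CPU work holds the thread:

```javascript
// Synchronous CPU work freezes the event loop: no timers, no I/O callbacks.
function busyWait(ms) {
  const end = Date.now() + ms
  while (Date.now() < end) {}  // spin on the V8 thread
}

let fired = false
setTimeout(() => { fired = true }, 50)  // due in 50ms

busyWait(200)       // hold the thread for 200ms
console.log(fired)  // false: the timer was due but could not run

setTimeout(() => console.log(fired), 0)  // true: fires once the loop turns
```

The 50ms timer is long overdue by the time busyWait returns, yet its callback only runs after the current synchronous task yields back to the event loop.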

Workers solve this by creating additional threads, each with its own V8 instance, its own JavaScript global scope, and its own event loop. CPU work moved to a worker thread runs in parallel with the main thread's event loop — the main thread continues handling I/O and UI events while the worker churns through computation. Communication between threads happens through message passing or shared memory rather than shared JavaScript state, which avoids the race conditions that plague multi-threaded systems in languages like Java and C++.

The design trade-off is intentional: JavaScript within any single worker remains single-threaded and race-condition-free. Parallelism is achieved through isolation, not shared mutable state. This makes worker-based JavaScript far easier to reason about than traditional multi-threaded programming, at the cost of message-passing overhead for data exchange.


When Workers Are Actually Needed

// ❌ Workers NOT needed — already async, doesn't block main thread:
const data = await fetch("https://api.example.com/data")  // I/O, non-blocking
const user = await db.user.findUnique({ where: { id } })  // I/O, non-blocking
const hash = await bcrypt.hash(password, 10)              // CPU-heavy, but offloaded to the libuv thread pool

// ✅ Workers ARE needed — CPU-bound, blocks main thread:
const result = parseCSV(tenMegabyteString)           // CPU-bound
const compressed = zlib.gzipSync(largeBuffer)        // CPU-bound (sync variant)
const hash = sha256OfLargeFile(fileBuffer)           // CPU-bound
const matches = fuseSearch(largeIndex, query)        // CPU-bound

Web Workers (Browser)

Web Workers run on separate OS threads in the browser:

// worker.ts (separate file compiled separately):
// No access to: DOM, window, document, localStorage
// Has access to: fetch, WebSockets, IndexedDB, Crypto API, setTimeout

self.onmessage = (event: MessageEvent) => {
  const { type, data } = event.data

  switch (type) {
    case "parseCSV": {
      // Heavy CPU work — won't freeze the UI:
      const rows = parseCSVData(data.content)
      self.postMessage({ type: "parsedCSV", rows, count: rows.length })
      break
    }

    case "calculateStats": {
      const stats = computeStatistics(data.numbers)
      self.postMessage({ type: "statsResult", stats })
      break
    }
  }
}

function parseCSVData(content: string) {
  // ... complex parsing that takes 500ms+
  return rows
}
// main.ts (browser):
const worker = new Worker(new URL("./worker.ts", import.meta.url), {
  type: "module",  // ES modules in workers (modern browsers)
})

// Send work to the worker:
worker.postMessage({ type: "parseCSV", data: { content: csvString } })

// Receive results:
worker.onmessage = (event: MessageEvent) => {
  const { type, rows, count } = event.data
  if (type === "parsedCSV") {
    setRows(rows)
    console.log(`Parsed ${count} rows`)
  }
}

worker.onerror = (error) => {
  console.error("Worker error:", error)
}

// Terminate when done:
// worker.terminate()

Transferable objects (zero-copy message passing):

// Default postMessage copies data — slow for large ArrayBuffers:
const largeBuffer = new ArrayBuffer(50 * 1024 * 1024)  // 50MB
worker.postMessage(largeBuffer)  // Copies 50MB — slow!

// Transfer ownership (no copy — instant):
worker.postMessage(largeBuffer, [largeBuffer])
// largeBuffer is now owned by the worker — accessing it in main thread throws

// Worker receives it instantly and can use it:
self.onmessage = (e) => {
  const buffer = e.data  // Full 50MB instantly transferred
  const view = new Uint8Array(buffer)
  // process...
  // Transfer back when done:
  self.postMessage(processedBuffer, [processedBuffer])
}

Worker Pool (comlink + concurrent work):

import * as Comlink from "comlink"

// worker.ts:
const api = {
  async processImage(imageData: ImageData, options: ProcessOptions): Promise<ImageData> {
    return applyFilters(imageData, options)
  },
  async calculateHash(data: ArrayBuffer): Promise<string> {
    return sha256(data)
  },
}

Comlink.expose(api)
export type WorkerAPI = typeof api

// main.ts — use workers like regular async functions:
import * as Comlink from "comlink"
import type { WorkerAPI } from "./worker"

const worker = Comlink.wrap<WorkerAPI>(
  new Worker(new URL("./worker.ts", import.meta.url), { type: "module" })
)

// Looks like a regular async call — runs in worker:
const processedImage = await worker.processImage(imageData, { brightness: 1.2 })
const hash = await worker.calculateHash(fileBuffer)

Worker Threads (Node.js)

worker_threads is Node.js's built-in threading module:

// heavy-task.js — runs in worker thread:
import { parentPort, workerData } from "worker_threads"

// workerData: data passed when creating the worker (read-only):
const { csvContent, options } = workerData

// Do the heavy work:
const result = parseAndAnalyzeCSV(csvContent, options)

// Send result back:
parentPort?.postMessage({ status: "done", result })
// main.ts — create and use worker threads:
import { Worker } from "worker_threads"
import { resolve } from "path"

function runWorker<T>(workerFile: string, data: unknown): Promise<T> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(workerFile, { workerData: data })

    worker.on("message", resolve)
    worker.on("error", reject)
    worker.on("exit", (code) => {
      if (code !== 0) reject(new Error(`Worker exited with code ${code}`))
    })
  })
}

// Usage:
const result = await runWorker<ParseResult>(
  resolve(__dirname, "./heavy-task.js"),
  { csvContent: fileContent, options: { delimiter: ",", header: true } }
)

Worker thread pool with Piscina:

import Piscina from "piscina"
import { resolve } from "path"
import os from "os"

// Create a pool of worker threads (reuses threads between tasks):
const pool = new Piscina({
  filename: resolve(__dirname, "./worker.js"),
  maxThreads: Math.max(1, os.cpus().length - 1),  // leave one core for the main thread
  minThreads: 2,
  idleTimeout: 60000,  // Destroy idle threads after 60s
})

// Submit tasks to the pool:
const results = await Promise.all(
  filePaths.map((filepath) =>
    pool.run({ filepath }, { name: "processFile" })
  )
)

// Worker file (worker.js):
const fs = require("fs/promises")

module.exports = {
  async processFile({ filepath }) {
    const content = await fs.readFile(filepath, "utf-8")
    return analyzeContent(content)
  }
}

Worker threads performance gains:

// Sequential (blocks for each file):
for (const file of largeFiles) {
  results.push(await processFile(file))  // 4 files × 2s each = 8s total
}

// Parallel with worker pool (4 CPU cores):
const results = await Promise.all(
  largeFiles.map((file) => pool.run({ file }))  // ~2s total (parallel)
)

SharedArrayBuffer

SharedArrayBuffer enables true shared memory — zero-copy, no message passing:

// Main thread creates shared buffer:
const sharedBuffer = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * 1000)
const sharedArray = new Int32Array(sharedBuffer)

// Initialize data:
for (let i = 0; i < 1000; i++) {
  sharedArray[i] = i
}

// Pass to workers — no copying:
worker1.postMessage({ sharedBuffer, range: [0, 499] })
worker2.postMessage({ sharedBuffer, range: [500, 999] })

// Workers read/write the SAME memory:
// worker.ts:
self.onmessage = (e) => {
  const { sharedBuffer, range } = e.data
  const shared = new Int32Array(sharedBuffer)  // Points to same memory!

  const [start, end] = range
  for (let i = start; i <= end; i++) {
    shared[i] *= 2  // Write directly to shared memory
  }

  self.postMessage("done")
}

Atomics — safe concurrent access:

// Without Atomics: race condition (two workers may read-modify-write simultaneously)
// With Atomics: atomic (indivisible) operations

const counter = new Int32Array(new SharedArrayBuffer(4))

// Worker 1:
Atomics.add(counter, 0, 1)  // Atomically increment — thread-safe

// Worker 2 (simultaneous):
Atomics.add(counter, 0, 1)  // Also safe — no race condition

// Final value is always 2 (not 1 from a race)
console.log(Atomics.load(counter, 0))  // 2

// Wait/notify — thread synchronization:
// Note: Atomics.wait blocks the calling thread, so browsers only allow it in
// workers (the main thread throws). Use Atomics.waitAsync on the main thread.
// Worker thread waits until main thread signals:
Atomics.wait(sharedInt32, 0, 0)  // Block while index 0 === 0, until notified

// Main thread signals the worker:
Atomics.store(sharedInt32, 0, 1)   // Write 1
Atomics.notify(sharedInt32, 0, 1)  // Wake up 1 waiting thread

SharedArrayBuffer security requirement:

<!-- SharedArrayBuffer requires cross-origin isolation headers: -->
<!-- COOP + COEP headers must be set on your server: -->
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp

<!-- Verify in browser: -->
<!-- console.log(crossOriginIsolated) // Must be true -->

SharedArrayBuffer Security: COOP/COEP Explained

SharedArrayBuffer was disabled in all browsers in early 2018 following the Spectre and Meltdown CPU vulnerability disclosures. The concern was that shared memory lets a page construct a high-resolution timer (for example, a worker incrementing a shared counter in a tight loop) precise enough to mount Spectre-style side-channel attacks against data from other origins in the same browser process. The browser's same-origin policy isolates web content by origin, but the underlying CPU cache is shared between all processes on the machine, meaning a malicious page could potentially extract data from cross-origin iframes using timing measurements.

The solution was cross-origin isolation: a new security model that ensures a page's process doesn't share CPU resources with cross-origin content. Two HTTP headers implement this:

Cross-Origin-Opener-Policy (COOP): Set to same-origin on your document. This prevents cross-origin windows (opened via window.open() or links with target="_blank") from sharing a browsing context group with your page. The result is that malicious cross-origin pages cannot get a reference to your window object and cannot time attacks against it.

Cross-Origin-Embedder-Policy (COEP): Set to require-corp on your document. This requires that every resource loaded by your page — images, scripts, iframes, fonts — either is same-origin or explicitly opts in with a Cross-Origin-Resource-Policy: cross-origin header. This prevents your page from embedding cross-origin resources without their consent, closing the attack vector where your page could be used to time-measure other origins' resources.

Together, these headers establish a strict isolation boundary. Once both are set and the browser verifies the constraints are met, crossOriginIsolated becomes true and SharedArrayBuffer is re-enabled. The cost is that any cross-origin resource your app embeds must add the CORP opt-in header — which means third-party CDN resources, Google Fonts, ad scripts, and analytics pixels need to be handled carefully. Common solutions include self-hosting previously third-party resources, using a service worker to intercept and add headers to responses, or using a proxy.
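Serving the two headers is a one-line change per header in most stacks. A minimal sketch using plain Node http (the same two setHeader calls apply to any framework's middleware):

```javascript
import { createServer } from "node:http"

// Attach the two cross-origin isolation headers to every document response.
const server = createServer((req, res) => {
  res.setHeader("Cross-Origin-Opener-Policy", "same-origin")
  res.setHeader("Cross-Origin-Embedder-Policy", "require-corp")
  res.setHeader("Content-Type", "text/html")
  res.end("<!doctype html><title>isolated</title>")
})

// In production: server.listen(8080). Pages served this way report
// crossOriginIsolated === true and may construct SharedArrayBuffer.
```

Remember that COEP then applies to every subresource the page loads, so this change is best rolled out alongside an audit of third-party embeds.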

// Check cross-origin isolation status:
if (!crossOriginIsolated) {
  console.warn("SharedArrayBuffer unavailable — COOP/COEP headers not set")
  // Fall back to postMessage with structured clone
}

// Only available when crossOriginIsolated is true:
const sab = new SharedArrayBuffer(1024)

WASM Integration with Workers

WebAssembly modules pair naturally with SharedArrayBuffer because WASM's linear memory model maps directly onto typed arrays. A WASM module compiles to a WebAssembly.Memory object, which is backed by an ArrayBuffer. In multi-threaded WASM (compiled with -pthread in Emscripten, or with the atomics and bulk-memory target features enabled in Rust toolchains), that memory object can be a SharedArrayBuffer.

This combination enables near-native parallelism: a Rust or C++ library compiled to WASM can use its own threading primitives (pthreads in C/C++, Rayon in Rust), and those threads are mapped to Web Workers by the WASM runtime. The WASM module manages its own thread pool using shared memory, and from JavaScript's perspective you're just calling an async function that happens to run in parallel internally.

// Loading a multi-threaded WASM module via its wasm-bindgen JS glue
// (the exact path is toolchain-specific; wasm-bindgen emits a pkg/*.js wrapper):
import init, { process_image_parallel } from "./image_processor.js"

const wasm = await init()

// Under the hood, this spawns Web Workers using SharedArrayBuffer:
const result = await process_image_parallel(imageBuffer, {
  width: 1920,
  height: 1080,
  filter: "sharpen",
})

For single-threaded WASM modules used in workers — the more common case in 2026 — the benefit is simpler: the WASM computation runs on the worker's thread rather than the main thread, keeping the UI responsive. The WASM module doesn't need to be compiled with thread support; it just runs in an isolated worker context.
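A sketch of that simpler case, using Node's worker_threads so it runs anywhere. The WASM bytes here are a hand-assembled module exporting a trivial add function, purely for illustration; real code would load a compiled module from a file:

```javascript
import { Worker } from "node:worker_threads"

// Hand-assembled WASM binary exporting add(a, b): (i32, i32) -> i32. Illustration only.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,                   // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,             // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                           // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,             // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body: i32.add
])

// Worker source inlined via eval for the sketch; the WASM runs off the main thread.
const workerSource = `
  const { parentPort, workerData } = require("node:worker_threads")
  WebAssembly.instantiate(new Uint8Array(workerData.bytes)).then(({ instance }) => {
    parentPort.postMessage(instance.exports.add(workerData.a, workerData.b))
  })
`

function addInWorker(a, b) {
  return new Promise((resolve, reject) => {
    const w = new Worker(workerSource, { eval: true, workerData: { bytes: wasmBytes, a, b } })
    w.on("message", resolve)
    w.on("error", reject)
  })
}
```

addInWorker(2, 3) resolves to 5 while the main thread stays free; no thread support is compiled into the module itself.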


Memory Limits and Transfer Costs

Each Web Worker and Worker Thread has its own V8 heap, separate from the main thread's heap. V8 caps each heap (historically around 1.5GB of old space on 64-bit systems; newer Node.js versions scale the default with available memory), and each worker gets its own allocation. A Piscina pool with 8 workers and a 1GB-per-worker heap limit could theoretically use 8GB of memory plus the main process, though in practice workers use memory proportional to the data they hold and the modules they import.

The structured clone algorithm that powers postMessage serializes and copies data between thread boundaries. For small messages (typical task arguments and results), this is negligible. For large data transfers, the cost compounds: copying a 100MB ArrayBuffer on each message in a tight loop adds up quickly. Measure before optimizing, but the thresholds where copying becomes noticeable are approximately: 1MB+ for frequent message passing, 10MB+ for occasional task data.

Transferable objects (ArrayBuffer, ImageBitmap, OffscreenCanvas, MessagePort, ReadableStream/WritableStream) can be transferred rather than copied. The transfer is nearly instant — the underlying memory is re-owned by the recipient thread without copying. The sender loses access to the transferred object immediately after the postMessage call. SharedArrayBuffer sidesteps the entire question by creating memory that all threads can access without any transfer at all.
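The copy-versus-transfer distinction can be observed without a worker at all, since structuredClone (Node 17+ and all modern browsers) uses the same algorithm as postMessage:

```javascript
// Copy: both sides keep an independent 16-byte buffer.
const copied = new ArrayBuffer(16)
const copy = structuredClone(copied)
console.log(copied.byteLength, copy.byteLength)    // 16 16

// Transfer: ownership moves; the source is detached (byteLength drops to 0).
const moved = new ArrayBuffer(16)
const received = structuredClone(moved, { transfer: [moved] })
console.log(moved.byteLength, received.byteLength) // 0 16
```

The detached source is the safety mechanism: once transferred, no thread can race on the memory because only the recipient can see it.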


Worker Pool Libraries

For Node.js worker thread pools, Piscina is the standard choice. It manages a configurable-size pool of worker threads, queues tasks when all threads are busy, handles worker crashes and restarts, provides back-pressure metrics, and supports named task exports. Piscina is production-battle-tested in large codebases and has TypeScript support out of the box.

Comlink is the standard for browser Web Workers, transforming the awkward postMessage API into a transparent async proxy. Where Piscina focuses on pool management and throughput, Comlink focuses on ergonomics — making worker calls look like regular function calls. Comlink can be used in Node.js as well, though it's less common there.

For simple cases where you need a browser worker pool without Comlink's proxy abstraction, maintaining an array of Worker instances and cycling through them round-robin is sufficient:

// Simple browser worker pool without library:
const POOL_SIZE = navigator.hardwareConcurrency || 4
const pool = Array.from({ length: POOL_SIZE }, () =>
  new Worker(new URL("./worker.ts", import.meta.url), { type: "module" })
)
let poolIndex = 0

function runOnWorker(task: WorkerTask): Promise<WorkerResult> {
  return new Promise((resolve, reject) => {
    const worker = pool[poolIndex++ % POOL_SIZE]
    const id = Math.random().toString(36).slice(2)

    const handler = (e: MessageEvent) => {
      if (e.data.id !== id) return
      worker.removeEventListener("message", handler)
      e.data.error ? reject(new Error(e.data.error)) : resolve(e.data.result)
    }

    worker.addEventListener("message", handler)
    worker.postMessage({ ...task, id })
  })
}

Debugging Worker Code

Debugging worker threads and Web Workers is more involved than debugging main-thread code because each worker has its own execution context, and errors thrown in workers don't propagate to the main thread by default.

In Node.js, worker threads expose a debugging port: start your application with node --inspect, then for each worker thread add --inspect-brk=0 to the worker options:

const worker = new Worker("./heavy-task.js", {
  workerData: data,
  // In development, attach a debugger to each worker:
  ...(process.env.NODE_ENV === "development" && {
    execArgv: ["--inspect-brk=0"],  // 0 = random available port
  }),
})

Chrome DevTools automatically detects worker threads when attached to a Node.js process — they appear as separate targets in the DevTools connections panel. For browser Web Workers, Chrome DevTools shows them under "Workers" in the Sources panel.

For production debugging, structured logging from within workers is the practical approach. Workers in Node.js can use console.log normally (it routes to the main thread's stdout). In browser workers, console.log also works. The limitation is that stack traces from worker errors can be incomplete — use explicit error serialization:

// In worker — serialize errors for clean reporting:
try {
  const result = doHeavyWork(workerData)
  parentPort?.postMessage({ result })
} catch (error) {
  const err = error as Error  // narrow the unknown catch binding for strict TS
  parentPort?.postMessage({
    error: {
      message: err.message,
      stack: err.stack,
      name: err.name,
    }
  })
}

Testing Worker-Based Code

Workers are difficult to test in isolation because they run in separate threads with their own module scope. The practical testing strategies depend on how your worker logic is organized.

The most testable pattern is separating the worker's computation logic from its messaging infrastructure. Export pure functions that perform the actual work, and have the worker's message handler call those functions. The pure functions are testable without any worker infrastructure:

// worker.ts:
export function processCSVChunk(content: string, options: CSVOptions): CSVRow[] {
  // Pure computation — no worker-specific APIs
  return parseRows(content, options)
}

// Worker messaging boilerplate (thin wrapper):
self.onmessage = (e) => {
  const result = processCSVChunk(e.data.content, e.data.options)
  self.postMessage(result)
}

// test/csv-processor.test.ts — tests the logic directly:
import { processCSVChunk } from "../src/worker"
it("parses CSV with headers", () => {
  const rows = processCSVChunk("name,age\nAlice,30", { header: true })
  expect(rows[0]).toEqual({ name: "Alice", age: "30" })
})

For testing the full worker integration (message passing, error handling, parallel execution), Vitest runs in Node.js where Worker from worker_threads is available. You can instantiate actual workers in tests:

// test/worker-integration.test.ts:
import { Worker } from "worker_threads"
import { resolve } from "path"

it("worker returns processed result", async () => {
  const result = await new Promise((res, rej) => {
    const w = new Worker(resolve(__dirname, "../src/worker.js"), {
      workerData: { content: "name,age\nAlice,30", options: { header: true } },
    })
    w.on("message", res)
    w.on("error", rej)
  })
  expect(result[0]).toEqual({ name: "Alice", age: "30" })
})

For browser Web Workers in Vitest's jsdom or happy-dom environment, the Worker constructor is not available — mock it using vi.mock or use the pure-function testing approach exclusively for unit tests, reserving worker integration tests for Playwright or actual browser testing.


Browser Support and Polyfill Considerations

Web Workers have been supported in all major browsers since 2009 and are safe to use without polyfills in any modern browser environment. The type: "module" option for ES module workers has broader support now (Chrome 80+, Firefox 114+, Safari 15+) but lacks full cross-browser parity for all dynamic import features within workers.

SharedArrayBuffer has a more complicated history. Disabled broadly in 2018 after Spectre, it was re-enabled in Chrome 92 (July 2021) behind COOP/COEP headers, in Firefox 79, and in Safari 15.2. The browser coverage in 2026 is excellent for modern browsers, but the COOP/COEP header requirement is a deployment concern rather than a compatibility concern — it's not about browser version but about server configuration.

For environments where SharedArrayBuffer is unavailable (no COOP/COEP headers, or older browsers), fall back to postMessage with transferable ArrayBuffers. The fallback is slightly slower due to transfer overhead but functionally equivalent for most use cases:

function createSharedOrCopied(size: number) {
  if (crossOriginIsolated) {
    // SharedArrayBuffer available — zero-copy sharing:
    return new SharedArrayBuffer(size)
  } else {
    // Fall back to transferable ArrayBuffer:
    return new ArrayBuffer(size)
  }
}

Real-World Use Cases

Large File Processing (Worker Threads)

// Process 10GB CSV in parallel chunks:
const CHUNK_SIZE = 10 * 1024 * 1024  // 10MB chunks
const pool = new Piscina({ filename: "./csv-worker.js", maxThreads: 4 })

async function processLargeCSV(filepath: string) {
  const stat = await fs.stat(filepath)
  const chunks = Math.ceil(stat.size / CHUNK_SIZE)

  const results = await Promise.all(
    Array.from({ length: chunks }, (_, i) =>
      pool.run({ filepath, offset: i * CHUNK_SIZE, size: CHUNK_SIZE })
    )
  )

  return results.flat()
}

Image Processing (Web Workers)

// Process uploaded images without blocking UI:
const imageWorker = Comlink.wrap<ImageAPI>(
  new Worker(new URL("./image-worker.ts", import.meta.url), { type: "module" })
)

async function handleImageUpload(file: File) {
  const buffer = await file.arrayBuffer()

  // All processing happens off main thread:
  const processed = await imageWorker.resize(buffer, { width: 800, height: 600 })
  const webp = await imageWorker.convertToWebP(processed, { quality: 85 })
  const thumbnail = await imageWorker.resize(webp, { width: 150, height: 150 })

  return { original: webp, thumbnail }
}

Performance Measurement: When Offloading Actually Helps

The question of whether moving work to a worker actually improves overall performance is empirical, not theoretical. The answer depends on three variables: how CPU-bound the task is, how long it takes relative to worker startup overhead, and whether you're using a persistent pool or creating new workers per task.

Worker startup overhead is the first filter: creating a new Worker instance typically takes 5-30ms depending on what the worker imports and the machine's disk speed. Any task that completes in less than 50ms on the main thread is unlikely to benefit from a fresh worker spawn — the overhead exceeds the parallelism gain. Persistent pools eliminate startup overhead, making even sub-50ms tasks beneficial to offload if they're called frequently enough to saturate the main thread.

The reliable measurement approach is benchmarking with performance.now() before and after both the main-thread version and the worker version:

// Measure whether a task actually benefits from a worker:
async function benchmark(taskFn: () => unknown, workerFn: () => Promise<unknown>) {
  const RUNS = 20

  // Main thread baseline:
  const mainTimes = []
  for (let i = 0; i < RUNS; i++) {
    const start = performance.now()
    taskFn()
    mainTimes.push(performance.now() - start)
  }

  // Worker version:
  const workerTimes = []
  for (let i = 0; i < RUNS; i++) {
    const start = performance.now()
    await workerFn()
    workerTimes.push(performance.now() - start)
  }

  const mainAvg = mainTimes.reduce((a, b) => a + b) / RUNS
  const workerAvg = workerTimes.reduce((a, b) => a + b) / RUNS

  console.log(`Main thread: ${mainAvg.toFixed(2)}ms avg`)
  console.log(`Worker: ${workerAvg.toFixed(2)}ms avg`)
  console.log(`Worker benefit: ${mainAvg > workerAvg ? "YES" : "PROBABLY NOT"}`)
}

Tasks that consistently show 2x+ speedup from workers: anything that takes 200ms+ on a single core and can be parallelized. Tasks that show marginal or negative speedup: anything under 50ms, pure I/O work that's already non-blocking, tasks with large input data that has to be copied via postMessage.


When to Use Each

Use Web Workers (browser) for:

  • Image/video processing in the browser
  • Large CSV/JSON parsing
  • Cryptographic operations (key generation, signing)
  • ML model inference (TensorFlow.js, ONNX)
  • Search index building (Fuse.js, FlexSearch)
  • Keeping the UI responsive during heavy computation

Use Worker Threads (Node.js) for:

  • CPU-bound data transformation pipelines
  • Image processing on the server (Sharp in a pool)
  • Cryptography at scale (bcrypt/Argon2 parallel hashing)
  • Large file analysis (parsing, compression, diffing)
  • AI/ML inference tasks

Use SharedArrayBuffer for:

  • Zero-copy data sharing between multiple workers
  • Lock-free data structures (ring buffers for real-time data)
  • WASM modules that need shared memory
  • High-performance numerical computing

Don't use workers for:

  • Database queries (already async via event loop)
  • HTTP requests (already async)
  • File reads/writes (already async via libuv)
  • Anything that's I/O-bound (workers don't help, just add overhead)

Common Mistakes When Using Workers

The most common mistake is using workers for I/O-bound operations instead of CPU-bound ones. Spawning a worker to make an HTTP request or query a database adds overhead without any benefit — the event loop already handles I/O non-blockingly. Worker startup overhead (thread creation, V8 context initialization, module loading) typically costs 5-50ms. For tasks that take less than 50ms total, spawning a new worker for each task often makes the overall system slower.

Forgetting to terminate workers is the most common memory leak vector in long-running applications. A browser Web Worker that's no longer needed but not terminated continues to exist, consuming memory and potentially CPU time. Always call worker.terminate() when the worker is done, or use a pool library that manages worker lifecycle automatically.

Copying large data unnecessarily: passing a 100MB ArrayBuffer to a worker copies all 100MB. Use transferable objects (worker.postMessage(buffer, [buffer])) or SharedArrayBuffer for large data. After a transfer, the main thread loses access to the buffer — design data flow accordingly.

Not handling worker errors: errors thrown inside a worker don't propagate to the main thread by default. An uncaught error in a worker logs to the console but doesn't reject the promise you're waiting on. Always attach an error event listener to workers, and always reject the wrapping Promise when a worker error occurs.

Creating workers in render loops or request handlers: never create a new Worker() inside a React render, an HTTP request handler, or any function called frequently. Worker creation is expensive and the workers will accumulate. Create workers at module initialization time or use a pool.
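The terminate and error-handling mistakes can be avoided together by wrapping one-shot worker use in a helper that always rejects on error and always terminates in a finally block. A sketch using Node's worker_threads (the eval option inlines the worker source purely for illustration; real code would point at a file):

```javascript
import { Worker } from "node:worker_threads"

// Run a one-shot worker: resolve on first message, reject on error,
// and always terminate so the thread never leaks.
async function withWorker(source, data) {
  const worker = new Worker(source, { eval: true, workerData: data })
  try {
    return await new Promise((resolve, reject) => {
      worker.on("message", resolve)
      worker.on("error", reject)
      worker.on("exit", (code) => {
        if (code !== 0) reject(new Error(`Worker exited with code ${code}`))
      })
    })
  } finally {
    await worker.terminate()
  }
}
```

For anything called repeatedly, replace the per-call Worker with a pool; the try/finally shape stays the same.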


JavaScript's Concurrency Model: What the Event Loop Does and Doesn't Solve

JavaScript's event loop is optimized for I/O-concurrent applications. A Node.js server can handle thousands of simultaneous database queries, HTTP requests, and file reads with a single thread because these operations are non-blocking — the event loop dispatches the operation to the operating system and handles other work until the I/O completes.

The single-thread limitation becomes visible when your JavaScript code needs to do genuine CPU work: parsing a 10MB JSON file, compressing a video, running an ML inference model. These operations run synchronously on the V8 thread. While they execute, nothing else runs — no new HTTP requests are handled, no timers fire, no WebSocket messages are processed. Workers solve this by moving CPU-intensive work to separate OS threads, freeing the event loop to continue handling I/O while the CPU work happens in parallel.


Methodology

Feature comparison based on Web Workers spec (WHATWG), Node.js worker_threads documentation (v22), and SharedArrayBuffer/Atomics ECMAScript specification. Performance examples are representative of typical workloads.
