
Guide

BullMQ vs Bee-Queue vs pg-boss 2026

BullMQ, Bee-Queue, and pg-boss compared for Node.js job queues in 2026. Redis vs PostgreSQL, delayed jobs, flows, retries, and which queue library to use.

PkgPulse Team

TL;DR

BullMQ is the full-featured Redis job queue: priorities, delayed jobs, rate limiting, repeatable jobs, and flows with dependencies. It is the successor to Bull and the most popular Node.js queue. Bee-Queue is the lightweight Redis queue: a simple API, high throughput, and minimal overhead, good for straightforward job processing. pg-boss is the PostgreSQL job queue: no Redis needed, SKIP LOCKED for safe concurrency, ACID guarantees, and it runs on your existing Postgres. In 2026: BullMQ for feature-rich job processing, Bee-Queue for simple high-throughput queues, pg-boss for PostgreSQL-based queues without Redis.

Key Takeaways

  • BullMQ: ~500K weekly downloads — Redis, priorities, flows, rate limiting, repeatable
  • Bee-Queue: ~100K weekly downloads — Redis, simple, fast, lightweight
  • pg-boss: ~100K weekly downloads — PostgreSQL, ACID, no Redis, SKIP LOCKED
  • BullMQ has the richest feature set (flows, rate limiting, groups)
  • pg-boss eliminates Redis dependency — uses your existing Postgres
  • Bee-Queue is the simplest option for basic job processing

BullMQ

BullMQ — feature-rich Redis queue:

Basic queue

import { Queue, Worker } from "bullmq"

const connection = { host: "localhost", port: 6379 }

// Create queue:
const emailQueue = new Queue("emails", { connection })

// Add job:
await emailQueue.add("welcome-email", {
  to: "user@example.com",
  subject: "Welcome to PkgPulse!",
  template: "welcome",
})

// Process jobs:
const worker = new Worker("emails", async (job) => {
  console.log(`Sending email to ${job.data.to}`)
  await sendEmail(job.data)
  return { sent: true }
}, { connection })

// Events:
worker.on("completed", (job, result) => {
  console.log(`Job ${job.id} completed:`, result)
})

worker.on("failed", (job, err) => {
  console.error(`Job ${job?.id} failed:`, err.message)
})

Delayed and scheduled jobs

import { Queue } from "bullmq"

const queue = new Queue("tasks", { connection })

// Delayed job (run in 5 minutes):
await queue.add("reminder", { userId: "123" }, {
  delay: 5 * 60 * 1000,
})

// Repeatable job (every hour):
await queue.add("sync-data", {}, {
  repeat: {
    every: 60 * 60 * 1000,
  },
})

// Cron schedule:
await queue.add("daily-report", {}, {
  repeat: {
    pattern: "0 9 * * *",  // 9am daily
    tz: "America/New_York",
  },
})

// Priority (lower = higher priority):
await queue.add("urgent", { alert: true }, { priority: 1 })
await queue.add("normal", { data: "..." }, { priority: 5 })
await queue.add("low", { cleanup: true }, { priority: 10 })

Job flows (parent/child dependencies)

import { FlowProducer } from "bullmq"

const flow = new FlowProducer({ connection })

// Parent job depends on children:
await flow.add({
  name: "generate-report",
  queueName: "reports",
  data: { reportId: "monthly-2026-03" },
  children: [
    {
      name: "fetch-downloads",
      queueName: "data",
      data: { metric: "downloads" },
    },
    {
      name: "fetch-stars",
      queueName: "data",
      data: { metric: "stars" },
    },
    {
      name: "fetch-issues",
      queueName: "data",
      data: { metric: "issues" },
    },
  ],
})

// Parent runs only after all children complete

Rate limiting and retries

import { Queue, Worker } from "bullmq"

// Rate-limited worker:
const worker = new Worker("api-calls", async (job) => {
  await callExternalAPI(job.data)
}, {
  connection,
  limiter: {
    max: 10,        // Max 10 jobs
    duration: 1000, // Per second
  },
})

// Job with retry:
await queue.add("flaky-task", { url: "..." }, {
  attempts: 5,
  backoff: {
    type: "exponential",
    delay: 1000,  // Retries after 1s, 2s, 4s, 8s
  },
})

// Custom backoff:
await queue.add("custom-retry", {}, {
  attempts: 3,
  backoff: {
    type: "custom",
  },
})

// In worker:
const worker2 = new Worker("tasks", processor, {
  connection,
  settings: {
    backoffStrategy: (attemptsMade) => {
      return attemptsMade * 5000  // 5s, 10s, 15s
    },
  },
})

Bee-Queue

Bee-Queue — lightweight Redis queue:

Basic queue

import BeeQueue from "bee-queue"

// Create queue:
const queue = new BeeQueue("emails", {
  redis: { host: "localhost", port: 6379 },
  isWorker: true,
  removeOnSuccess: true,
  removeOnFailure: false,
})

// Add job:
const job = queue.createJob({
  to: "user@example.com",
  subject: "Welcome!",
  template: "welcome",
})

job.timeout(10000)     // 10s timeout
   .retries(3)         // 3 retries
   .backoff("exponential", 1000)
   .save()

// Process jobs:
queue.process(async (job) => {
  console.log(`Processing job ${job.id}:`, job.data)
  await sendEmail(job.data)
  return { sent: true, timestamp: Date.now() }
})

Concurrency

import BeeQueue from "bee-queue"

const queue = new BeeQueue("image-processing", {
  redis: { host: "localhost", port: 6379 },
})

// Process 5 jobs concurrently:
queue.process(5, async (job) => {
  const { imageUrl, width, height } = job.data

  // Report progress:
  job.reportProgress(10)
  const image = await downloadImage(imageUrl)

  job.reportProgress(50)
  const resized = await resizeImage(image, width, height)

  job.reportProgress(90)
  const url = await uploadImage(resized)

  job.reportProgress(100)
  return { url }
})

// Track progress:
const job = await queue.createJob({ imageUrl: "...", width: 800, height: 600 }).save()

job.on("progress", (progress) => {
  console.log(`Job ${job.id}: ${progress}%`)
})

job.on("succeeded", (result) => {
  console.log(`Done: ${result.url}`)
})

job.on("failed", (err) => {
  console.error(`Failed: ${err.message}`)
})

Events and health check

import BeeQueue from "bee-queue"

const queue = new BeeQueue("tasks", {
  redis: { host: "localhost", port: 6379 },
})

// Queue-level events:
queue.on("ready", () => console.log("Queue ready"))
queue.on("error", (err) => console.error("Queue error:", err))
queue.on("succeeded", (job, result) => {
  console.log(`Job ${job.id} succeeded:`, result)
})
queue.on("failed", (job, err) => {
  console.error(`Job ${job.id} failed:`, err.message)
})
queue.on("stalled", (jobId) => {
  console.warn(`Job ${jobId} stalled`)
})

// Health check:
const health = await queue.checkHealth()
console.log({
  waiting: health.waiting,
  active: health.active,
  succeeded: health.succeeded,
  failed: health.failed,
  delayed: health.delayed,
})

// Graceful shutdown:
process.on("SIGTERM", async () => {
  await queue.close()
  process.exit(0)
})

pg-boss

pg-boss — PostgreSQL job queue:

Basic queue

import PgBoss from "pg-boss"

// Connect (uses your existing Postgres):
const boss = new PgBoss("postgresql://user:pass@localhost:5432/mydb")
await boss.start()

// Add job:
await boss.send("welcome-email", {
  to: "user@example.com",
  subject: "Welcome to PkgPulse!",
})

// Process jobs:
await boss.work("welcome-email", async (job) => {
  console.log(`Sending email to ${job.data.to}`)
  await sendEmail(job.data)
})

// Graceful shutdown:
process.on("SIGTERM", async () => {
  await boss.stop()
})

Scheduling and options

import PgBoss from "pg-boss"

const boss = new PgBoss(connectionString)
await boss.start()

// Delayed job:
await boss.send("reminder", { userId: "123" }, {
  startAfter: 300,  // 300 seconds from now
})

// Specific time:
await boss.send("report", { type: "monthly" }, {
  startAfter: new Date("2026-04-01T09:00:00Z"),
})

// Cron schedule:
await boss.schedule("daily-cleanup", "0 2 * * *", {
  retentionDays: 7,
})
await boss.work("daily-cleanup", async () => {
  await cleanupOldRecords()
})

// Retry policy:
await boss.send("flaky-api", { url: "..." }, {
  retryLimit: 5,
  retryDelay: 30,      // 30 seconds
  retryBackoff: true,   // Exponential backoff
  expireInMinutes: 60,  // Timeout after 60 min
})

// Priority:
await boss.send("urgent-task", {}, { priority: 1 })
await boss.send("normal-task", {}, { priority: 0 })

Singleton and throttling

import PgBoss from "pg-boss"

const boss = new PgBoss(connectionString)
await boss.start()

// Singleton job (only one active at a time):
await boss.send("sync", {}, {
  singletonKey: "main-sync",
  singletonMinutes: 5,  // At most once per 5 minutes
})

// Debounce (replace pending job):
await boss.send("search-index", { query: "react" }, {
  singletonKey: "reindex",
  singletonSeconds: 30,
})

// Dead letter queue:
await boss.send("process-payment", { orderId: "123" }, {
  retryLimit: 3,
  deadLetter: "failed-payments",
})

// Monitor dead letter queue:
await boss.work("failed-payments", async (job) => {
  console.error("Payment failed after retries:", job.data)
  await alertOps(job.data)
})

Batch and concurrency

import PgBoss from "pg-boss"

const boss = new PgBoss(connectionString)
await boss.start()

// Batch insert:
const jobs = users.map((user) => ({
  name: "onboard-user",
  data: { userId: user.id, email: user.email },
}))
await boss.insert(jobs)

// Process with concurrency:
await boss.work("onboard-user", {
  teamSize: 5,         // Fetch up to 5 jobs per polling interval
  teamConcurrency: 2,  // Run up to 2 handlers concurrently
}, async (job) => {
  await onboardUser(job.data)
})

// Fetch and complete manually:
const job = await boss.fetch("manual-queue")
if (job) {
  try {
    await processJob(job.data)
    await boss.complete(job.id)
  } catch (err) {
    await boss.fail(job.id, err)
  }
}

Feature Comparison

| Feature | BullMQ | Bee-Queue | pg-boss |
| --- | --- | --- | --- |
| Backend | Redis | Redis | PostgreSQL |
| Priorities | ✅ | ❌ | ✅ |
| Delayed jobs | ✅ | ✅ (limited) | ✅ |
| Repeatable/cron | ✅ | ❌ | ✅ (schedule) |
| Rate limiting | ✅ | ❌ | ❌ |
| Job flows | ✅ (parent/child) | ❌ | ❌ |
| Progress tracking | ✅ | ✅ | ❌ |
| Retry with backoff | ✅ | ✅ | ✅ |
| Singleton jobs | ❌ | ❌ | ✅ |
| Dead letter queue | ❌ | ❌ | ✅ |
| ACID guarantees | ❌ | ❌ | ✅ |
| Concurrency control | ✅ | ✅ | ✅ |
| Dashboard | Bull Board | ❌ | ❌ |
| TypeScript | ✅ | ✅ (types) | ✅ |
| Weekly downloads | ~500K | ~100K | ~100K |

When to Use Each

Use BullMQ if:

  • Need a full-featured job queue with Redis
  • Want job flows, rate limiting, and repeatable jobs
  • Building complex workflow orchestration
  • Need a UI dashboard (Bull Board)

Use Bee-Queue if:

  • Want a simple, lightweight Redis queue
  • Need high throughput with minimal overhead
  • Building straightforward job processing
  • Don't need advanced features (flows, rate limiting)

Use pg-boss if:

  • Don't want to manage Redis — just use Postgres
  • Need ACID guarantees for job state
  • Want singleton jobs and dead letter queues
  • Building on PostgreSQL and want one fewer dependency

Redis vs PostgreSQL as a Queue Backend

The choice between Redis (BullMQ, Bee-Queue) and PostgreSQL (pg-boss) is partly a philosophical decision about infrastructure complexity and partly a technical decision about tradeoffs.

Redis is purpose-built for the patterns that job queues need: atomic operations via Lua scripts, sorted sets for priority queuing, pub/sub for worker notifications, and key expiration for job retention. BullMQ's use of Redis Lua scripts means that all queue state transitions (move job from waiting → active, increment attempt counter, update job status) happen atomically at the Redis level — there are no partial state updates that could corrupt queue state under concurrent workers. The result is extremely predictable behavior under load, but it requires running and maintaining Redis alongside your application database.

PostgreSQL's SKIP LOCKED mechanism (which pg-boss uses) achieves similar atomicity at the database level. A SELECT ... FOR UPDATE SKIP LOCKED query atomically acquires a job and marks it active in a single database transaction, preventing two workers from picking up the same job simultaneously. This is the correct pattern for database-backed queues — earlier approaches using optimistic locking or regular SELECT + UPDATE were prone to race conditions under concurrent workers. The advantage: if you're already running PostgreSQL, pg-boss adds zero new infrastructure. The disadvantage: PostgreSQL was not designed as a queue backend, and at very high throughput (thousands of jobs per second), the database locking overhead becomes a bottleneck that Redis handles more gracefully.
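The claim-a-job pattern described above can be sketched as a single SQL statement. The table and column names below are illustrative stand-ins, not pg-boss's actual schema:

```javascript
// Illustrative only — `jobs`, `state`, `start_after`, etc. are
// hypothetical names. The point is the shape: SELECT ... FOR UPDATE
// SKIP LOCKED claims one unlocked row, and the UPDATE in the same
// statement marks it active, all inside one transaction, so two
// workers can never claim the same job.
const claimJobSql = `
  WITH next_job AS (
    SELECT id FROM jobs
    WHERE state = 'created' AND start_after <= now()
    ORDER BY priority DESC, created_on
    LIMIT 1
    FOR UPDATE SKIP LOCKED
  )
  UPDATE jobs
  SET state = 'active', started_on = now()
  FROM next_job
  WHERE jobs.id = next_job.id
  RETURNING jobs.id, jobs.data
`
```

A worker would run this query on each poll; rows locked by other in-flight workers are simply skipped rather than waited on, which is what keeps polling cheap under concurrency.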

The practical decision criteria: if your stack already has both Redis and PostgreSQL, use Redis-based BullMQ for queue-heavy workloads and pg-boss for lower-throughput queues where ACID guarantees matter more. If you're adding async job processing to a system that already has PostgreSQL but not Redis, pg-boss is the right choice — adding Redis for job queuing alone is engineering overhead that pg-boss avoids.

Observability and Monitoring

Running a job queue in production without observability is flying blind. Failed jobs, stalled workers, and queue depth spikes need to be visible before they become user-facing incidents.

BullMQ has the most mature observability tooling. Bull Board (the UI dashboard for BullMQ) provides a web interface showing active queues, job counts by status, job data, retry history, and manual job controls. It's a separate package (@bull-board/express) that mounts as an Express middleware. Beyond the dashboard, BullMQ emits events from the Queue and Worker instances (completed, failed, stalled, progress) that you can pipe to any observability tool. Using these events to update Prometheus counters or write to Datadog gives you time-series data on job throughput, failure rates, and processing duration.

Bee-Queue has similar event hooks but no built-in dashboard — monitoring is done entirely through the event API or the checkHealth() method, which returns queue depth by status. For production, this is usually sufficient when combined with existing observability tooling (Grafana, Datadog), but requires more manual integration work than BullMQ.

pg-boss stores all job state in PostgreSQL tables, which makes observability via SQL queries natural. Standard queries against the pgboss.job table show queue depth, oldest unprocessed job age, and failure counts by queue name. Any PostgreSQL monitoring tool (pganalyze, pg_activity, Datadog's Postgres integration) automatically surfaces these metrics without additional configuration. The tradeoff is that monitoring dashboards designed for Redis-based queues don't work with pg-boss — you build your own queries or use the pg-boss events API.
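Two illustrative queries of the kind described above. `pgboss` is pg-boss's default schema name; the column names here are assumptions and may differ between pg-boss versions, so check your deployed schema before using them:

```javascript
// Queue depth by queue name and state — the basic "is anything piling
// up" dashboard query. Column names are assumptions.
const queueDepthSql = `
  SELECT name, state, count(*) AS jobs
  FROM pgboss.job
  GROUP BY name, state
  ORDER BY name, state
`

// Age of the oldest unprocessed job per queue — a good alerting signal
// for stuck workers.
const oldestPendingSql = `
  SELECT name, min(created_on) AS oldest
  FROM pgboss.job
  WHERE state = 'created'
  GROUP BY name
`
```

Because these are plain SQL, they slot directly into any tool that can chart a query result (Grafana's PostgreSQL data source, for example).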

Worker Architecture and Concurrency Patterns

How you structure workers — how many processes run jobs, at what concurrency — significantly affects both throughput and resource usage.

BullMQ workers run as separate Node.js processes or threads. Each Worker instance connects to Redis and polls for available jobs. The concurrency option controls how many jobs a single Worker instance processes simultaneously. For CPU-intensive jobs, concurrency matches CPU core count; for I/O-bound jobs (API calls, database queries), higher concurrency (20-100) is common. BullMQ also supports Sandboxed processors — running job handlers in child processes to isolate crashes and prevent memory leaks from spreading across the main process. In production, running multiple worker processes (via PM2 or Kubernetes replicas) scales throughput horizontally: each process handles its concurrency limit independently, and Redis coordinates which job goes to which worker.

pg-boss's teamSize option similarly controls concurrent job processing within a single boss.work() call. The teamConcurrency option controls how many jobs the worker fetches per polling interval, reducing database round trips under load. For Node.js services that already have multiple instances (behind a load balancer), every instance can register a boss.work() handler — pg-boss's SKIP LOCKED ensures no two instances process the same job. This makes horizontal scaling natural: add more application instances and queue throughput increases proportionally.

Methodology

Download data from npm registry (weekly average, February 2026). Feature comparison based on BullMQ v5.x, Bee-Queue v1.x, and pg-boss v9.x.

The infrastructure decision implicit in choosing between these three libraries — Redis vs PostgreSQL — has become more nuanced as managed Redis services like Upstash have made Redis available without operational overhead. For teams that previously avoided Redis because self-managed Redis added infrastructure burden, Upstash Redis provides the BullMQ feature set (priorities, flows, rate limiting) with serverless pricing and no server to maintain. This has shifted the PostgreSQL-only advantage of pg-boss from "zero infrastructure cost" to "zero additional service" — a meaningful but narrower distinction. For greenfield projects in 2026, the decision between BullMQ and pg-boss now turns more on job complexity (BullMQ's flows and rate limiting for complex pipelines) than on infrastructure philosophy alone.

Compare job queue and backend libraries on PkgPulse →

See also: AVA vs Jest, Payload CMS vs Strapi vs Directus, and amqplib vs KafkaJS vs Redis Streams.
