BullMQ vs Bee-Queue vs pg-boss: Job Queues in Node.js (2026)
TL;DR
BullMQ is the full-featured Redis job queue: priorities, delayed jobs, rate limiting, repeatable jobs, and flows with parent/child dependencies. It is the successor to Bull and the most popular Node.js queue. Bee-Queue is the lightweight Redis queue: a simple API, high throughput, and minimal overhead, well suited to straightforward job processing. pg-boss is the PostgreSQL job queue: no Redis needed, SKIP LOCKED for safe concurrency, ACID guarantees, and it runs on your existing Postgres. In 2026: BullMQ for feature-rich job processing, Bee-Queue for simple high-throughput queues, pg-boss for PostgreSQL-based queues without Redis.
Key Takeaways
- BullMQ: ~500K weekly downloads — Redis, priorities, flows, rate limiting, repeatable
- Bee-Queue: ~100K weekly downloads — Redis, simple, fast, lightweight
- pg-boss: ~100K weekly downloads — PostgreSQL, ACID, no Redis, SKIP LOCKED
- BullMQ has the richest feature set (flows, rate limiting, groups)
- pg-boss eliminates Redis dependency — uses your existing Postgres
- Bee-Queue is the simplest option for basic job processing
BullMQ
BullMQ — feature-rich Redis queue:
Basic queue
```ts
import { Queue, Worker } from "bullmq"

const connection = { host: "localhost", port: 6379 }

// Create queue:
const emailQueue = new Queue("emails", { connection })

// Add job:
await emailQueue.add("welcome-email", {
  to: "user@example.com",
  subject: "Welcome to PkgPulse!",
  template: "welcome",
})

// Process jobs:
const worker = new Worker("emails", async (job) => {
  console.log(`Sending email to ${job.data.to}`)
  await sendEmail(job.data)
  return { sent: true }
}, { connection })

// Events:
worker.on("completed", (job, result) => {
  console.log(`Job ${job.id} completed:`, result)
})
worker.on("failed", (job, err) => {
  console.error(`Job ${job?.id} failed:`, err.message)
})
```
Delayed and scheduled jobs
```ts
import { Queue } from "bullmq"

const queue = new Queue("tasks", { connection })

// Delayed job (run in 5 minutes):
await queue.add("reminder", { userId: "123" }, {
  delay: 5 * 60 * 1000,
})

// Repeatable job (every hour):
await queue.add("sync-data", {}, {
  repeat: {
    every: 60 * 60 * 1000,
  },
})

// Cron schedule:
await queue.add("daily-report", {}, {
  repeat: {
    pattern: "0 9 * * *", // 9am daily
    tz: "America/New_York",
  },
})

// Priority (lower = higher priority):
await queue.add("urgent", { alert: true }, { priority: 1 })
await queue.add("normal", { data: "..." }, { priority: 5 })
await queue.add("low", { cleanup: true }, { priority: 10 })
```
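As a mental model for the `priority` option: jobs with lower numbers dequeue first, and equal priorities keep FIFO order. A minimal sketch of that ordering (my own illustrative function, not BullMQ internals):

```typescript
// Illustrative only: models BullMQ's "lower number = higher priority"
// semantics for the `priority` option. Not BullMQ internals.
interface QueuedJob {
  name: string
  priority: number // 1 = most urgent
}

function dequeueOrder(jobs: QueuedJob[]): string[] {
  // Stable sort: jobs with equal priority keep their insertion order.
  return [...jobs]
    .sort((a, b) => a.priority - b.priority)
    .map((job) => job.name)
}
```

With the three jobs above, `dequeueOrder` yields `urgent` before `normal` before `low`.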
Job flows (parent/child dependencies)
```ts
import { FlowProducer } from "bullmq"

const flow = new FlowProducer({ connection })

// Parent job depends on children:
await flow.add({
  name: "generate-report",
  queueName: "reports",
  data: { reportId: "monthly-2026-03" },
  children: [
    {
      name: "fetch-downloads",
      queueName: "data",
      data: { metric: "downloads" },
    },
    {
      name: "fetch-stars",
      queueName: "data",
      data: { metric: "stars" },
    },
    {
      name: "fetch-issues",
      queueName: "data",
      data: { metric: "issues" },
    },
  ],
})

// Parent runs only after all children complete
```
Rate limiting and retries
```ts
import { Queue, Worker } from "bullmq"

const queue = new Queue("tasks", { connection })

// Rate-limited worker:
const worker = new Worker("api-calls", async (job) => {
  await callExternalAPI(job.data)
}, {
  connection,
  limiter: {
    max: 10, // Max 10 jobs
    duration: 1000, // Per second
  },
})

// Job with retry (5 attempts = up to 4 retries):
await queue.add("flaky-task", { url: "..." }, {
  attempts: 5,
  backoff: {
    type: "exponential",
    delay: 1000, // Retries after 1s, 2s, 4s, 8s
  },
})

// Custom backoff:
await queue.add("custom-retry", {}, {
  attempts: 3,
  backoff: {
    type: "custom",
  },
})

// In worker:
const worker2 = new Worker("tasks", processor, {
  connection,
  settings: {
    backoffStrategy: (attemptsMade) => {
      return attemptsMade * 5000 // 5s, 10s, 15s
    },
  },
})
```
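The exponential backoff schedule above follows delay · 2^(attempt − 1). A small helper makes the arithmetic explicit (a hypothetical function of my own; BullMQ computes this internally):

```typescript
// Computes the wait before retry N under exponential backoff, matching
// the delay * 2^(attempt - 1) pattern described above.
// Hypothetical helper for illustration -- not part of the BullMQ API.
function exponentialBackoff(baseDelayMs: number, attemptsMade: number): number {
  return baseDelayMs * 2 ** (attemptsMade - 1)
}
```

With a 1000 ms base delay, retries 1 through 4 wait 1s, 2s, 4s, and 8s.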
Bee-Queue
Bee-Queue — lightweight Redis queue:
Basic queue
```ts
import BeeQueue from "bee-queue"

// Create queue:
const queue = new BeeQueue("emails", {
  redis: { host: "localhost", port: 6379 },
  isWorker: true,
  removeOnSuccess: true,
  removeOnFailure: false,
})

// Add job (save() returns a promise, so await it):
const job = queue.createJob({
  to: "user@example.com",
  subject: "Welcome!",
  template: "welcome",
})
await job
  .timeout(10000) // 10s timeout
  .retries(3) // 3 retries
  .backoff("exponential", 1000)
  .save()

// Process jobs:
queue.process(async (job) => {
  console.log(`Processing job ${job.id}:`, job.data)
  await sendEmail(job.data)
  return { sent: true, timestamp: Date.now() }
})
```
Concurrency
```ts
import BeeQueue from "bee-queue"

const queue = new BeeQueue("image-processing", {
  redis: { host: "localhost", port: 6379 },
})

// Process 5 jobs concurrently:
queue.process(5, async (job) => {
  const { imageUrl, width, height } = job.data
  // Report progress:
  job.reportProgress(10)
  const image = await downloadImage(imageUrl)
  job.reportProgress(50)
  const resized = await resizeImage(image, width, height)
  job.reportProgress(90)
  const url = await uploadImage(resized)
  job.reportProgress(100)
  return { url }
})

// Track progress:
const job = await queue.createJob({ imageUrl: "...", width: 800, height: 600 }).save()
job.on("progress", (progress) => {
  console.log(`Job ${job.id}: ${progress}%`)
})
job.on("succeeded", (result) => {
  console.log(`Done: ${result.url}`)
})
job.on("failed", (err) => {
  console.error(`Failed: ${err.message}`)
})
```
Events and health check
```ts
import BeeQueue from "bee-queue"

const queue = new BeeQueue("tasks", {
  redis: { host: "localhost", port: 6379 },
})

// Queue-level events:
queue.on("ready", () => console.log("Queue ready"))
queue.on("error", (err) => console.error("Queue error:", err))
queue.on("succeeded", (job, result) => {
  console.log(`Job ${job.id} succeeded:`, result)
})
queue.on("failed", (job, err) => {
  console.error(`Job ${job.id} failed:`, err.message)
})
queue.on("stalled", (jobId) => {
  console.warn(`Job ${jobId} stalled`)
})

// Health check:
const health = await queue.checkHealth()
console.log({
  waiting: health.waiting,
  active: health.active,
  succeeded: health.succeeded,
  failed: health.failed,
  delayed: health.delayed,
})

// Graceful shutdown:
process.on("SIGTERM", async () => {
  await queue.close()
  process.exit(0)
})
```
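`checkHealth` only returns raw counters, so any derived metric is up to you. A small sketch that turns those counts into a failure rate for monitoring (the interface mirrors the checkHealth shape; the helper name is my own):

```typescript
// Derives a failure ratio from Bee-Queue-style health counts.
// `HealthCounts` mirrors the object returned by queue.checkHealth();
// `failureRate` is a hypothetical helper, not part of Bee-Queue.
interface HealthCounts {
  waiting: number
  active: number
  succeeded: number
  failed: number
  delayed: number
}

function failureRate(health: HealthCounts): number {
  const finished = health.succeeded + health.failed
  // Avoid dividing by zero before any job has finished.
  return finished === 0 ? 0 : health.failed / finished
}
```

A value creeping above your alert threshold (say 0.05) is a cheap early-warning signal.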
pg-boss
pg-boss — PostgreSQL job queue:
Basic queue
```ts
import PgBoss from "pg-boss"

// Connect (uses your existing Postgres):
const boss = new PgBoss("postgresql://user:pass@localhost:5432/mydb")
await boss.start()

// v10: queues must be created once before use:
await boss.createQueue("welcome-email")

// Add job:
await boss.send("welcome-email", {
  to: "user@example.com",
  subject: "Welcome to PkgPulse!",
})

// Process jobs (v10 handlers receive a batch of jobs):
await boss.work("welcome-email", async ([job]) => {
  console.log(`Sending email to ${job.data.to}`)
  await sendEmail(job.data)
})

// Graceful shutdown:
process.on("SIGTERM", async () => {
  await boss.stop()
})
```
Scheduling and options
```ts
import PgBoss from "pg-boss"

const boss = new PgBoss(connectionString)
await boss.start()
// Queues below are assumed to exist (created once via boss.createQueue()).

// Delayed job:
await boss.send("reminder", { userId: "123" }, {
  startAfter: 300, // 300 seconds from now
})

// Specific time:
await boss.send("report", { type: "monthly" }, {
  startAfter: new Date("2026-04-01T09:00:00Z"),
})

// Cron schedule (data is the third argument, options the fourth):
await boss.schedule("daily-cleanup", "0 2 * * *", null, {
  retentionDays: 7,
})
await boss.work("daily-cleanup", async () => {
  await cleanupOldRecords()
})

// Retry policy:
await boss.send("flaky-api", { url: "..." }, {
  retryLimit: 5,
  retryDelay: 30, // 30 seconds
  retryBackoff: true, // Exponential backoff
  expireInMinutes: 60, // Timeout after 60 min
})

// Priority (higher number = higher priority in pg-boss):
await boss.send("urgent-task", {}, { priority: 1 })
await boss.send("normal-task", {}, { priority: 0 })
```
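`startAfter` accepts either a number of seconds from now or an absolute `Date`. Normalizing both forms to a timestamp makes the difference explicit (a sketch of the semantics, not pg-boss code):

```typescript
// Resolves pg-boss-style startAfter values (seconds-from-now or a Date)
// to an absolute start time. Illustrative helper, not part of pg-boss.
function resolveStartAfter(startAfter: number | Date, now: Date): Date {
  if (startAfter instanceof Date) {
    return startAfter // absolute time: use as-is
  }
  return new Date(now.getTime() + startAfter * 1000) // relative seconds
}
```

So `startAfter: 300` resolves to five minutes after enqueue time, while a `Date` pins the job to a wall-clock instant.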
Singleton and throttling
```ts
import PgBoss from "pg-boss"

const boss = new PgBoss(connectionString)
await boss.start()

// Singleton job (only one active at a time):
await boss.send("sync", {}, {
  singletonKey: "main-sync",
  singletonMinutes: 5, // At most once per 5 minutes
})

// Throttle (at most one pending job per key per 30s window):
await boss.send("search-index", { query: "react" }, {
  singletonKey: "reindex",
  singletonSeconds: 30,
})

// Dead letter queue (v10: configured when creating the queue):
await boss.createQueue("failed-payments")
await boss.createQueue("process-payment", { deadLetter: "failed-payments" })
await boss.send("process-payment", { orderId: "123" }, {
  retryLimit: 3,
})

// Monitor dead letter queue:
await boss.work("failed-payments", async ([job]) => {
  console.error("Payment failed after retries:", job.data)
  await alertOps(job.data)
})
```
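`singletonSeconds` works by bucketing sends into fixed time windows: only one job per key is accepted per window. The windowing can be sketched as follows (my own illustration of the idea, not pg-boss internals):

```typescript
// Maps a timestamp to its throttle window, mimicking how a
// singletonSeconds-style policy groups sends. Two sends that land in
// the same window (and share a singletonKey) collapse into one job.
// Illustrative only -- not pg-boss internals.
function throttleWindow(timestampMs: number, windowSeconds: number): number {
  return Math.floor(timestampMs / (windowSeconds * 1000))
}
```

Two sends 29 seconds apart can still share a 30-second window; what is guaranteed is at most one accepted job per window, not a minimum spacing between jobs.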
Batch and concurrency
```ts
import PgBoss from "pg-boss"

const boss = new PgBoss(connectionString)
await boss.start()

// Batch insert:
const jobs = users.map((user) => ({
  name: "onboard-user",
  data: { userId: user.id, email: user.email },
}))
await boss.insert(jobs)

// Process with concurrency (v10: batchSize replaces teamSize/teamConcurrency):
await boss.work("onboard-user", { batchSize: 5 }, async (jobs) => {
  await Promise.all(jobs.map((job) => onboardUser(job.data)))
})

// Fetch and complete manually (v10: fetch returns an array of jobs):
const [job] = await boss.fetch("manual-queue")
if (job) {
  try {
    await processJob(job.data)
    await boss.complete("manual-queue", job.id)
  } catch (err) {
    await boss.fail("manual-queue", job.id, err)
  }
}
```
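For very large batches, it can help to split the payload before calling `insert` so each statement stays a manageable size. A generic chunking helper (my own, not a pg-boss API):

```typescript
// Splits an array into chunks of at most `size` elements -- useful for
// breaking a large boss.insert() payload into smaller batches.
// Generic helper, not part of the pg-boss API.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size))
  }
  return out
}
```

Usage would then be along the lines of `for (const batch of chunk(jobs, 500)) await boss.insert(batch)`.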
Feature Comparison
| Feature | BullMQ | Bee-Queue | pg-boss |
|---|---|---|---|
| Backend | Redis | Redis | PostgreSQL |
| Priorities | ✅ | ❌ | ✅ |
| Delayed jobs | ✅ | ✅ (limited) | ✅ |
| Repeatable/cron | ✅ | ❌ | ✅ (schedule) |
| Rate limiting | ✅ | ❌ | ❌ |
| Job flows | ✅ (parent/child) | ❌ | ❌ |
| Progress tracking | ✅ | ✅ | ❌ |
| Retry with backoff | ✅ | ✅ | ✅ |
| Singleton jobs | ❌ | ❌ | ✅ |
| Dead letter queue | ❌ | ❌ | ✅ |
| ACID guarantees | ❌ | ❌ | ✅ |
| Concurrency control | ✅ | ✅ | ✅ |
| Dashboard | Bull Board | ❌ | ❌ |
| TypeScript | ✅ | ✅ (types) | ✅ |
| Weekly downloads | ~500K | ~100K | ~100K |
When to Use Each
Use BullMQ if:
- Need a full-featured job queue with Redis
- Want job flows, rate limiting, and repeatable jobs
- Building complex workflow orchestration
- Need a UI dashboard (Bull Board)
Use Bee-Queue if:
- Want a simple, lightweight Redis queue
- Need high throughput with minimal overhead
- Building straightforward job processing
- Don't need advanced features (flows, rate limiting)
Use pg-boss if:
- Don't want to manage Redis — just use Postgres
- Need ACID guarantees for job state
- Want singleton jobs and dead letter queues
- Building on PostgreSQL and want one fewer dependency
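The guidance above can be condensed into a toy decision function. The criteria and names here are my own simplification of the three checklists, not an exhaustive rubric:

```typescript
// Toy decision helper condensing the "When to Use Each" guidance above.
// The shape of QueueNeeds is my own simplification, not a standard API.
interface QueueNeeds {
  hasRedis: boolean // willing to run/manage Redis?
  needsFlows?: boolean // parent/child job dependencies
  needsRateLimiting?: boolean
  needsAcid?: boolean // transactional guarantees for job state
}

function pickQueue(needs: QueueNeeds): "bullmq" | "bee-queue" | "pg-boss" {
  // No Redis, or ACID requirements: pg-boss on your existing Postgres.
  if (!needs.hasRedis || needs.needsAcid) return "pg-boss"
  // Advanced features (flows, rate limiting): BullMQ.
  if (needs.needsFlows || needs.needsRateLimiting) return "bullmq"
  // Otherwise, the lightweight option suffices.
  return "bee-queue"
}
```

As with any rule of thumb, real deployments weigh more factors (ops experience, dashboards, existing infrastructure) than four booleans can capture.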
Methodology
Download data from npm registry (weekly average, February 2026). Feature comparison based on BullMQ v5.x, Bee-Queue v1.x, and pg-boss v10.x.