Best Node.js Background Job Libraries 2026: BullMQ vs Inngest vs Trigger.dev
BullMQ powers millions of background jobs with Redis as the backbone — you own the infrastructure. Inngest and Trigger.dev take a different approach: your functions run in your existing deployment, and a cloud orchestrator handles retries, scheduling, and state. Trigger.dev v3 removed serverless timeouts entirely by moving to long-running compute. These aren't competing implementations of the same pattern — they represent three different architectural philosophies for background job processing.
TL;DR
BullMQ for Redis-based queues with full infrastructure control — the production standard for Node.js background jobs since 2019. Inngest for event-driven durable functions with excellent step function primitives and a generous free tier. Trigger.dev for open-source, self-hostable job orchestration with no serverless timeout constraints and Apache 2.0 licensing. For most new Node.js projects in 2026, BullMQ or Trigger.dev are the right choices based on whether you want Redis-native queuing or a managed orchestration platform.
Key Takeaways
- BullMQ: 1M+ weekly npm downloads, Redis-based, battle-tested since 2019, v5.70+ active development
- Inngest: YC-backed, event-driven architecture, free tier (50K steps/month), self-hosting available
- Trigger.dev: Apache 2.0 open-source, 5K runs/month free, v3 removed serverless timeouts
- BullMQ: Self-hosted Redis, horizontal scaling, job priorities, rate limiting, flow jobs
- Inngest: Step functions with sleep/wait, fan-out, debounce, event routing, 10M+ monthly steps at scale
- Trigger.dev v3: Long-running compute (minutes/hours); standard npm SDKs (OpenAI, Resend, Slack) work directly inside tasks
- BullMQ requires Redis; Inngest/Trigger.dev use PostgreSQL or cloud storage internally
The Problem: Why Background Jobs?
In Node.js web applications, many operations shouldn't block the HTTP response:
- Sending emails after signup
- Resizing uploaded images
- Running AI/LLM inference pipelines
- Processing payments asynchronously
- Syncing data with third-party APIs
- Generating reports or exports
Doing this work in the request handler kills response times and makes retries on failure nearly impossible. Background job libraries decouple enqueuing work from executing it.
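The decoupling can be sketched in miniature. The snippet below is a toy illustration, not any particular library: `handleSignup` and `runWorker` are hypothetical names, and real job libraries persist jobs in Redis or a database so they survive restarts.

```typescript
// Toy sketch: decouple enqueuing work from executing it via a queue.
// Real job libraries persist jobs (Redis, Postgres) so they survive crashes.
type Job = { name: string; data: Record<string, unknown> };

const queue: Job[] = [];

// Request handler: enqueue and respond immediately.
function handleSignup(email: string): string {
  queue.push({ name: 'send-welcome-email', data: { email } });
  return 'ok'; // the HTTP response does not wait for the email
}

// Worker (normally a separate process): drain and execute jobs.
function runWorker(handler: (job: Job) => void): number {
  let processed = 0;
  while (queue.length > 0) {
    handler(queue.shift()!);
    processed++;
  }
  return processed;
}

handleSignup('user@example.com');
const count = runWorker((job) => console.log(`processing ${job.name}`));
console.log(count); // 1
```

The request handler returns as soon as the job is stored; retries, persistence, and scaling are exactly what the libraries below add on top of this pattern.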
BullMQ
Package: bullmq
Weekly downloads: 1M+
GitHub stars: 7K+
Creator: Taskforce.sh team
Requires: Redis
BullMQ is the second generation of Bull — rewritten in TypeScript with a more robust architecture. It's the production standard for Redis-based job queues in Node.js.
Installation
npm install bullmq
# Plus Redis — locally via Docker:
docker run -d -p 6379:6379 redis:alpine
Basic Queue and Worker
import { Queue, Worker, Job } from 'bullmq';
import { Redis } from 'ioredis';

// BullMQ requires maxRetriesPerRequest: null on connections used by workers
const connection = new Redis({
  host: 'localhost',
  port: 6379,
  maxRetriesPerRequest: null,
});

// Define the queue
const emailQueue = new Queue('email', { connection });

// Define job types
interface EmailJobData {
  to: string;
  subject: string;
  body: string;
}

// Add a job to the queue
await emailQueue.add('send-welcome', {
  to: 'user@example.com',
  subject: 'Welcome!',
  body: 'Thanks for signing up.',
} satisfies EmailJobData);

// Process jobs in a worker
const worker = new Worker<EmailJobData>(
  'email',
  async (job: Job<EmailJobData>) => {
    const { to, subject, body } = job.data;
    await sendEmail({ to, subject, body });
    console.log(`Email sent to ${to}`);
  },
  { connection }
);

worker.on('completed', (job) => console.log(`Job ${job.id} completed`));
worker.on('failed', (job, err) => console.error(`Job ${job?.id} failed:`, err));
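In production, workers should also shut down gracefully so in-flight jobs finish before the process exits. A minimal sketch using BullMQ's `worker.close()` (which by default waits for the active job):

```typescript
// Graceful shutdown: let the current job finish, then disconnect.
// Without this, a SIGTERM during deploys leaves jobs stalled until
// BullMQ's stalled-job checker re-queues them.
process.on('SIGTERM', async () => {
  await worker.close();    // waits for the active job to complete
  await connection.quit(); // close the shared ioredis connection
  process.exit(0);
});
```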
Delayed and Repeatable Jobs
// Delayed job — process after 5 minutes
await emailQueue.add('follow-up', jobData, {
  delay: 5 * 60 * 1000, // 5 minutes in ms
});

// Repeatable job — runs every day at 9am
await reportQueue.add(
  'daily-report',
  { reportType: 'daily' },
  {
    repeat: { pattern: '0 9 * * *' }, // BullMQ uses `pattern`, not the older `cron` key
  }
);

// Remove all repeatable jobs
const repeatableJobs = await reportQueue.getRepeatableJobs();
for (const job of repeatableJobs) {
  await reportQueue.removeRepeatableByKey(job.key);
}
Job Priorities and Rate Limiting
// Priority queuing (lower number = higher priority)
await queue.add('critical-payment', data, { priority: 1 });
await queue.add('low-priority-report', data, { priority: 10 });
// Rate limiting (max 10 jobs per second across all workers)
const worker = new Worker('api-calls', processor, {
  connection,
  limiter: {
    max: 10,
    duration: 1000, // per 1000ms window
  },
});
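BullMQ's built-in retries sit alongside these options and are configured per job with `attempts` and `backoff`. A sketch with illustrative values (the queue and job names are hypothetical):

```typescript
// Retry up to 5 times with exponential backoff: 1s, 2s, 4s, 8s, 16s
await queue.add('sync-crm', data, {
  attempts: 5,
  backoff: {
    type: 'exponential',
    delay: 1000, // base delay in ms
  },
  removeOnComplete: true, // keep Redis memory bounded
  removeOnFail: 1000,     // retain only the last 1000 failed jobs for debugging
});
```

Failed jobs that exhaust their attempts stay in the failed set (subject to `removeOnFail`), where they can be inspected or retried from Bull Board.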
Flow Jobs (Chained Pipelines)
BullMQ Flows let you chain jobs where child jobs must complete before the parent:
import { FlowProducer } from 'bullmq';

const flow = new FlowProducer({ connection });

await flow.add({
  name: 'process-order',
  queueName: 'orders',
  data: { orderId: '123' },
  children: [
    {
      name: 'charge-payment',
      queueName: 'payments',
      data: { orderId: '123' },
    },
    {
      name: 'reserve-inventory',
      queueName: 'inventory',
      data: { orderId: '123' },
    },
  ],
});
// charge-payment and reserve-inventory both complete → process-order runs
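The parent's processor can read what its children returned via `job.getChildrenValues()`. A sketch (the `finalizeOrder` helper is hypothetical):

```typescript
// Worker for the parent job: runs only after all children succeed.
const orderWorker = new Worker(
  'orders',
  async (job) => {
    // Map of child job keys to their return values
    const childResults = await job.getChildrenValues();
    console.log('children returned:', Object.values(childResults));
    await finalizeOrder(job.data.orderId);
  },
  { connection }
);
```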
BullMQ UI: Bull Board
npm install @bull-board/express @bull-board/api
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';
import express from 'express';
const serverAdapter = new ExpressAdapter();

const { addQueue } = createBullBoard({
  queues: [new BullMQAdapter(emailQueue)],
  serverAdapter,
});

const app = express();
serverAdapter.setBasePath('/admin/queues');
app.use('/admin/queues', serverAdapter.getRouter());
BullMQ Strengths
- Redis-native: extremely reliable, well-understood infrastructure
- Horizontal scaling: run multiple workers across multiple servers
- Feature-complete: priorities, delays, retries, rate limits, flow jobs
- Large ecosystem: Bull Board, BullMQ Pro (commercial), extensive documentation
- 1M+ weekly downloads = significant community, answers, and Stack Overflow support
BullMQ Limitations
- Redis required: operational overhead (Redis management, memory costs)
- Infrastructure ownership: you manage Redis uptime, persistence, and scaling
- No built-in serverless support: workers need persistent processes (not Lambda-friendly)
- Orchestration complexity: complex multi-step pipelines require manual state management
Inngest
Package: inngest
GitHub stars: 11K+
Creator: Inngest Inc. (YC W22)
Architecture: Cloud orchestrator + local functions
Inngest takes a fundamentally different approach: your code runs in your existing server or serverless functions, and Inngest's orchestration engine handles scheduling, retries, and state persistence.
Installation
npm install inngest
Basic Function
import { Inngest } from 'inngest';

const inngest = new Inngest({ id: 'my-app' });

// Define a function triggered by an event
export const sendWelcomeEmail = inngest.createFunction(
  { id: 'send-welcome-email' },
  { event: 'user/signed-up' },
  async ({ event, step }) => {
    // Each step.run() is independently retried
    await step.run('send-email', async () => {
      await sendEmail({
        to: event.data.email,
        subject: 'Welcome!',
        body: 'Thanks for signing up.',
      });
    });

    // Sleep for 3 days — the function pauses, no infrastructure needed
    await step.sleep('wait-3-days', '3 days');

    await step.run('send-followup', async () => {
      await sendEmail({
        to: event.data.email,
        subject: 'How are you settling in?',
        body: '...',
      });
    });
  }
);
The critical insight: step.sleep('wait-3-days', '3 days') doesn't block a process for three days. Inngest's orchestrator persists the function state and resumes execution after the sleep elapses — zero infrastructure cost during the wait.
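The same orchestration handles waiting on external events: `step.waitForEvent` pauses the function until a matching event arrives or a timeout elapses, resolving to `null` on timeout. A sketch (event names and the `sendReminder` helper are illustrative):

```typescript
export const onboardUser = inngest.createFunction(
  { id: 'onboard-user' },
  { event: 'user/signed-up' },
  async ({ event, step }) => {
    // Pause until this user finishes setup, or give up after 7 days.
    const completed = await step.waitForEvent('wait-for-setup', {
      event: 'user/setup-completed',
      timeout: '7d',
      match: 'data.userId', // only match events for the same user
    });

    if (!completed) {
      // Timeout: waitForEvent resolved to null
      await step.run('send-reminder', () => sendReminder(event.data.userId));
    }
  }
);
```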
Triggering Events
// From your API route or anywhere:
await inngest.send({
  name: 'user/signed-up',
  data: { email: 'user@example.com', userId: '123' },
});
Step Functions: Fan-Out and Parallelism
export const processOrder = inngest.createFunction(
  { id: 'process-order' },
  { event: 'order/created' },
  async ({ event, step }) => {
    // Run steps in parallel
    const [paymentResult, inventoryResult] = await Promise.all([
      step.run('charge-payment', () => chargePayment(event.data.orderId)),
      step.run('reserve-inventory', () => reserveInventory(event.data.orderId)),
    ]);

    if (!paymentResult.success) {
      await step.run('refund-inventory', () =>
        releaseInventory(event.data.orderId)
      );
    }

    await step.run('notify-customer', () =>
      sendConfirmation(event.data.email, event.data.orderId)
    );
  }
);
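Debounce, listed in the takeaways, is declared in the function config: Inngest collapses a burst of events into a single run per key. A sketch (the event name and `syncToCrm` helper are hypothetical):

```typescript
// Run at most once per user per 5-minute burst of 'profile/updated' events
export const syncProfile = inngest.createFunction(
  {
    id: 'sync-profile',
    debounce: { key: 'event.data.userId', period: '5m' },
  },
  { event: 'profile/updated' },
  async ({ event, step }) => {
    await step.run('sync', () => syncToCrm(event.data.userId));
  }
);
```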
Serving Inngest in Express / Next.js
// Express
import { serve } from 'inngest/express';

app.use(express.json()); // the serve handler needs parsed JSON bodies
app.use('/api/inngest', serve({ client: inngest, functions: [sendWelcomeEmail] }));

// Next.js App Router
import { serve } from 'inngest/next';

export const { GET, POST, PUT } = serve({
  client: inngest,
  functions: [sendWelcomeEmail, processOrder],
});
Inngest Pricing
- Free: 50K steps/month
- Pro: $25/month + $0.40 per 1,000 additional steps
- Self-hosting: Available via Docker, free to run the engine
Inngest Limitations
- Cloud-dependent (though self-hosting is available)
- Not open-source (source-available license)
- Steps are billed, which can add up for high-volume applications
- Fewer community resources than BullMQ
Trigger.dev
Package: @trigger.dev/sdk
GitHub stars: 12K+
Creator: Trigger.dev team
License: Apache 2.0 (fully open-source)
Trigger.dev v3 is a major architectural shift: jobs now run on dedicated long-running compute (not serverless functions), removing the timeout constraints that plagued v2 and most serverless job systems.
Installation
npm install @trigger.dev/sdk
npx trigger.dev@latest init
Basic Task
import { task } from '@trigger.dev/sdk/v3';
import { OpenAI } from 'openai';

const openai = new OpenAI();

export const generateReport = task({
  id: 'generate-report',
  // Runs for minutes or hours — no serverless timeout
  run: async (payload: { userId: string; reportType: string }) => {
    // `db` stands in for your app's data layer (Prisma, Drizzle, etc.)
    const user = await db.users.findById(payload.userId);

    // This can take 10 minutes — no timeout on Trigger.dev v3
    const completion = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        {
          role: 'user',
          content: `Generate a ${payload.reportType} report for user ${user.name}`,
        },
      ],
      max_tokens: 10000, // Takes time — no problem
    });

    await db.reports.create({
      userId: payload.userId,
      content: completion.choices[0].message.content,
    });

    return { success: true };
  },
});
Triggering Tasks
import { generateReport } from './trigger/generate-report';
// From your API handler:
const handle = await generateReport.trigger({
  userId: '123',
  reportType: 'monthly',
});
// Check status later:
console.log(handle.id); // Use this to poll or query status
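That id can later be passed to the v3 SDK's `runs.retrieve` to check status and output. A sketch (status string values shown are illustrative of the API's run states):

```typescript
import { runs } from '@trigger.dev/sdk/v3';

// Poll the run from anywhere in your backend
const run = await runs.retrieve(handle.id);
console.log(run.status); // e.g. 'EXECUTING' or 'COMPLETED'
if (run.status === 'COMPLETED') {
  console.log(run.output); // the task's return value
}
```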
Scheduled Tasks
import { schedules } from '@trigger.dev/sdk/v3';

export const dailyCleanup = schedules.task({
  id: 'daily-cleanup',
  cron: '0 2 * * *', // 2am daily
  run: async (payload) => {
    await db.sessions.deleteExpired();
    await db.tempFiles.deleteOld();
  },
});
Using npm Packages Directly
Trigger.dev v2 shipped special integration packages with an `io` wrapper; v3 removed them. Because v3 tasks run in standard, long-lived Node.js processes, you use the official SDKs (openai, resend, @slack/web-api) directly:
import { task } from '@trigger.dev/sdk/v3';
import { Resend } from 'resend';
import OpenAI from 'openai';

const resend = new Resend(process.env.RESEND_API_KEY);
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export const sendAiEmail = task({
  id: 'send-ai-email',
  run: async (payload: { to: string; topic: string }) => {
    const content = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: `Write an email about: ${payload.topic}` }],
    });
    await resend.emails.send({
      from: 'hello@example.com',
      to: payload.to,
      subject: `About ${payload.topic}`,
      text: content.choices[0].message.content!,
    });
  },
});
Trigger.dev Pricing
- Free: 5,000 runs/month (managed cloud)
- Paid: Per second of compute + per run (approximately $1/month for 100 runs/day of 10s tasks)
- Self-hosted: Unlimited runs — Apache 2.0, Docker + PostgreSQL
Trigger.dev Self-Hosting
# Docker Compose self-hosting
git clone https://github.com/triggerdotdev/trigger.dev
cd trigger.dev
cp .env.example .env
docker compose up
Full self-hosting gives you unlimited runs with the same feature set as the managed cloud — the biggest advantage over Inngest for cost-sensitive or compliance-heavy teams.
Trigger.dev Limitations
- Newer ecosystem than BullMQ (fewer Stack Overflow answers)
- Requires Trigger.dev cloud or self-hosted server (vs BullMQ's Redis-only)
- v3 dashboard is less mature than Bull Board for BullMQ
Comparison Table
| Feature | BullMQ | Inngest | Trigger.dev |
|---|---|---|---|
| Redis required | Yes | No | No (PostgreSQL) |
| Self-hosted | Yes (you run Redis) | Yes (open-source engine) | Yes (Apache 2.0) |
| Open-source | Yes (MIT) | Source-available | Yes (Apache 2.0) |
| Serverless support | Limited | Excellent | Excellent (v3) |
| Long-running jobs | Yes (persistent workers) | Yes (step sleep) | Yes (dedicated compute) |
| Free tier | Self-host only | 50K steps/month | 5K runs/month |
| Job priorities | Yes | Via functions | Via task config |
| Cron/scheduled jobs | Yes | Yes | Yes |
| Step functions | Manual (flows) | First-class (step.run) | Via subtasks |
| Built-in retries | Yes | Yes | Yes |
| Weekly downloads | 1M+ | Growing | Growing |
Decision Guide
Choose BullMQ if:
- You already have Redis in your infrastructure
- You need maximum control and flexibility over queue behavior
- Your workers run as persistent processes (not serverless)
- You want the most battle-tested, documented, and community-supported solution
- Complex priority queuing or rate limiting is core to your system
Choose Inngest if:
- You're on Vercel, Netlify, or serverless architecture
- Event-driven workflows with multiple steps and waits fit your model
- The step function primitives (sleep, debounce, fan-out) match your use case
- You want a managed solution with minimal ops overhead
Choose Trigger.dev if:
- Open-source licensing (Apache 2.0) is required
- Self-hosting with unlimited runs is important
- You run long AI/LLM jobs that exceed serverless timeouts (minutes, not seconds)
- You want to call SDKs like OpenAI, Resend, or Slack directly from long-running tasks with no wrapper code
The 2026 Stack
For a new Node.js application in 2026:
- Traditional API server (Express, Hono, Fastify) → BullMQ + Redis is the proven choice. Familiar infrastructure, well-documented, horizontally scalable.
- Serverless / edge deployment (Vercel, Cloudflare) → Inngest or Trigger.dev. BullMQ workers don't fit the serverless model.
- AI pipeline processing (LLM inference, report generation) → Trigger.dev v3. No timeouts, direct use of the OpenAI SDK, Apache 2.0 for self-hosting.
Compare download trends on PkgPulse.
See the live comparison
View best nodejs background job libraries on PkgPulse →