BullMQ vs Inngest vs Trigger.dev 2026: Node.js Jobs
BullMQ powers millions of background jobs with Redis as the backbone — you own the infrastructure. Inngest and Trigger.dev take a different approach: your functions run in your existing deployment, and a cloud orchestrator handles retries, scheduling, and state. Trigger.dev v3 removed serverless timeouts entirely by moving to long-running compute. These aren't competing implementations of the same pattern — they represent three different architectural philosophies for background job processing.
The Background Job Problem
Background jobs are the infrastructure that handles everything your HTTP request handlers can't. When a user signs up, you shouldn't make them wait while you send a welcome email, generate a PDF, provision their account on external services, and log an analytics event. When a payment webhook arrives, you need to process it reliably even if your database is briefly unavailable. When you need to send 10,000 emails to a newsletter list, you need to do it in batches that respect rate limits, with retries for failures, without timing out your server.
The core challenge is reliability. A simple setTimeout is fine for fire-and-forget tasks in a single process, but it doesn't survive server restarts, doesn't retry on failure, and gives you no visibility into what's running. Production background job systems need at-least-once delivery guarantees, configurable retry behavior with backoff, observability into job status, and either state persistence (so jobs survive restarts) or external orchestration.
What makes the 2026 landscape interesting is that serverless deployment changed the requirements. If your app runs on Vercel, you can't maintain a persistent worker process or a long-lived Redis connection — your functions spin up and down on demand. Inngest and Trigger.dev were built specifically to solve this: they handle durability externally, letting your serverless functions act as job handlers invoked over HTTP. This architectural difference is the key axis for choosing between the three libraries.
The good news is that all three options are mature and production-ready in 2026. The npm download numbers reflect genuine adoption: BullMQ's 1M weekly downloads come from years of being the default Redis queue for Node.js, while Inngest and Trigger.dev's fast growth reflects real migration of teams from Redis-based queues to managed alternatives. There's no wrong answer here — the right choice depends on your deployment environment, operational preferences, and the complexity of your job workflows.
TL;DR
BullMQ for Redis-based queues with full infrastructure control — the production standard for Node.js background jobs since 2019. Inngest for event-driven durable functions with excellent step function primitives and a generous free tier. Trigger.dev for open-source, self-hostable job orchestration with no serverless timeout constraints and Apache 2.0 licensing. For most new Node.js projects in 2026, the decision comes down to whether you want Redis-native queuing you operate yourself (BullMQ) or a managed orchestration platform (Inngest or Trigger.dev).
Key Takeaways
- BullMQ: 1M+ weekly npm downloads, Redis-based, battle-tested since 2019, v5.70+ active development
- Inngest: YC-backed, event-driven architecture, free tier (50K steps/month), self-hosting available
- Trigger.dev: Apache 2.0 open-source, 5K runs/month free, v3 removed serverless timeouts
- BullMQ: Self-hosted Redis, horizontal scaling, job priorities, rate limiting, flow jobs
- Inngest: Step functions with sleep/wait, fan-out, debounce, event routing, 10M+ monthly steps at scale
- Trigger.dev v3: Long-running compute (minutes/hours), standard SDKs (OpenAI, Resend, Slack) work directly in tasks
- BullMQ requires Redis; Inngest/Trigger.dev use PostgreSQL or cloud storage internally
The Problem: Why Background Jobs?
In Node.js web applications, many operations shouldn't block the HTTP response:
- Sending emails after signup
- Resizing uploaded images
- Running AI/LLM inference pipelines
- Processing payments asynchronously
- Syncing data with third-party APIs
- Generating reports or exports
Doing this work in the request handler kills response times and makes retries on failure nearly impossible. Background job libraries decouple enqueuing work from executing it.
BullMQ
Package: bullmq
Weekly downloads: 1M+
GitHub stars: 7K+
Creator: Taskforce.sh team
Requires: Redis
BullMQ is the second generation of Bull — rewritten in TypeScript with a more robust architecture. It's the production standard for Redis-based job queues in Node.js.
Installation
npm install bullmq
# Plus Redis — locally via Docker:
docker run -d -p 6379:6379 redis:alpine
Basic Queue and Worker
import { Queue, Worker, Job } from 'bullmq';
import { Redis } from 'ioredis';
const connection = new Redis({ host: 'localhost', port: 6379, maxRetriesPerRequest: null }); // BullMQ workers require maxRetriesPerRequest: null
// Define the queue
const emailQueue = new Queue('email', { connection });
// Define job types
interface EmailJobData {
to: string;
subject: string;
body: string;
}
// Add a job to the queue
await emailQueue.add('send-welcome', {
to: 'user@example.com',
subject: 'Welcome!',
body: 'Thanks for signing up.',
} satisfies EmailJobData);
// Process jobs in a worker
const worker = new Worker<EmailJobData>(
'email',
async (job: Job<EmailJobData>) => {
const { to, subject, body } = job.data;
await sendEmail({ to, subject, body });
console.log(`Email sent to ${to}`);
},
{ connection }
);
worker.on('completed', (job) => console.log(`Job ${job.id} completed`));
worker.on('failed', (job, err) => console.error(`Job ${job?.id} failed:`, err));
Delayed and Repeatable Jobs
// Delayed job — process after 5 minutes
await emailQueue.add('follow-up', jobData, {
delay: 5 * 60 * 1000, // 5 minutes in ms
});
// Repeatable job — runs every day at 9am
await reportQueue.add(
'daily-report',
{ reportType: 'daily' },
{
repeat: { pattern: '0 9 * * *' }, // BullMQ v3+ uses `pattern`; the older `cron` key is deprecated
}
);
// Remove all repeatable jobs
const repeatableJobs = await reportQueue.getRepeatableJobs();
for (const job of repeatableJobs) {
await reportQueue.removeRepeatableByKey(job.key);
}
Job Priorities and Rate Limiting
// Priority queuing (lower number = higher priority)
await queue.add('critical-payment', data, { priority: 1 });
await queue.add('low-priority-report', data, { priority: 10 });
// Rate limiting (max 10 jobs per second across all workers)
const worker = new Worker('api-calls', processor, {
connection,
limiter: {
max: 10,
duration: 1000, // per 1000ms
},
});
Flow Jobs (Chained Pipelines)
BullMQ Flows let you chain jobs where child jobs must complete before the parent:
import { FlowProducer } from 'bullmq';
const flow = new FlowProducer({ connection });
await flow.add({
name: 'process-order',
queueName: 'orders',
data: { orderId: '123' },
children: [
{
name: 'charge-payment',
queueName: 'payments',
data: { orderId: '123' },
},
{
name: 'reserve-inventory',
queueName: 'inventory',
data: { orderId: '123' },
},
],
});
// charge-payment and reserve-inventory both complete → process-order runs
BullMQ UI: Bull Board
npm install @bull-board/express @bull-board/api
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';
import express from 'express';
const serverAdapter = new ExpressAdapter();
createBullBoard({
queues: [new BullMQAdapter(emailQueue)],
serverAdapter,
});
const app = express();
serverAdapter.setBasePath('/admin/queues');
app.use('/admin/queues', serverAdapter.getRouter());
BullMQ Strengths
- Redis-native: extremely reliable, well-understood infrastructure
- Horizontal scaling: run multiple workers across multiple servers
- Feature-complete: priorities, delays, retries, rate limits, flow jobs
- Large ecosystem: Bull Board, BullMQ Pro (commercial), extensive documentation
- 1M+ weekly downloads = significant community, answers, and Stack Overflow support
BullMQ Limitations
- Redis required: operational overhead (Redis management, memory costs)
- Infrastructure ownership: you manage Redis uptime, persistence, and scaling
- No built-in serverless support: workers need persistent processes (not Lambda-friendly)
- Orchestration complexity: complex multi-step pipelines require manual state management
Inngest
Package: inngest
GitHub stars: 11K+
Creator: Inngest Inc. (YC W22)
Architecture: Cloud orchestrator + local functions
Inngest takes a fundamentally different approach: your code runs in your existing server or serverless functions, and Inngest's orchestration engine handles scheduling, retries, and state persistence.
Installation
npm install inngest
Basic Function
import { Inngest } from 'inngest';
const inngest = new Inngest({ id: 'my-app' });
// Define a function triggered by an event
export const sendWelcomeEmail = inngest.createFunction(
{ id: 'send-welcome-email' },
{ event: 'user/signed-up' },
async ({ event, step }) => {
// Each step.run() is independently retried
await step.run('send-email', async () => {
await sendEmail({
to: event.data.email,
subject: 'Welcome!',
body: 'Thanks for signing up.',
});
});
// Sleep for 3 days — the function pauses, no infrastructure needed
await step.sleep('wait-3-days', '3 days');
await step.run('send-followup', async () => {
await sendEmail({
to: event.data.email,
subject: 'How are you settling in?',
body: '...',
});
});
}
);
The critical insight: step.sleep('wait-3-days', '3 days') doesn't block a process for 3 days. Inngest's orchestrator persists the function state and resumes it after 3 days — zero infrastructure cost during the wait.
Triggering Events
// From your API route or anywhere:
await inngest.send({
name: 'user/signed-up',
data: { email: 'user@example.com', userId: '123' },
});
Step Functions: Fan-Out and Parallelism
export const processOrder = inngest.createFunction(
{ id: 'process-order' },
{ event: 'order/created' },
async ({ event, step }) => {
// Run steps in parallel
const [paymentResult, inventoryResult] = await Promise.all([
step.run('charge-payment', () => chargePayment(event.data.orderId)),
step.run('reserve-inventory', () => reserveInventory(event.data.orderId)),
]);
if (!paymentResult.success) {
await step.run('refund-inventory', () =>
releaseInventory(event.data.orderId)
);
}
await step.run('notify-customer', () =>
sendConfirmation(event.data.email, event.data.orderId)
);
}
);
Serving Inngest in Express / Next.js
// Express
import { serve } from 'inngest/express';
app.use('/api/inngest', serve({ client: inngest, functions: [sendWelcomeEmail] }));
// Next.js App Router
import { serve } from 'inngest/next';
export const { GET, POST, PUT } = serve({
client: inngest,
functions: [sendWelcomeEmail, processOrder],
});
Inngest Pricing
- Free: 50K steps/month
- Pro: $25/month + $0.40 per 1,000 additional steps
- Self-hosting: Available via Docker, free to run the engine
Inngest Limitations
- Cloud-dependent (though self-hosting is available)
- Not open-source (source-available license)
- Steps are billed, which can add up for high-volume applications
- Less community resources than BullMQ
Trigger.dev
Package: @trigger.dev/sdk
GitHub stars: 12K+
Creator: Trigger.dev team
License: Apache 2.0 (fully open-source)
Trigger.dev v3 is a major architectural shift: jobs now run on dedicated long-running compute (not serverless functions), removing the timeout constraints that plagued v2 and most serverless job systems.
Installation
npm install @trigger.dev/sdk
npx trigger.dev@latest init
Basic Task
import { task } from '@trigger.dev/sdk/v3';
import { OpenAI } from 'openai';
const openai = new OpenAI();
export const generateReport = task({
id: 'generate-report',
// Runs for minutes or hours — no serverless timeout
run: async (payload: { userId: string; reportType: string }) => {
const user = await db.users.findById(payload.userId);
// This can take 10 minutes — no timeout on Trigger.dev v3
const completion = await openai.chat.completions.create({
model: 'gpt-4o',
messages: [
{
role: 'user',
content: `Generate a ${payload.reportType} report for user ${user.name}`,
},
],
max_tokens: 10000, // Takes time — no problem
});
await db.reports.create({
userId: payload.userId,
content: completion.choices[0].message.content,
});
return { success: true };
},
});
Triggering Tasks
import { generateReport } from './trigger/generate-report';
// From your API handler:
const handle = await generateReport.trigger({
userId: '123',
reportType: 'monthly',
});
// Check status later:
console.log(handle.id); // Use this to poll or query status
Batch Triggering
Trigger.dev v3 supports batch triggering, which allows submitting hundreds of job payloads in a single API call. This is important for fan-out scenarios like processing a list of uploaded files or kicking off per-user jobs after a bulk import:
await processSignupTask.batchTrigger([
{ payload: { userId: '1', email: 'a@example.com', name: 'Alice' } },
{ payload: { userId: '2', email: 'b@example.com', name: 'Bob' } },
]);
Rather than triggering each job individually in a loop, batchTrigger([...]) submits every payload in a single API call, reducing round-trips and avoiding a partially submitted batch if a loop of individual trigger() calls is interrupted midway.
Scheduled Tasks
import { schedules } from '@trigger.dev/sdk/v3';
export const dailyCleanup = schedules.task({
id: 'daily-cleanup',
cron: '0 2 * * *', // 2am daily
run: async (payload) => {
await db.sessions.deleteExpired();
await db.tempFiles.deleteOld();
},
});
Third-Party SDKs (OpenAI, Resend)
Because Trigger.dev v3 tasks run in ordinary long-lived Node.js processes, you use official SDKs directly; the v2-era @trigger.dev/* integration wrappers are no longer needed:
import { task } from '@trigger.dev/sdk/v3';
import OpenAI from 'openai';
import { Resend } from 'resend';
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const resend = new Resend(process.env.RESEND_API_KEY);
export const sendAiEmail = task({
  id: 'send-ai-email',
  run: async (payload: { to: string; topic: string }) => {
    const completion = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: `Write an email about: ${payload.topic}` }],
    });
    await resend.emails.send({
      from: 'hello@example.com',
      to: payload.to,
      subject: `About ${payload.topic}`,
      text: completion.choices[0].message.content!,
    });
  },
});
Trigger.dev Pricing
- Free: 5,000 runs/month (managed cloud)
- Paid: Per second of compute + per run (approximately $1/month for 100 runs/day of 10s tasks)
- Self-hosted: Unlimited runs — Apache 2.0, Docker + PostgreSQL
Trigger.dev Self-Hosting
# Docker Compose self-hosting
git clone https://github.com/triggerdotdev/trigger.dev
cd trigger.dev
cp .env.example .env
docker compose up
Full self-hosting gives you unlimited runs with the same feature set as the managed cloud — the biggest advantage over Inngest for cost-sensitive or compliance-heavy teams.
Trigger.dev Limitations
- Newer ecosystem than BullMQ (fewer Stack Overflow answers)
- Requires Trigger.dev cloud or self-hosted server (vs BullMQ's Redis-only)
- v3 dashboard is less mature than Bull Board for BullMQ
Comparison Table
| Feature | BullMQ | Inngest | Trigger.dev |
|---|---|---|---|
| Redis required | Yes | No | No (PostgreSQL) |
| Self-hosted | Yes (you run Redis) | Yes (open-source engine) | Yes (Apache 2.0) |
| Open-source | Yes (MIT) | Source-available | Yes (Apache 2.0) |
| Serverless support | Limited | Excellent | Excellent (v3) |
| Long-running jobs | Yes (persistent workers) | Yes (step sleep) | Yes (dedicated compute) |
| Free tier | Self-host only | 50K steps/month | 5K runs/month |
| Job priorities | Yes | Via functions | Via task config |
| Cron/scheduled jobs | Yes | Yes | Yes |
| Step functions | Manual (flows) | First-class (step.run) | Via subtasks |
| Built-in retries | Yes | Yes | Yes |
| Weekly downloads | 1M+ | Growing | Growing |
Decision Guide
Choose BullMQ if:
- You already have Redis in your infrastructure
- You need maximum control and flexibility over queue behavior
- Your workers run as persistent processes (not serverless)
- You want the most battle-tested, documented, and community-supported solution
- Complex priority queuing or rate limiting is core to your system
Choose Inngest if:
- You're on Vercel, Netlify, or serverless architecture
- Event-driven workflows with multiple steps and waits fit your model
- The step function primitives (sleep, debounce, fan-out) match your use case
- You want a managed solution with minimal ops overhead
Choose Trigger.dev if:
- Open-source licensing (Apache 2.0) is required
- Self-hosting with unlimited runs is important
- You run long AI/LLM jobs that exceed serverless timeouts (minutes, not seconds)
- Your tasks lean on standard SDKs (OpenAI, Resend, Slack), which run directly in v3's long-lived processes with no wrapper code
The 2026 Stack
For a new Node.js application in 2026:
- Traditional API server (Express, Hono, Fastify) → BullMQ + Redis is the proven choice. Familiar infrastructure, well-documented, horizontally scalable.
- Serverless / edge deployment (Vercel, Cloudflare) → Inngest or Trigger.dev. BullMQ workers don't fit the serverless model.
- AI pipeline processing (LLM inference, report generation) → Trigger.dev v3. No timeouts, direct OpenAI SDK support, Apache 2.0 for self-hosting.
Migrating Between Libraries
A common path is starting with BullMQ on a self-hosted server, then needing to move to Inngest or Trigger.dev when migrating to a serverless deployment. The conceptual migration is straightforward but requires rethinking how you structure your jobs.
BullMQ's producer-consumer model maps cleanly to Inngest's event model: instead of queue.add('job-name', payload), you call inngest.send({ name: 'event-name', data: payload }). Instead of a Worker that processes jobs, you define functions with inngest.createFunction(). The main structural change is adopting step-based organization — the code in your BullMQ worker handler gets split into discrete step.run() calls, each with a unique name. This provides the step-level retry behavior that makes Inngest work correctly on serverless platforms.
For teams moving from the original bull library (BullMQ's predecessor) to BullMQ, the migration is largely mechanical. The BullMQ API is intentionally similar to Bull's, with TypeScript types and some API cleanup. Most Bull code runs with minor modifications after installing BullMQ.
Managing Redis Costs at Scale
If you choose BullMQ, Redis cost management matters at higher job volumes. The default BullMQ configuration keeps job records indefinitely, which consumes significant Redis memory over time. The removeOnComplete and removeOnFail options control retention — keeping the last 100–200 completed jobs gives you enough visibility for debugging without unbounded storage growth:
await queue.add('job-name', payload, {
removeOnComplete: 100, // Keep last 100 completed jobs in Redis
removeOnFail: 200, // Keep last 200 failed jobs for debugging
});
Upstash is popular for BullMQ in lower-volume applications: serverless Redis with a per-request pricing model, free for the first 10,000 requests per day. For higher-volume scenarios (millions of jobs per day), a dedicated Redis instance on Railway or managed ElastiCache typically provides better performance at lower cost than per-request pricing. For Inngest and Trigger.dev, pricing is based on event/run volume rather than infrastructure. The math typically favors self-hosted Redis via BullMQ at very high volumes; managed pricing is competitive at lower and mid volumes where the operational simplicity has meaningful value.
Observability and Monitoring in Production
Background job systems fail in ways that are invisible without deliberate observability investment. A job that silently retries five times before failing leaves no trace in your application logs unless you instrument the failure events.

BullMQ emits events on every state transition — active, completed, failed, stalled — and the stalled event is particularly important: it fires when a worker process crashes while processing a job, triggering the job to be re-queued for another worker. Without monitoring the failed and stalled events and connecting them to an alerting system, BullMQ failures accumulate in the failed job queue invisibly. Bull Board provides a UI for inspecting queued, active, and failed jobs, but it requires you to know to look — alerts are better than dashboards for production failures.

Inngest and Trigger.dev build observability into their managed infrastructure: every function run has a trace in the dashboard with step-level timing and error details, making debugging much faster without requiring custom instrumentation.
The practical recommendation: set up alerting for any job that exhausts its retry budget. A job silently sitting in a dead-letter queue is operationally invisible until someone checks the dashboard manually. Use failure handlers to push critical failures into your existing observability pipeline — your on-call stack should know when background jobs fail, not just when HTTP requests return errors.
Failure Handling and Dead-Letter Queues
How you handle exhausted retries is as important as how you configure retries. All three libraries support retry with configurable backoff, but they differ in what happens when a job exhausts its retry budget.
BullMQ stores failed jobs in a "failed" Redis state where they remain visible in Bull Board until manually retried or deleted. The failedReason and stacktrace fields on the job object let you query failures programmatically. The practical recommendation is to implement failure event handlers that push final failures into your existing alerting pipeline:
const worker = new Worker('email', processor, { connection });
worker.on('failed', (job, err) => {
if (job && job.attemptsMade >= (job.opts.attempts ?? 1)) {
// Job exhausted all retries — route to on-call alerting
alerting.notify(`Job ${job.id} failed after ${job.attemptsMade} attempts: ${err.message}`);
}
});
Inngest supports failure functions: you define a function that triggers when another function's run fails, enabling compensating actions, audit logging, or notifications without changing the original function's code:
export const onSignupFailed = inngest.createFunction(
{ id: 'on-signup-failed' },
{ event: 'inngest/function.failed', if: 'event.data.function_id == "process-signup"' },
async ({ event }) => {
await alerting.notify(`Signup processing failed for run ${event.data.run_id}`);
}
);
Trigger.dev supports onFailure handlers at the task level for the same pattern. For all three libraries, connecting final failures to your on-call alerting (Slack, PagerDuty, error tracking) is not optional in production: background jobs that fail silently accumulate until a user complains, by which point the issue is usually hours old.
Idempotency Design for Background Jobs
All three systems have retry mechanisms, which means your job handlers will sometimes execute more than once for the same job — network failures, worker crashes, and timeout retries all trigger re-execution. Designing job handlers to be idempotent (safe to run multiple times with the same input) is not optional in production. The standard pattern is to use the job's unique ID as an idempotency key: before performing a side effect (sending an email, charging a card, writing a database record), check whether a record with that job ID already exists. BullMQ provides a stable job.id that persists through retries. Inngest's step execution model provides built-in idempotency at the step level — each step.run() is only executed once per run, with the result memoized in Inngest's storage even across retries. Trigger.dev's task runs have a stable handle.id that you can store before the task executes and use as an idempotency key in your handler.
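A minimal sketch of the claim-then-act pattern, with in-memory stand-ins for the database and email client (table and helper names are illustrative):

```typescript
// In-memory stand-ins for a real database and email client
const processed = new Set<string>();
const db = {
  processedJobs: {
    // Returns true only on the first insert of a given jobId; in a real
    // database a unique constraint on jobId makes this check atomic.
    insertIfAbsent: async ({ jobId }: { jobId: string }) => {
      if (processed.has(jobId)) return false;
      processed.add(jobId);
      return true;
    },
  },
};
const sent: string[] = [];
const emailClient = {
  send: async ({ invoiceId }: { template: string; invoiceId: string }) => {
    sent.push(invoiceId);
  },
};

async function sendInvoiceEmail(jobId: string, invoiceId: string) {
  // 1. Claim the job ID before doing anything with side effects
  const claimed = await db.processedJobs.insertIfAbsent({ jobId });
  if (!claimed) return; // duplicate delivery: a previous attempt handled it
  // 2. Perform the side effect only after the claim succeeds
  await emailClient.send({ template: 'invoice', invoiceId });
}

// A retried delivery of the same job sends exactly one email
await sendInvoiceEmail('job-42', 'inv-1');
await sendInvoiceEmail('job-42', 'inv-1');
console.log(sent.length); // → 1
```

Claiming the key before acting prevents duplicates, but if the process crashes between the claim and the send, the effect is lost; for critical side effects, wrap both in one transaction or use provider-side idempotency keys where the API supports them (Stripe, for example).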
Compare download trends on PkgPulse.
See also: Best Node.js Background Job 2026, Motia: #1 Backend in JS Rising Stars 2025, and Hatchet vs Trigger.dev v3 vs Inngest.