
Guide

Best Node.js Background Job Libraries in 2026

Three approaches to background jobs in Node.js — BullMQ (Redis queues), Inngest (serverless durable functions), and Trigger.dev (cloud-hosted job platform).

PkgPulse Team

The Background Job Problem in Modern Node.js

Background jobs are the infrastructure that handles everything your HTTP request handlers can't. When a user signs up, you shouldn't make them wait while you send a welcome email, generate a PDF, provision their account on external services, and log an analytics event. When a payment webhook arrives, you need to process it reliably even if your database is briefly unavailable. When you need to send 10,000 emails to a newsletter list, you need to do it in batches that respect rate limits, with retries for failures, without timing out your server.

The core challenge is reliability. A simple setTimeout or setImmediate is fine for fire-and-forget tasks in a single process, but it doesn't survive server restarts, doesn't retry on failure, doesn't handle rate limiting, and doesn't give you any visibility into what's running. Production background job systems need at-least-once delivery guarantees, configurable retry behavior with backoff, observability into job status, and either state persistence (so jobs survive restarts) or external orchestration.
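To make "configurable retry behavior with backoff" concrete, here is a minimal sketch of the exponential-backoff schedule queue libraries compute between retries. The function name and shape are ours for illustration; BullMQ's `{ type: 'exponential', delay }` option follows the same doubling pattern.

```typescript
// Illustrative only: the delay before retry attempt n under exponential
// backoff with a base delay. Attempt 1 waits the base delay, attempt 2
// waits twice that, attempt 3 four times, and so on.
function backoffDelayMs(attempt: number, baseDelayMs: number): number {
  if (attempt < 1) throw new Error('attempts are 1-indexed');
  return baseDelayMs * 2 ** (attempt - 1);
}

// With a 2s base delay: attempt 1 → 2s, attempt 2 → 4s, attempt 3 → 8s.
```

Exponential backoff matters because most transient failures (rate limits, brief outages) resolve on their own; doubling the wait avoids hammering a struggling downstream service.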

What makes the 2026 landscape interesting is that serverless deployment changed the requirements. If your app runs on Vercel, you can't maintain a persistent worker process or a BullMQ connection to Redis — your functions spin up and down on demand. Inngest and Trigger.dev were built specifically to solve this: they handle durability externally, letting your serverless functions act as job handlers that get invoked over HTTP. This architectural difference is the key axis for choosing between the three libraries.

The good news is that all three options are mature and production-ready in 2026. The npm download numbers reflect genuine adoption: BullMQ's 1M weekly downloads come from years of being the default Redis queue for Node.js, while Inngest and Trigger.dev's fast growth reflects real migration of teams from Redis-based queues to managed alternatives. There's no wrong answer here — the right choice depends on your deployment environment, operational preferences, and the complexity of your job workflows.

TL;DR

It depends on your infrastructure. BullMQ is the Redis-based standard — battle-tested, self-hosted, millions of production deployments, but requires a Redis server. Inngest runs on your existing serverless infrastructure (no Redis) — define functions that run reliably even on Vercel Edge. Trigger.dev is a fully managed cloud platform where jobs are durable, retryable, and observable without running your own infrastructure. For Vercel/serverless: Inngest. For self-hosted Node.js: BullMQ. For cloud-first with great observability: Trigger.dev.

Key Takeaways

  • BullMQ: 1M downloads/week, Redis-backed, producer-consumer pattern, mature ecosystem
  • Inngest: 300K downloads/week, serverless-native, no Redis needed, step.run() for durability
  • Trigger.dev: 150K downloads/week, fully managed, excellent dashboard, free tier
  • Vercel deployments: Inngest or Trigger.dev (no long-running processes on serverless)
  • Self-hosted Node.js: BullMQ wins (cheapest, most control)
  • Cost at scale: BullMQ is cheapest at high volume (a flat ~$20/mo Redis instance) vs. usage-based pricing from Inngest (~$10/mo and up) ≈ Trigger.dev (~$10/mo and up)

Downloads

Package            Weekly Downloads   Trend
bullmq             ~1M                ↑ Growing
inngest            ~300K              ↑ Fast growing
@trigger.dev/sdk   ~150K              ↑ Fast growing

BullMQ: The Redis Standard

BullMQ is the mature, battle-tested choice for self-hosted Node.js applications. It's the successor to the original Bull library, rebuilt with TypeScript and a cleaner API. The architecture is the classic producer-consumer pattern backed by Redis: producers add jobs to named queues, workers pull jobs from those queues and execute them, and Redis holds the job state, payloads, and completion/failure records persistently.

The Redis dependency is both BullMQ's greatest strength and its main limitation. Redis provides atomic queue operations, meaning BullMQ has strong at-least-once delivery guarantees — a job won't be lost if a worker crashes mid-execution, because Redis keeps the job in an "active" state until the worker acknowledges completion. But Redis also means you need a Redis instance: in production, that's a managed service (Railway, Upstash, ElastiCache) or a self-hosted server, with associated cost and operational overhead.
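The "active until acknowledged" mechanics can be illustrated with a toy in-memory model — this is NOT BullMQ's implementation (BullMQ does the equivalent atomically in Redis via Lua scripts), just a sketch of why an ack step makes delivery at-least-once:

```typescript
// Toy at-least-once queue: a taken job stays in `active` until the worker
// acks it, so jobs held by a crashed worker can be recovered, not lost.
class ToyQueue<T> {
  private waiting: T[] = [];
  private active = new Map<number, T>();
  private nextId = 0;

  add(job: T): void {
    this.waiting.push(job);
  }

  // Move a job from waiting → active; it is NOT yet considered done.
  take(): { id: number; job: T } | undefined {
    const job = this.waiting.shift();
    if (job === undefined) return undefined;
    const id = this.nextId++;
    this.active.set(id, job);
    return { id, job };
  }

  // Worker finished successfully → drop the job for good.
  ack(id: number): void {
    this.active.delete(id);
  }

  // Simulate crash recovery: requeue everything a dead worker left active.
  recoverStalled(): number {
    let recovered = 0;
    for (const [id, job] of Array.from(this.active.entries())) {
      this.active.delete(id);
      this.waiting.push(job);
      recovered++;
    }
    return recovered;
  }
}
```

Note the trade-off this implies: a job recovered after a crash may have partially executed before the worker died, which is why at-least-once systems push you toward idempotent job handlers.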

BullMQ's ecosystem depth is unmatched. BullBoard provides a React-based admin UI for monitoring queues, viewing job details, and retrying failed jobs. The library supports parent-child job dependencies, rate limiting per queue, concurrency control per worker, and flow-based job orchestration. For high-volume scenarios — millions of jobs per day — BullMQ is the right tool: it's been load-tested at scale and the community has solutions for every operational pattern you'll encounter.

npm install bullmq ioredis
# Requires Redis: docker run -d -p 6379:6379 redis:alpine
// Queue producer:
import { Queue } from 'bullmq';
import { Redis } from 'ioredis';

const connection = new Redis(process.env.REDIS_URL!, {
  maxRetriesPerRequest: null,  // required by BullMQ workers
});

export const emailQueue = new Queue('email', { connection });
export const reportQueue = new Queue('reports', { connection });

// Add a job:
await emailQueue.add('welcome-email', {
  userId: user.id,
  email: user.email,
  name: user.name,
}, {
  attempts: 3,
  backoff: { type: 'exponential', delay: 2000 },
  removeOnComplete: 100,     // Keep last 100 completed jobs
  removeOnFail: 200,         // Keep last 200 failed jobs
});
// Worker (separate process or same process):
import { Worker } from 'bullmq';

const emailWorker = new Worker('email', async (job) => {
  const { userId, email, name } = job.data;
  
  console.log(`Processing job ${job.id}: ${job.name}`);
  
  await job.updateProgress(10);
  
  switch (job.name) {
    case 'welcome-email':
      await sendWelcomeEmail({ email, name });
      break;
    case 'password-reset':
      await sendPasswordReset({ email, token: job.data.token });
      break;
    default:
      throw new Error(`Unknown job type: ${job.name}`);
  }
  
  await job.updateProgress(100);
  return { sent: true, timestamp: new Date() };
  
}, {
  connection,
  concurrency: 5,  // Process 5 jobs simultaneously
  limiter: {
    max: 10,      // Max 10 jobs per...
    duration: 60000,  // ...minute (rate limiting)
  },
});

emailWorker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} failed: ${err.message}`);
});

BullMQ Scheduling

// Note: modern BullMQ (v2+) no longer needs a separate QueueScheduler —
// Workers handle delayed and repeatable jobs directly.

// Recurring jobs:
await reportQueue.add('weekly-report', { type: 'weekly' }, {
  repeat: {
    pattern: '0 9 * * 1',  // Every Monday 9am (cron)
  },
});

// Delayed job (send in 30 minutes):
await emailQueue.add('follow-up', { userId }, {
  delay: 30 * 60 * 1000,
});

Inngest: Serverless-First

Inngest solves the serverless background job problem with a fundamentally different architecture. Instead of a queue that your workers poll, Inngest acts as an event broker and orchestrator that calls your functions over HTTP when it needs them to run. Your application registers functions with Inngest by exposing an HTTP endpoint (the serve() handler), and Inngest calls that endpoint to run individual steps of your functions. Each step's result is checkpointed in Inngest's infrastructure, so if your function is interrupted or fails, Inngest resumes it from the last successful step.

This step-based durability is Inngest's signature feature. The step.run() wrapper makes a function step durable: if it succeeds, the result is stored and the step won't re-execute on retry. If it fails, only that step retries. This is dramatically more efficient and correct than retrying an entire job from scratch, especially for multi-step workflows that call external APIs. The step.sleep() primitive lets your function genuinely wait days or weeks without holding any resources — Inngest simply pauses the function and re-invokes it when the wait time expires.
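The memoization idea behind step-level durability can be sketched in a few lines. This is NOT Inngest's implementation — real steps are async and checkpointed in Inngest's infrastructure — but a synchronous toy showing why completed steps don't re-execute on replay:

```typescript
// Toy checkpoint store: a completed step's result is saved under its name,
// so when the whole function is replayed after a failure, finished steps
// return their stored result instead of re-running their side effects.
type Checkpoints = Map<string, unknown>;

function runStep<T>(checkpoints: Checkpoints, name: string, fn: () => T): T {
  if (checkpoints.has(name)) {
    return checkpoints.get(name) as T; // replay: skip the side effect
  }
  const result = fn();            // first run: execute the step...
  checkpoints.set(name, result);  // ...and checkpoint its result
  return result;
}
```

This is also why each step needs a unique, stable name: the name is the key the stored result is looked up under during a replay.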

For Vercel, Netlify, and other serverless platforms, Inngest is often the best-fit solution. There's no Redis to provision, no worker process to keep running, and no infrastructure to manage. The free tier (3 million events per month) is sufficient for most early-stage applications, and the local development experience uses the inngest-cli dev server to simulate the full orchestration layer on your machine.

npm install inngest

# Local dev server (simulates Inngest cloud):
npx inngest-cli@latest dev
// lib/inngest.ts:
import { Inngest } from 'inngest';

export const inngest = new Inngest({ id: 'my-saas' });
// Define durable functions — each `step.run` is retried independently:
import { inngest } from '@/lib/inngest';

export const processSignup = inngest.createFunction(
  { id: 'process-signup' },
  { event: 'user/signed-up' },
  
  async ({ event, step }) => {
    const { userId, email, name } = event.data;
    
    // Each step is retried independently if it fails:
    const user = await step.run('create-user-profile', async () => {
      return await db.profile.create({ data: { userId, bio: '' } });
    });
    
    await step.run('send-welcome-email', async () => {
      return await sendWelcomeEmail({ email, name });
    });
    
    // Wait then send follow-up (no cron needed):
    await step.sleep('wait-7-days', '7d');
    
    await step.run('send-onboarding-email', async () => {
      const userData = await db.user.findUnique({ where: { id: userId } });
      if (!userData?.hasCompletedSetup) {
        await sendOnboardingEmail({ email, name });
      }
    });
    
    return { userId, emailsSent: 2 };
  }
);

// Fan-out pattern — run multiple things in parallel:
export const processOrderAsync = inngest.createFunction(
  { id: 'process-order' },
  { event: 'order/placed' },
  
  async ({ event, step }) => {
    const [shipment, invoice, notification] = await Promise.all([
      step.run('create-shipment', () => createShipment(event.data.orderId)),
      step.run('generate-invoice', () => generateInvoice(event.data.orderId)),
      step.run('notify-customer', () => sendOrderConfirmation(event.data)),
    ]);
    
    return { shipment, invoice, notification };
  }
);
// Serve Inngest in Next.js App Router:
// app/api/inngest/route.ts:
import { serve } from 'inngest/next';
import { inngest } from '@/lib/inngest';
import { processSignup, processOrderAsync } from '@/lib/functions';

export const { GET, POST, PUT } = serve({
  client: inngest,
  functions: [processSignup, processOrderAsync],
});

// Trigger an event from anywhere:
await inngest.send({ name: 'user/signed-up', data: { userId, email, name } });

Trigger.dev: Cloud-Managed Jobs

Trigger.dev positions itself between BullMQ's self-hosted control and Inngest's event-centric model. Like Inngest, it handles durability and orchestration externally rather than inside your process. Unlike Inngest, its v3 SDK runs tasks in long-lived workers (deployed to Trigger.dev's infrastructure rather than invoked as short-lived serverless functions), which means it can handle tasks that run for minutes without the timeout constraints of serverless platforms. This makes Trigger.dev a strong choice for AI workloads, video processing, PDF generation, and other CPU- or I/O-intensive tasks that don't fit in a 10-second serverless execution window.

The dashboard is Trigger.dev's standout differentiator. It provides real-time visibility into every task run: the payload it received, each step's output, execution timeline, logs, and retry history. For teams where debugging job failures is a recurring pain point, this visibility is worth significant developer time savings. The dashboard is available on Trigger.dev's cloud, and for enterprise users it can be self-hosted.

The v3 SDK is a significant rewrite from earlier versions, with a cleaner API that's closer to Inngest's step-based model. The wait.for() primitive handles time delays similarly to Inngest's step.sleep(). Batch triggering allows submitting hundreds of job payloads in a single API call, which is important for fan-out scenarios like processing uploaded data files. Trigger.dev also provides a managed compute environment (Trigger.dev Cloud workers) so you don't need to host worker processes yourself — your task code is deployed to Trigger.dev's infrastructure, not just coordinated by it. This makes it the most fully managed option of the three.
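Batch fan-out usually means slicing a large payload list into fixed-size batches before submitting them. A generic helper sketch — this function is ours, not part of the Trigger.dev SDK, and the per-call cap is an assumed example rather than a documented limit:

```typescript
// Generic helper (not from any SDK): split a payload list into fixed-size
// batches, e.g. before calling batchTrigger once per batch.
function chunk<T>(items: T[], size: number): T[][] {
  if (size < 1) throw new Error('batch size must be >= 1');
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

For a 10,000-row upload, `chunk(rows, 100)` yields 100 batches you can submit sequentially, keeping each API call bounded.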

npm install @trigger.dev/sdk @trigger.dev/nextjs
npx trigger.dev@latest init  # Sets up project on Trigger.dev cloud
// trigger/process-signup.ts:
import { task, wait } from '@trigger.dev/sdk/v3';

export const processSignupTask = task({
  id: 'process-signup',
  maxDuration: 300,  // 5 minutes max
  
  run: async (payload: { userId: string; email: string; name: string }) => {
    const { userId, email, name } = payload;
    
    // Create profile:
    await db.profile.create({ data: { userId, bio: '' } });
    
    // Send welcome email:
    await sendWelcomeEmail({ email, name });
    
    // Wait 7 days then send follow-up:
    await wait.for({ days: 7 });
    
    const userData = await db.user.findUnique({ where: { id: userId } });
    if (!userData?.hasCompletedSetup) {
      await sendOnboardingEmail({ email, name });
    }
    
    return { emailsSent: 2 };
  },
});

// Trigger from anywhere:
await processSignupTask.trigger({ userId, email, name });

// Batch trigger:
await processSignupTask.batchTrigger([
  { payload: { userId: '1', email: 'a@example.com', name: 'Alice' } },
  { payload: { userId: '2', email: 'b@example.com', name: 'Bob' } },
]);

Trigger.dev Dashboard

The standout feature: a real-time job dashboard showing every run, its status, inputs, outputs, logs, and retry history. Available on Trigger.dev cloud or self-hosted.


Migrating Between Libraries

A common path is starting with BullMQ on a self-hosted server, then needing to move to Inngest or Trigger.dev when migrating to a serverless deployment. The conceptual migration is straightforward, but it requires rethinking how you structure your jobs.

BullMQ's producer-consumer model maps to Inngest's event model: instead of queue.add('job-name', payload), you call inngest.send({ name: 'event-name', data: payload }). Instead of a Worker that processes jobs, you define functions with inngest.createFunction(). The main structural change is adopting step-based organization: the code that was in your BullMQ worker handler gets split into discrete step.run() calls, each with a unique name. This feels like more ceremony initially, but provides the step-level retry behavior that makes Inngest work correctly on serverless platforms.
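The producer-side mapping can be captured in a small adapter. Everything here is hypothetical — the type names and the `queue/jobName` event-naming convention are ours, not something either SDK prescribes:

```typescript
// Hypothetical adapter: a BullMQ-style job submission maps onto an
// Inngest-style event payload. Names and the event-naming convention
// are illustrative, not part of either library.
interface BullStyleJob {
  queue: string;                  // e.g. 'email'
  name: string;                   // e.g. 'welcome-email'
  data: Record<string, unknown>;
}

interface InngestStyleEvent {
  name: string;                   // e.g. 'email/welcome-email'
  data: Record<string, unknown>;
}

function toInngestEvent(job: BullStyleJob): InngestStyleEvent {
  // queue.add('welcome-email', payload)
  //   → inngest.send({ name: 'email/welcome-email', data: payload })
  return { name: `${job.queue}/${job.name}`, data: job.data };
}
```

The worker-side migration is the larger change: the body of your BullMQ handler gets split into named `step.run()` calls rather than translated mechanically.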

For teams moving from a simpler queue setup (like bull, BullMQ's predecessor, or bee-queue) to BullMQ, the migration is largely mechanical — the BullMQ API is intentionally similar to Bull's, with TypeScript types and some API cleanup. Most Bull code runs with minor modifications after installing BullMQ.

Managing Redis Costs at Scale

If you choose BullMQ, Redis cost management matters at higher job volumes. The default BullMQ configuration keeps job records indefinitely, which can consume significant Redis memory over time. The removeOnComplete and removeOnFail options control retention — keeping the last 100-200 completed jobs gives you enough visibility for debugging without unbounded storage growth.
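Beyond plain counts, BullMQ's retention options also accept an object combining an age (in seconds) with a count. A sketch of queue-level defaults — the specific values are illustrative, not recommendations:

```typescript
// Illustrative retention defaults. BullMQ's removeOnComplete/removeOnFail
// accept a boolean, a count, or { age, count } where age is in seconds.
const defaultJobOptions = {
  removeOnComplete: { age: 24 * 3600, count: 1000 }, // prune after 24h or 1000 jobs
  removeOnFail: { age: 7 * 24 * 3600 },              // keep failures for 7 days of debugging
};
```

Passing this object as `defaultJobOptions` when constructing a Queue applies the policy to every job added to it, so individual `add()` calls don't need to repeat it.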

Upstash is particularly popular for BullMQ in lower-volume applications: it's a serverless Redis with a per-request pricing model, free for the first 10,000 requests per day. For higher-volume scenarios (millions of jobs per day), a dedicated Redis instance on Railway or a managed ElastiCache cluster typically provides better performance at lower cost than per-request pricing.

For Inngest and Trigger.dev, pricing is based on event/run volume rather than infrastructure. The math typically favors self-hosted Redis via BullMQ at very high volumes (millions of jobs per day), while Inngest and Trigger.dev's managed pricing is competitive at lower and mid volumes where the operational simplicity has meaningful value.

Comparison Table

                        BullMQ              Inngest                    Trigger.dev
Infrastructure          Redis required      Serverless (your server)   Cloud managed
Vercel compatible       ❌ (needs Redis)    ✅                         ✅
Self-hosted             ✅                  ✅ (partial)               ✅ (paid)
Step-based durability   Manual              step.run()                 wait.for()
Dashboard               BullBoard (DIY)     ✅                         ✅ (excellent)
Scheduling              ✅ (cron)           ✅ (cron)                  ✅ (cron)
Free tier               Self-hosted         3M events/mo               50K runs/mo
Paid pricing            ~$20/mo (Redis)     $10/mo                     $10/mo
Maturity                High (2021)         Medium                     Growing

Error Handling and Observability in Production

How you handle failures is as important as how you process jobs. All three libraries support retry with configurable backoff, but they differ in how they surface failures and what happens when a job exhausts its retry budget.

BullMQ stores failed jobs in a "failed" state in Redis, where they remain visible in BullBoard until you manually retry or remove them. You control the retention policy with removeOnFail — keeping the last N failed jobs gives you a debugging window without unbounded storage growth. The failedReason and stacktrace fields on the job object let you query failures programmatically.

Inngest's event-driven architecture means failed functions appear in the Inngest dashboard with full step-level detail. When a step fails, Inngest marks the function's run as "failed" and surfaces the error and the step that failed. For dead-letter-queue equivalents, Inngest supports failure handlers: you can define a function that triggers on another function's failure, letting you send alerts, log to an external system, or create compensating actions.

Trigger.dev similarly surfaces failures in its dashboard with full run history. The v3 SDK supports onFailure handlers at the task level. For integration with external alerting (PagerDuty, Slack, error tracking services), each library either has native integrations or can call external APIs from failure handlers.

The practical recommendation: set up alerting for any job that exhausts its retry budget. A job that's silently sitting in a dead-letter queue is operationally invisible until someone checks the dashboard manually. Use failure handlers to push critical failures into your existing observability pipeline — your existing on-call stack should know when background jobs fail in production, not just when HTTP requests return errors.
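One way to implement "alert only on exhausted retries" is a small gating predicate in your failure handler. The field names below mirror BullMQ's `job.attemptsMade` and `job.opts.attempts`, but the helper itself is ours, not part of any SDK:

```typescript
// Sketch: only page on-call once a job's retry budget is exhausted —
// failures that will still be retried are noise, not incidents.
interface FailureInfo {
  attemptsMade: number;  // retries already consumed
  maxAttempts: number;   // configured retry budget
}

function isFinalFailure(f: FailureInfo): boolean {
  return f.attemptsMade >= f.maxAttempts;
}
```

In a BullMQ `failed` handler, you would call this with the job's own fields (defaulting the budget to 1 when `attempts` isn't set) and route final failures to Slack, PagerDuty, or your error tracker.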

Decision Guide

Use BullMQ if:
  → Self-hosted Node.js server (not serverless)
  → Already have Redis
  → Maximum control and portability
  → High-volume queuing (millions of jobs/day)
  → Cost-sensitive at scale

Use Inngest if:
  → Vercel or serverless deployment
  → Long-running workflows (sleep, wait)
  → Fan-out/orchestration patterns
  → Want step-level retry without Redis

Use Trigger.dev if:
  → Want best-in-class observability dashboard
  → Team needs job history and debugging
  → Managed infrastructure (no Redis to maintain)
  → Starting fresh with complex async workflows

Compare BullMQ, Inngest, and Trigger.dev on PkgPulse.

See also: BullMQ vs Inngest vs Trigger.dev 2026: Node.js Jobs Compared and tsx vs ts-node vs Bun: Running TypeScript Directly 2026, Model Context Protocol (MCP) Libraries for Node.js 2026.
