Hatchet vs Trigger.dev v3 vs Inngest: Durable Workflows in Node.js 2026
TL;DR
For durable background workflows in Node.js, Inngest wins for teams wanting the fastest zero-config cloud integration, Trigger.dev v3 wins for open-source self-hosting with the best DX, and Hatchet wins for complex AI task orchestration with fine-grained concurrency control. All three are dramatically simpler than running your own BullMQ + Redis infrastructure for serverless or edge-heavy workloads.
Key Takeaways
- Trigger.dev v3 uses Bun-based workers that can run jobs for hours without cold-start penalties — solves the fundamental serverless timeout problem
- Inngest has the simplest integration story (works in any serverless function with one import) and first-class Vercel/Netlify support
- Hatchet was purpose-built for AI pipelines: supports DAG-based task graphs, streaming step outputs, and priority lanes for different job classes
- BullMQ/Redis is still the right choice for high-volume simple queues (1M+ jobs/day) where self-hosted infrastructure is not a constraint
- All three offer free tiers, but production pricing diverges significantly above 100K executions/month
- Self-hosting: Trigger.dev v3 has the most mature self-hosted path; Hatchet and Inngest are cloud-first
The Problem with Traditional Job Queues in 2026
BullMQ and Redis have served the Node.js community well. For high-volume, simple background jobs (sending emails, resizing images, updating search indexes), they remain the right tool. But a new category of workloads has emerged that traditional queues handle poorly:
Long-running AI tasks: Generating a 10,000-word document with GPT-4o, running a multi-step code analysis pipeline, or processing a PDF through several ML models can take minutes — far beyond serverless function limits.
Multi-step workflows with dependencies: A job that fetches data, transforms it, calls three APIs, waits for a webhook, then sends a notification isn't a "job" — it's a workflow. Encoding these as separate BullMQ queues with manual dependency management is fragile and hard to observe.
Durable execution across restarts: Traditional queues lose in-progress state on worker restart. Durable workflow systems persist state between steps, so a 2-hour job can resume after a deployment or crash.
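To make the durability idea concrete, here is a dependency-free sketch (not any of these vendors' APIs) of step checkpointing: each step persists its result as it completes, so a replay after a crash or deploy skips the steps that already ran.

```typescript
// Toy durable-execution sketch, not a real SDK: each step checkpoints
// its result, so replaying the workflow skips completed steps.
type Checkpoints = Map<string, unknown>;

async function runStep<T>(
  store: Checkpoints,
  id: string,
  fn: () => Promise<T>,
): Promise<T> {
  if (store.has(id)) return store.get(id) as T; // checkpoint hit: skip work
  const result = await fn();
  store.set(id, result); // persist before moving to the next step
  return result;
}

// A two-step "workflow": if the process dies after 'fetch', a replay
// with the same store re-runs only 'double'.
async function workflow(store: Checkpoints, log: string[]): Promise<number> {
  const a = await runStep(store, 'fetch', async () => {
    log.push('fetch');
    return 21;
  });
  return runStep(store, 'double', async () => {
    log.push('double');
    return a * 2;
  });
}
```

A real system writes the checkpoints to a database rather than an in-memory map, which is exactly what lets a 2-hour job survive a deployment.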
Hatchet, Trigger.dev v3, and Inngest all target this new category.
Trigger.dev v3: Open-Source Durable Jobs with Bun Workers
npm: @trigger.dev/sdk | weekly downloads: ~45K | latest: 3.x | self-hostable: ✅
Trigger.dev v3 (released late 2024) is a major architectural rewrite. The killer feature: tasks run in long-lived Bun workers, not serverless functions. This means:
- No cold starts after the first call
- Tasks can run for hours without timeouts
- Pauses between steps persist state to durable storage, so tasks resume where they left off
npm install @trigger.dev/sdk@v3
Defining a task:
// trigger/process-document.ts
import { task, wait } from '@trigger.dev/sdk/v3'
import { analyzeDocument, summarizeChunks, generateReport } from '../lib/ai'
export const processDocumentTask = task({
id: 'process-document',
maxDuration: 3600, // 1 hour max (no serverless timeout!)
retry: {
maxAttempts: 3,
minTimeoutInMs: 1000,
factor: 2,
},
run: async (payload: { documentUrl: string; userId: string }) => {
// Step 1: Analyze document
const analysis = await analyzeDocument(payload.documentUrl)
// Step 2: Wait can be triggered externally (webhook pattern)
const humanApproval = await wait.forEvent('document-approved', {
timeout: { hours: 24 },
filter: { documentId: analysis.id },
})
// Step 3: Generate report (can take minutes — no timeout issue)
const report = await generateReport(analysis, humanApproval.data)
return { reportId: report.id, tokensUsed: report.tokenCount }
},
})
Triggering from your API:
import { processDocumentTask } from './trigger/process-document'
// In your Next.js API route or Express handler
const handle = await processDocumentTask.trigger({
documentUrl: req.body.url,
userId: req.user.id,
})
// Returns immediately — job runs in background
res.json({ jobId: handle.id, status: 'queued' })
Scheduled tasks (cron replacement):
import { schedules } from '@trigger.dev/sdk/v3'
export const dailyReportTask = schedules.task({
id: 'daily-report',
cron: '0 9 * * *', // Every day at 9am UTC
run: async () => {
const report = await generateDailyReport()
await sendSlackNotification(report)
},
})
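The `0 9 * * *` expression above fires once a day at 09:00 UTC. For intuition, here is a small dependency-free helper (illustrative only, not part of the Trigger.dev SDK) that computes the next such instant:

```typescript
// Compute the next 09:00 UTC occurrence, i.e. the instant '0 9 * * *' fires.
function nextNineAmUtc(from: Date): Date {
  const next = new Date(Date.UTC(
    from.getUTCFullYear(), from.getUTCMonth(), from.getUTCDate(),
    9, 0, 0,
  ));
  if (next <= from) next.setUTCDate(next.getUTCDate() + 1); // 9am already passed today
  return next;
}
```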
Self-hosting: Trigger.dev v3 provides a Docker Compose setup for self-hosting. The architecture requires PostgreSQL, Redis, and an S3-compatible object store. That is more moving parts than the competitors, but it is the most mature self-hosted path among the three.
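For a sense of the moving parts, a compose file for such a stack has roughly this shape. This is illustrative only; the image tags and service layout here are placeholders, not Trigger.dev's official compose file, which their self-hosting docs provide:

```yaml
# Illustrative shape of a Trigger.dev v3 self-hosted stack.
# Placeholders only; use the official compose file from the docs.
services:
  webapp:            # dashboard + API + run coordinator
    image: triggerdotdev/trigger.dev   # placeholder tag
    depends_on: [postgres, redis, minio]
  postgres:          # durable state for runs and checkpoints
    image: postgres:16
  redis:             # queues and coordination
    image: redis:7
  minio:             # S3-compatible object store for payloads/outputs
    image: minio/minio
```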
Pricing: Free tier includes 50K task runs/month. Production: ~$30/month for 500K runs, with predictable per-run pricing above that.
Inngest: Zero-Config Serverless Workflows
npm: inngest | weekly downloads: ~85K | latest: 3.x | self-hostable: ⚠️ (cloud-first)
Inngest is the most popular of the three by npm downloads, largely because of its frictionless integration story. You can add it to an existing Next.js app in under 5 minutes without any infrastructure changes.
npm install inngest
// inngest/client.ts
import { Inngest } from 'inngest'
export const inngest = new Inngest({ id: 'my-app' })
// inngest/functions/send-welcome-email.ts
import { inngest } from '../client'
export const sendWelcomeEmail = inngest.createFunction(
{
id: 'send-welcome-email',
retries: 3,
throttle: {
count: 100,
period: '1m', // max 100 calls per minute
},
},
{ event: 'user/created' },
async ({ event, step }) => {
// Steps are durable checkpoints — each runs independently with retries
const user = await step.run('fetch-user', async () => {
return await db.users.findUnique({ where: { id: event.data.userId } })
})
await step.run('send-email', async () => {
return await resend.emails.send({
to: user.email,
subject: 'Welcome!',
react: WelcomeEmail({ name: user.name }),
})
})
// Wait for user to confirm email (webhook trigger)
await step.waitForEvent('wait-for-confirmation', {
event: 'user/email-confirmed',
match: 'data.userId',
timeout: '7d',
})
await step.run('activate-account', async () => {
await db.users.update({
where: { id: event.data.userId },
data: { status: 'active' }
})
})
}
)
Inngest's step.run() primitive is the key abstraction. Each step is independently retried, and the function pauses durably between steps. If your serverless function gets cut off mid-execution, Inngest replays from the last successful step checkpoint.
Next.js integration (the simplest possible setup):
// app/api/inngest/route.ts
import { serve } from 'inngest/next'
import { inngest } from '@/inngest/client'
import { sendWelcomeEmail } from '@/inngest/functions/send-welcome-email'
export const { GET, POST, PUT } = serve({
client: inngest,
functions: [sendWelcomeEmail],
})
That's it. Inngest's cloud invokes your endpoint over HTTP as events arrive, replaying each function step by step. No Redis, no separate worker process, no infrastructure.
Sending events:
import { inngest } from '@/inngest/client'
// Trigger after user creation in your Next.js Server Action
await inngest.send({
name: 'user/created',
data: { userId: newUser.id, plan: 'free' }
})
Fan-out pattern (useful for batch AI processing):
export const processDocumentBatch = inngest.createFunction(
{ id: 'process-document-batch' },
{ event: 'documents/batch-submitted' },
async ({ event, step }) => {
// Fan out to individual document processing functions
const jobs = await step.run('create-individual-jobs', async () => {
return await Promise.all(
event.data.documentIds.map(id =>
inngest.send({ name: 'document/process', data: { id } })
)
)
})
return { queued: jobs.length }
}
)
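Each `document/process` event is then handled by its own function run (the consumer is not shown above). Stripped of the SDK, the fan-out pattern boils down to bounded parallel dispatch; a dependency-free sketch of that idea, using a hypothetical `fanOut` helper:

```typescript
// Hypothetical helper, not the Inngest API: run one worker call per item,
// with at most `limit` calls in flight at once.
async function fanOut<T, R>(
  items: T[],
  worker: (item: T) => Promise<R>,
  limit: number,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each "lane" pulls the next unclaimed item until none remain.
  async function lane(): Promise<void> {
    while (next < items.length) {
      const i = next++; // single-threaded event loop: no race between check and claim
      results[i] = await worker(items[i]);
    }
  }
  const lanes = Math.min(limit, items.length);
  await Promise.all(Array.from({ length: lanes }, lane));
  return results;
}
```

The managed platforms add what this sketch omits: per-item retries, durable progress tracking, and observability for each fanned-out run.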
Pricing: Free tier: 50K function runs/month. Pro: $25/month for 500K runs. Less predictable than Trigger.dev's per-run model above the Pro tier.
Hatchet: Purpose-Built for AI Task Orchestration
npm: @hatchet-run/sdk | weekly downloads: ~8K | latest: 0.x | self-hostable: ✅
Hatchet is the newest and most AI-focused of the three. Its core design philosophy is different: rather than thinking in terms of "jobs" or "functions," Hatchet models work as directed acyclic graphs (DAGs) where tasks have explicit dependencies and can pass outputs between steps.
npm install @hatchet-run/sdk dotenv
// hatchet-client.ts
import Hatchet from '@hatchet-run/sdk'
export const hatchet = await Hatchet.init()
// workflows/ai-document-pipeline.ts
import { hatchet } from '../hatchet-client'
const workflow = hatchet.workflow({
name: 'ai-document-pipeline',
on: { event: 'document:uploaded' },
})
// Step 1: Extract text
const extractText = workflow.step('extract-text', async (ctx) => {
const { documentUrl } = ctx.workflowInput()
const text = await extractTextFromPDF(documentUrl)
return { text, charCount: text.length }
})
// Step 2: Analyze sentiment (depends on step 1)
const analyzeSentiment = workflow.step('analyze-sentiment', {
parents: [extractText], // explicit dependency
}, async (ctx) => {
const { text } = ctx.stepOutput(extractText)
const sentiment = await callSentimentAPI(text)
return { sentiment }
})
// Step 3: Generate summary (depends on step 1, parallel with step 2)
const generateSummary = workflow.step('generate-summary', {
parents: [extractText],
}, async (ctx) => {
const { text } = ctx.stepOutput(extractText)
const summary = await openai.chat.completions.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: `Summarize: ${text}` }],
})
return { summary: summary.choices[0].message.content }
})
// Step 4: Final report (depends on both step 2 and 3)
const generateReport = workflow.step('generate-report', {
parents: [analyzeSentiment, generateSummary],
}, async (ctx) => {
const { sentiment } = ctx.stepOutput(analyzeSentiment)
const { summary } = ctx.stepOutput(generateSummary)
return { report: buildFinalReport(sentiment, summary) }
})
The DAG model means steps 2 and 3 run in parallel automatically, then step 4 waits for both. Hatchet handles the dependency resolution, parallelism, and retry semantics.
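Hatchet's runtime does this resolution for you. As a mental model (a toy sketch, not Hatchet's internals), a DAG runner only needs each step to await its parents' promises; steps are declared after their parents, as in the workflow above:

```typescript
// Toy DAG runner, not Hatchet's scheduler: a step starts as soon as all
// of its parents have finished, so independent branches run in parallel.
// Steps must appear after their parents in the input list.
type Step = {
  id: string;
  parents: string[];
  run: () => Promise<unknown>;
};

async function runDag(steps: Step[]): Promise<Map<string, unknown>> {
  const pending = new Map<string, Promise<unknown>>();
  for (const step of steps) {
    pending.set(step.id, (async () => {
      // Wait for every parent's promise before running this step.
      await Promise.all(step.parents.map((p) => pending.get(p)!));
      return step.run();
    })());
  }
  const results = new Map<string, unknown>();
  for (const [id, p] of pending) results.set(id, await p);
  return results;
}
```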
Priority lanes are Hatchet's standout feature for AI applications:
// Hatchet supports priority-based scheduling
const highPriorityJob = await hatchet.run('ai-document-pipeline', {
input: { documentUrl: premiumUserDoc },
priority: 10, // 1-10 scale — premium users get priority
})
const lowPriorityJob = await hatchet.run('ai-document-pipeline', {
input: { documentUrl: freeUserDoc },
priority: 1,
})
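Under the hood this only requires that the dispatcher drain pending runs highest-priority-first. A dependency-free sketch of that selection rule (illustrative, not Hatchet's actual scheduler):

```typescript
// Pick and remove the highest-priority pending run; FIFO among ties.
type PendingRun = { priority: number; input: string };

function nextRun(queue: PendingRun[]): PendingRun | undefined {
  if (queue.length === 0) return undefined;
  let best = 0;
  for (let i = 1; i < queue.length; i++) {
    // Strict '>' keeps earlier-enqueued runs first among equal priorities.
    if (queue[i].priority > queue[best].priority) best = i;
  }
  return queue.splice(best, 1)[0];
}
```

So a priority-10 premium run enqueued behind a backlog of priority-1 runs is still dispatched next.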
Streaming step outputs (useful for LLM streaming):
const streamingStep = workflow.step('stream-response', async (ctx) => {
const stream = await openai.chat.completions.create({
model: 'gpt-4o',
stream: true,
messages: [{ role: 'user', content: ctx.workflowInput().prompt }],
})
for await (const chunk of stream) {
// Stream chunks back to the caller in real-time
await ctx.stream(chunk.choices[0]?.delta?.content || '')
}
})
Self-hosting: Hatchet provides a Helm chart for Kubernetes and Docker Compose for simpler setups. The architecture is simpler than Trigger.dev's: just PostgreSQL and RabbitMQ.
Side-by-Side Comparison
| Feature | Trigger.dev v3 | Inngest | Hatchet |
|---|---|---|---|
| npm downloads/week | ~45K | ~85K | ~8K |
| Architecture | Bun workers | Serverless polling | Persistent workers |
| Max job duration | Hours (no limit) | ~15 min (serverless) | Hours (no limit) |
| Setup complexity | Medium | Low | Medium |
| Self-hosting | ✅ Mature | ⚠️ Limited | ✅ Available |
| AI/LLM focus | General-purpose | General-purpose | Purpose-built |
| DAG workflows | Via task chaining | Via step chaining | Native DAG |
| Priority queues | ✅ | ⚠️ (Pro plan) | ✅ Native |
| Streaming outputs | ⚠️ Partial | ❌ | ✅ |
| Free tier | 50K runs/month | 50K runs/month | 10K runs/month |
| Open source | ✅ | ✅ | ✅ |
| Primary use case | Long-running tasks | Serverless workflows | AI pipelines |
Decision Guide
Choose Trigger.dev v3 if:
- You need jobs that run for more than 15 minutes
- You want the best self-hosting story for data residency or cost control
- You're building on Bun or want native Bun worker performance
- You prefer the open-source community and don't want cloud lock-in
Choose Inngest if:
- You're on Vercel, Netlify, or another serverless platform
- Setup time matters — you want to add background jobs in minutes
- Your jobs complete within 15 minutes
- You want the widest framework support (works with Express, Next.js, SvelteKit, Nuxt, Remix)
Choose Hatchet if:
- You're building AI pipelines with complex step dependencies
- You need priority-based scheduling (premium users vs free users)
- You want DAG-based workflow modeling with parallel step execution
- Streaming LLM outputs back to clients is a requirement
Stick with BullMQ + Redis if:
- Job volume exceeds 1M/day (cloud pricing becomes expensive)
- Your jobs are simple (no complex dependencies, < 30 seconds each)
- You already have Redis infrastructure and operations expertise
- You need sub-100ms job latency (managed service overhead is a factor)
Ecosystem and Momentum
All three projects are well-funded and actively developed as of 2026:
- Trigger.dev has 8K+ GitHub stars and raised Series A funding. v3's architectural rewrite has significantly improved production stability.
- Inngest has 6K+ GitHub stars and the most enterprise customers of the three. Strong Vercel partnership drives adoption.
- Hatchet is newer (1K+ GitHub stars) but purpose-aligned with the AI workload trend. Growing faster in AI-focused teams.
Methodology
- npm download data from npmjs.com (March 2026)
- GitHub stars from each project's repository
- Feature comparison from official documentation and changelogs
- Self-hosting documentation from each project's deployment guides
- Pricing from each company's public pricing pages (March 2026)
Building background jobs on Node.js? Compare packages like BullMQ, Inngest, and more on PkgPulse for live npm health scores and download trends.