Inngest vs Trigger.dev v3 vs QStash: Serverless Jobs 2026
TL;DR
Serverless functions time out — but many tasks take minutes or hours. Background job frameworks handle long-running work, retries, and scheduled tasks outside the request lifecycle. Inngest is the event-driven step functions platform — define multi-step workflows in TypeScript, each step is retried independently, local dev server included, works in any serverless environment. Trigger.dev v3 is the TypeScript background jobs framework — deploys workers to their own infrastructure, supports realtime progress updates, and lets you write long-running tasks as plain async TypeScript functions. Upstash QStash is the HTTP-based message queue — the most lightweight option, sends HTTP requests to your endpoints with guaranteed delivery and scheduling, ideal for serverless apps already on Upstash. For complex multi-step workflows with local dev: Inngest. For long-running background tasks that need dedicated worker infrastructure: Trigger.dev. For simple scheduled HTTP calls and queue-based webhooks: QStash.
Key Takeaways
- Inngest has a local dev server — `npx inngest-cli@latest dev` mirrors production locally
- Trigger.dev v3 deploys workers — your code runs in isolated containers, not on your app server
- QStash is HTTP-native — sends POST requests to any URL; no SDK required on the receiver
- Inngest supports steps — each step is independently retried without replaying previous steps
- Trigger.dev supports realtime — subscribe to job progress via the `useRealtimeRun` React hook
- QStash has dead letter queues — failed messages go to a DLQ for inspection
- All three integrate with Next.js App Router — route handlers as endpoints
The Serverless Job Problem
Serverless function limits:
Vercel (Hobby): 10 second timeout
Vercel (Pro): 300 second timeout
Lambda: 15 minute max
Jobs that break these limits:
- Send 1,000 emails (slow SMTP)
- Process large file upload (resize 50 images)
- Generate AI report (LLM call chain)
- Import 10,000 CSV rows into database
- Send webhooks with retries
Solution: Background job frameworks
Inngest / Trigger.dev: Long-running + retries + steps
QStash: Queue-based + scheduled HTTP
BullMQ: Redis-backed (requires always-on server)
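The arithmetic behind the first job above makes these limits concrete. A back-of-envelope sketch, assuming a hypothetical ~300 ms per sequential SMTP round-trip:

```typescript
// Back-of-envelope: will 1,000 sequential emails fit in a serverless window?
// Assumed figure: ~300 ms per SMTP round-trip (hypothetical).
const emails = 1_000;
const msPerSend = 300;
const totalSeconds = (emails * msPerSend) / 1000; // 300 s of work

const hobbyTimeout = 10;   // Vercel Hobby, seconds
const proTimeout = 300;    // Vercel Pro, seconds
const lambdaTimeout = 900; // Lambda max, seconds

const fitsHobby = totalSeconds <= hobbyTimeout;   // false
const fitsPro = totalSeconds <= proTimeout;       // true, with zero headroom
const fitsLambda = totalSeconds <= lambdaTimeout; // true, barely comfortable
```

One slow dependency or one retry pushes the Pro case over the edge, which is exactly why this class of work belongs outside the request lifecycle.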
Inngest: Event-Driven Step Functions
Inngest routes events to functions that run step-by-step. Each step is checkpointed — if a step fails, only that step retries (not the entire function from the start).
Installation
npm install inngest
Basic Setup (Next.js App Router)
// app/api/inngest/route.ts
import { serve } from "inngest/next";
import { inngest } from "@/inngest/client";
import { sendWelcomeEmail } from "@/inngest/functions/send-welcome-email";
import { processUpload } from "@/inngest/functions/process-upload";
import { generateReport } from "@/inngest/functions/generate-report";
export const { GET, POST, PUT } = serve({
client: inngest,
functions: [sendWelcomeEmail, processUpload, generateReport],
});
// inngest/client.ts
import { Inngest } from "inngest";
export const inngest = new Inngest({ id: "my-app" });
Defining Functions with Steps
// inngest/functions/send-welcome-email.ts
import { inngest } from "@/inngest/client";
import { sendEmail } from "@/lib/email";
import { db } from "@/lib/db";
export const sendWelcomeEmail = inngest.createFunction(
{
id: "send-welcome-email",
name: "Send Welcome Email",
retries: 3,
throttle: {
limit: 100,
period: "1m", // Max 100 per minute
},
},
{ event: "user/signed-up" },
async ({ event, step }) => {
const { userId, email, name } = event.data;
// Step 1: Fetch user preferences (retried independently if fails)
const preferences = await step.run("fetch-preferences", async () => {
return db.userPreferences.findFirst({ where: { userId } });
});
// Step 2: Send welcome email
await step.run("send-email", async () => {
await sendEmail({
to: email,
template: "welcome",
data: {
name,
language: preferences?.language ?? "en",
},
});
});
// Step 3: Wait 3 days, then send onboarding tips
await step.sleep("wait-for-onboarding", "3d");
await step.run("send-onboarding-tips", async () => {
const user = await db.users.findUnique({ where: { id: userId } });
if (user?.completedOnboarding) return; // Skip if already done
await sendEmail({
to: email,
template: "onboarding-tips",
data: { name },
});
});
}
);
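The 3-day sleep above can also be expressed as a wait on a real signal instead of polling the database. A sketch using `step.waitForEvent`, slotting into the same function body; the `user/completed-onboarding` event name is an assumption (your app would need to emit it):

```typescript
// Variant of step 3: pause until the user finishes onboarding, instead of
// sleeping a fixed 3 days and re-checking the database.
const completed = await step.waitForEvent("wait-for-onboarding-complete", {
  event: "user/completed-onboarding", // hypothetical event your app emits
  timeout: "3d",                      // resolves with null if it never arrives
  match: "data.userId",               // only match events for this same user
});

if (!completed) {
  // Timed out: the user never completed onboarding, so send the tips email.
  await step.run("send-onboarding-tips", async () => {
    await sendEmail({ to: email, template: "onboarding-tips", data: { name } });
  });
}
```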
Triggering Events
// In your API route or server action
import { inngest } from "@/inngest/client";
// Trigger after user registration
await inngest.send({
name: "user/signed-up",
data: {
userId: user.id,
email: user.email,
name: user.name,
},
});
// Trigger a bulk processing job
await inngest.send({
name: "import/csv-uploaded",
data: {
fileUrl: uploadedFileUrl,
userId: user.id,
rowCount: 5000,
},
});
Multi-Step AI Workflow
export const generateReport = inngest.createFunction(
{ id: "generate-ai-report", retries: 2, timeouts: { finish: "30m" } },
{ event: "report/requested" },
async ({ event, step }) => {
const { reportId, userId, topic } = event.data;
// Step 1: Gather data (web scraping / database queries)
const rawData = await step.run("gather-data", async () => {
return await gatherTopicData(topic);
});
// Step 2: Generate with LLM (separate retry scope)
const report = await step.run("generate-with-llm", async () => {
return await generateWithClaude({
prompt: buildReportPrompt(topic, rawData),
maxTokens: 4000,
});
});
// Step 3: Save and notify
await step.run("save-report", async () => {
await db.reports.update({
where: { id: reportId },
data: { content: report, status: "complete" },
});
await notifyUser(userId, reportId);
});
return { reportId, wordCount: report.split(" ").length };
}
);
Local Development
# Start the Inngest dev server
npx inngest-cli@latest dev
# Opens UI at http://localhost:8288
# All events and function runs visible locally
# No cloud connection needed for development
Trigger.dev v3: Background Jobs with Worker Infrastructure
Trigger.dev v3 deploys your tasks to dedicated worker processes — not running inside your Next.js server. This means no timeout limits, isolated execution, and realtime progress tracking.
Installation
npm install @trigger.dev/sdk@v3
npx trigger.dev@latest init
Defining Tasks
// trigger/send-welcome.ts
import { task, logger } from "@trigger.dev/sdk/v3";
import { sendEmail } from "@/lib/email";
import { db } from "@/lib/db";
export const sendWelcomeEmailTask = task({
id: "send-welcome-email",
retry: {
maxAttempts: 3,
factor: 2,
minTimeoutInMs: 1000,
maxTimeoutInMs: 30000,
},
run: async (payload: { userId: string; email: string; name: string }) => {
logger.info("Sending welcome email", { userId: payload.userId });
// No step system — just plain async code
// Trigger.dev handles timeouts and retries at the task level
const preferences = await db.userPreferences.findFirst({
where: { userId: payload.userId },
});
await sendEmail({
to: payload.email,
template: "welcome",
data: { name: payload.name, language: preferences?.language ?? "en" },
});
logger.info("Welcome email sent");
return { emailId: `email_${Date.now()}` };
},
});
Triggering Tasks
// In your API route
import { sendWelcomeEmailTask } from "@/trigger/send-welcome";
// Trigger a task (fire and forget)
const handle = await sendWelcomeEmailTask.trigger({
userId: user.id,
email: user.email,
name: user.name,
});
// Trigger and wait for result (if within a task)
const result = await sendWelcomeEmailTask.triggerAndWait({
userId: user.id,
email: user.email,
name: user.name,
});
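Cron scheduling in Trigger.dev v3 is expressed as a declarative scheduled task rather than an external trigger. A sketch using the `schedules.task` API; `sendDailyDigest` is a hypothetical helper:

```typescript
// trigger/daily-digest.ts
import { schedules } from "@trigger.dev/sdk/v3";
import { sendDailyDigest } from "@/lib/digest"; // hypothetical helper

export const dailyDigestTask = schedules.task({
  id: "daily-digest",
  cron: "0 9 * * *", // every day at 9 AM UTC
  run: async (payload) => {
    // payload carries the scheduled timestamp and the previous run's timestamp
    await sendDailyDigest({ scheduledAt: payload.timestamp });
  },
});
```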
Realtime Progress Updates
// trigger/process-import.ts
import { task, logger, metadata } from "@trigger.dev/sdk/v3";
export const processImportTask = task({
id: "process-csv-import",
machine: { preset: "medium-2x" }, // More CPU/RAM for heavy processing
run: async (payload: { fileUrl: string; rowCount: number }) => {
const rows = await downloadAndParseCsv(payload.fileUrl);
for (let i = 0; i < rows.length; i++) {
await processRow(rows[i]);
// Update progress metadata — visible in realtime
if (i % 100 === 0) {
metadata.set("progress", {
processed: i,
total: rows.length,
percentage: Math.round((i / rows.length) * 100),
});
}
}
return { processedRows: rows.length };
},
});
// React component — realtime progress updates
import { useRealtimeRun } from "@trigger.dev/react-hooks";
function ImportProgress({ runId }: { runId: string }) {
const { run } = useRealtimeRun(runId);
const progress = run?.metadata?.progress;
return (
<div>
<p>Status: {run?.status}</p>
{progress && (
<ProgressBar
value={progress.percentage}
label={`${progress.processed} / ${progress.total} rows`}
/>
)}
</div>
);
}
Upstash QStash: HTTP Message Queue
QStash is the lightest option — it sends HTTP POST requests to your endpoints on a schedule or via queue, with guaranteed delivery, retries, and dead letter queues.
Installation
npm install @upstash/qstash
Publishing Messages
import { Client } from "@upstash/qstash";
const qstash = new Client({ token: process.env.QSTASH_TOKEN! });
// Queue a message — QStash calls your endpoint
const result = await qstash.publish({
url: "https://yourapp.com/api/jobs/send-email",
body: JSON.stringify({
userId: user.id,
email: user.email,
template: "welcome",
}),
headers: {
"Content-Type": "application/json",
},
retries: 3,
delay: 5, // Delay delivery by 5 seconds
});
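When the body is JSON, the SDK's `publishJSON` variant serializes it and sets the Content-Type header for you, trimming the boilerplate above:

```typescript
// Same publish, with automatic JSON serialization and headers
const result = await qstash.publishJSON({
  url: "https://yourapp.com/api/jobs/send-email",
  body: { userId: user.id, email: user.email, template: "welcome" },
  retries: 3,
});
console.log("Message ID:", result.messageId);
```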
console.log("Message ID:", result.messageId);
Scheduled Jobs (Cron)
// Create a recurring scheduled job
const schedule = await qstash.schedules.create({
destination: "https://yourapp.com/api/jobs/daily-digest",
cron: "0 9 * * *", // Every day at 9 AM UTC
body: JSON.stringify({ type: "daily_digest" }),
headers: { "Content-Type": "application/json" },
retries: 2,
});
console.log("Schedule ID:", schedule.scheduleId);
Receiving and Verifying Messages
// api/jobs/send-email/route.ts
import { verifySignatureAppRouter } from "@upstash/qstash/nextjs";
import { NextRequest, NextResponse } from "next/server";
async function handler(req: NextRequest) {
const body = await req.json();
const { userId, email, template } = body;
await sendEmail({ to: email, template });
return NextResponse.json({ success: true });
}
// Wraps handler with signature verification
export const POST = verifySignatureAppRouter(handler);
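Outside Next.js, or wherever the wrapper doesn't fit, verification can be done manually with the SDK's `Receiver` class, which checks the `Upstash-Signature` header against your signing keys. A sketch:

```typescript
import { Receiver } from "@upstash/qstash";
import { sendEmail } from "@/lib/email";

const receiver = new Receiver({
  currentSigningKey: process.env.QSTASH_CURRENT_SIGNING_KEY!,
  nextSigningKey: process.env.QSTASH_NEXT_SIGNING_KEY!,
});

export async function POST(req: Request) {
  const body = await req.text(); // verify against the raw body, not parsed JSON
  const isValid = await receiver.verify({
    signature: req.headers.get("Upstash-Signature") ?? "",
    body,
  });
  if (!isValid) {
    return new Response("invalid signature", { status: 401 });
  }

  const { email, template } = JSON.parse(body);
  await sendEmail({ to: email, template });
  return Response.json({ success: true });
}
```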
URL Groups (Fan-out)
// Send one message to multiple endpoints simultaneously
await qstash.publish({
urlGroup: "notifications", // Pre-configured group
body: JSON.stringify({ event: "new_signup", userId: user.id }),
});
// All endpoints in the group receive the message
// Good for: audit logging + notifications + analytics all at once
Feature Comparison
| Feature | Inngest | Trigger.dev v3 | QStash |
|---|---|---|---|
| Step functions | ✅ | ❌ (task-level only) | ❌ |
| Worker infrastructure | On your servers | ✅ Dedicated | ❌ (HTTP endpoints) |
| Local dev server | ✅ | ✅ | ❌ |
| Realtime progress | ❌ | ✅ | ❌ |
| Cron scheduling | ✅ | ✅ | ✅ |
| Fan-out (URL groups) | ❌ | ❌ | ✅ |
| DLQ | ✅ | ✅ | ✅ |
| Max job duration | Hours | Unlimited | 30 min |
| Cold start | Minimal | Container spin-up | None (HTTP) |
| Requires SDK | ✅ | ✅ | ❌ (receiver optional) |
| Self-hostable | ❌ (Cloud) | ✅ v3 open-source | ❌ (Upstash only) |
| Free tier | 50k runs/month | 5k runs/month | 500 messages/day |
When to Use Each
Choose Inngest if:
- Multi-step workflows with independent retry scoping are needed
- Local development environment that mirrors production is important
- Event-driven fan-out (one event triggers multiple functions) is useful
- Existing Next.js / serverless architecture without adding new infrastructure
Choose Trigger.dev v3 if:
- Tasks genuinely need hours to complete (Inngest has practical limits)
- Realtime progress updates to the UI during long jobs are required
- Isolated worker environments (separate from app server CPU/memory) matter
- Open-source self-hosted deployment is a requirement
Choose QStash if:
- You're already on Upstash (Redis or Kafka) and want a unified stack
- Simple HTTP-based queue with no SDK on the job processor is preferred
- Scheduled HTTP calls (cron webhooks) to external services are the main use case
- Minimal latency and minimal infrastructure complexity are priorities
Idempotency and At-Least-Once Delivery Guarantees
All three platforms deliver events with at-least-once semantics — if your endpoint fails to return a 2xx response, the platform retries the delivery. This means your job handlers must be idempotent: processing the same event twice must produce the same result as processing it once.

For Inngest, each step call within a function is checkpointed — if a function fails midway through and retries, previously completed steps are replayed from their cached results rather than re-executed. This dramatically reduces the idempotency burden: only the currently failing step needs to be idempotent, not the entire function.

For Trigger.dev tasks, the entire task function re-executes on retry, so every database write and external API call within the task must be idempotent. A common pattern is to use a unique idempotency key on your database upsert operations — derived from a stable identifier in the job payload (order ID, user ID, event timestamp) — so that duplicate task executions result in a no-op rather than a duplicate record. QStash's message IDs (messageId in the publish response) can be stored and checked on receipt to detect duplicates before processing.
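The dedupe pattern described above can be sketched without any SDK: derive a stable key from the payload, record it, and skip deliveries whose key has already been seen. The key derivation is illustrative, and the in-memory Set stands in for a database unique constraint:

```typescript
import { createHash } from "node:crypto";

// Hypothetical helper: derive a stable idempotency key from a job payload.
function idempotencyKey(payload: { orderId: string; event: string }): string {
  return createHash("sha256")
    .update(`${payload.event}:${payload.orderId}`)
    .digest("hex");
}

// Stand-in for a "processed_jobs" table with a unique key column.
const processed = new Set<string>();

function handleDelivery(payload: { orderId: string; event: string }): boolean {
  const key = idempotencyKey(payload);
  if (processed.has(key)) return false; // duplicate delivery: no-op
  processed.add(key);
  // ...perform the actual side effect (db write, email) exactly once...
  return true;
}

const payload = { orderId: "ord_123", event: "order/paid" };
handleDelivery(payload); // first delivery: processed, returns true
handleDelivery(payload); // retried delivery: skipped, returns false
```

In production the check-and-insert must be atomic (a unique constraint plus upsert), since two retries can race; the Set here only illustrates the shape of the logic.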
Local Development Experience and Debugging
The local development experience differs significantly between the platforms and is a practical factor in day-to-day productivity. Inngest's local dev server (npx inngest-cli dev) runs at localhost:8288 and provides a full web UI showing every event received, function run triggered, step execution timeline, and output at each step — all without connecting to Inngest's cloud. This local visibility into async job execution is transformative for debugging complex multi-step workflows where console logs buried in server output would otherwise be the only diagnostic tool. Trigger.dev v3's local mode uses npx trigger.dev@latest dev to start a local worker connected to their cloud, providing similar visibility through the Trigger.dev dashboard. QStash has no local development mode — it requires your endpoint to be publicly accessible (via a tunnel like ngrok or Cloudflare Tunnel) to receive messages during development, which adds friction to local iteration. For teams building complex job pipelines, Inngest's offline-capable local dev server is a meaningful productivity advantage.
Self-Hosting Considerations and Vendor Lock-In
Teams with strict data residency requirements or compliance constraints may need to self-host their job queue infrastructure. Trigger.dev v3 is fully open-source (Apache 2.0 licensed) and provides official Docker Compose configuration for self-hosting the worker infrastructure, the dashboard, and the task queue database — you can run the entire stack on your own servers without sending any job payload data to Trigger.dev's cloud. Inngest does not offer self-hosting as of 2026 — all execution routes through Inngest's cloud, which means job payload data leaves your infrastructure. QStash is a proprietary Upstash service with no self-hosting option. For regulated industries (healthcare, finance, government) where job payloads may contain sensitive data, Trigger.dev's self-hosting capability is a blocking differentiator. For most SaaS products without strict data residency requirements, the operational overhead of self-hosting — maintaining the database, worker fleet, and dashboard — rarely justifies the added control, and the managed services are the pragmatic choice.
Cost Modeling at Production Scale
Job queue costs scale with run volume and execution time in ways that are important to model before committing to a platform. Inngest's free tier covers 50,000 function runs per month — sufficient for early-stage products. Their Growth plan charges based on step executions rather than function runs, which can be surprising for functions with many steps: a function with 10 steps that runs 10,000 times consumes 100,000 step executions.

Trigger.dev's pricing scales with compute time on their managed workers — the machine preset you select (small, medium, large) determines the per-second compute rate. For CPU-intensive jobs like image processing, the managed worker compute cost adds up quickly; self-hosting on your own compute may be more economical at high volume.

QStash's pricing is message-based — each published message and each delivery attempt counts toward your quota. For simple cron jobs sending one message per run, QStash is extremely affordable. For high-frequency fan-out patterns (one event triggers 100 messages), QStash costs scale linearly with the fan-out factor. Model your specific workload against each provider's pricing calculator using realistic monthly run projections before signing annual contracts.
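The arithmetic in this section is worth making explicit, since it is where billing surprises come from. A sketch of the two multiplying effects described above:

```typescript
// Step-based billing (Inngest-style): steps multiply run counts.
const stepsPerRun = 10;
const runsPerMonth = 10_000;
const stepExecutions = stepsPerRun * runsPerMonth; // 100,000 step executions

// Message-based billing with fan-out (QStash-style): fan-out multiplies triggers.
const fanOut = 100;             // one event delivered to 100 endpoints
const triggersPerMonth = 1_000;
const messagesPerMonth = fanOut * triggersPerMonth; // 100,000 messages
```

In both cases the billed unit is an order of magnitude (or two) larger than the intuitive "runs per month" figure, which is why projecting against the actual billed unit matters.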
Methodology
Data sourced from official Inngest documentation (inngest.com/docs), Trigger.dev v3 documentation (trigger.dev/docs), Upstash QStash documentation (upstash.com/docs/qstash), GitHub star counts as of February 2026, npm download statistics, and community discussions from the Inngest Discord, Trigger.dev Discord, and r/nextjs.
Related: BullMQ vs bee-queue vs pg-boss for Redis-backed job queues that run on traditional servers, or Temporal vs Restate vs Windmill for enterprise-grade durable workflow orchestration.
See also: Vercel AI SDK vs OpenAI vs Anthropic SDK 2026 and unenv vs edge-runtime vs @cloudflare/workers-types