
Best Serverless Frameworks for Node.js in 2026

PkgPulse Team

TL;DR

SST for full-stack TypeScript on AWS; Serverless Framework for multi-cloud legacy deployments. SST (~200K weekly downloads) is the modern TypeScript-first framework that deploys to AWS with live function development, type-safe resource binding, and a growing ecosystem. Serverless Framework (~2M downloads) is the older multi-provider tool with thousands of plugins. AWS CDK (~500K) is AWS's official infrastructure-as-code with full TypeScript. For new AWS-native apps in 2026, SST is the compelling choice.

Key Takeaways

  • Serverless Framework: ~2M weekly downloads — multi-provider, 1K+ plugins, widely deployed
  • SST: ~200K downloads — TypeScript-first, live Lambda dev, resource binding, AWS-native
  • AWS CDK: ~500K downloads — low-level AWS infrastructure, TypeScript/Python/Java
  • SST v3 — Ion release uses Pulumi under the hood, faster deployments
  • Local development — SST live lambda lets you test against real AWS services instantly

Why Serverless

Serverless architecture means deploying functions that scale automatically, with no server management. You pay only when functions run (pay-per-invocation), scale to zero when idle (no charges when no traffic), and don't patch OS or runtime security vulnerabilities.

The economics are compelling for certain workloads: a startup API that handles 100K requests/month costs around $0.20 on Lambda vs $30-50/month for a modest always-on EC2 instance. For event-driven workloads (webhooks, queue processors, scheduled tasks), serverless is almost always the economical choice.

The trade-offs: cold starts (first invocation after idle may take 100-500ms), 15-minute execution limit for Lambda, and vendor lock-in. For latency-sensitive APIs or long-running processes, traditional servers or containers are often better.


Serverless Framework

# serverless.yml — multi-provider configuration
service: my-api

provider:
  name: aws
  runtime: nodejs20.x
  region: us-east-1
  stage: ${opt:stage, 'dev'}
  environment:
    DATABASE_URL: ${ssm:/myapp/${self:provider.stage}/database-url}
    USERS_TABLE: ${self:service}-${self:provider.stage}-users  # Read by the handlers below
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - dynamodb:GetItem
            - dynamodb:PutItem
            - dynamodb:DeleteItem
          Resource: !GetAtt UsersTable.Arn

functions:
  createUser:
    handler: src/users/create.handler
    events:
      - httpApi:
          path: /users
          method: POST
  getUser:
    handler: src/users/get.handler
    events:
      - httpApi:
          path: /users/{id}
          method: GET
  processQueue:
    handler: src/queue/processor.handler
    events:
      - sqs:
          arn: !GetAtt ProcessingQueue.Arn
          batchSize: 10

resources:
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:service}-${self:provider.stage}-users
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH

plugins:
  - serverless-offline          # Local dev
  - serverless-esbuild          # TypeScript build
  - serverless-prune-plugin     # Clean old deployments

// src/users/create.ts — Lambda handler
import type { APIGatewayProxyHandler } from 'aws-lambda';
import { DynamoDBClient, PutItemCommand } from '@aws-sdk/client-dynamodb';
import { z } from 'zod';

const db = new DynamoDBClient({});

const schema = z.object({
  name: z.string(),
  email: z.string().email(),
});

export const handler: APIGatewayProxyHandler = async (event) => {
  try {
    const body = schema.parse(JSON.parse(event.body ?? '{}'));

    await db.send(new PutItemCommand({
      TableName: process.env.USERS_TABLE,
      Item: {
        id: { S: crypto.randomUUID() },
        name: { S: body.name },
        email: { S: body.email },
        createdAt: { S: new Date().toISOString() },
      },
    }));

    return { statusCode: 201, body: JSON.stringify({ success: true }) };
  } catch (err) {
    return { statusCode: 400, body: JSON.stringify({ error: String(err) }) };
  }
};

SST (Modern AWS)

// sst.config.ts — TypeScript infrastructure
// Note: this example uses the SST v2 construct API; SST v3 ("Ion") replaces it
// with Pulumi-based components (see the FAQ below)
import { SSTConfig } from 'sst';
import { Api, Table, Bucket, NextjsSite, Cron } from 'sst/constructs';

export default {
  config(input) {
    return {
      name: 'my-app',
      region: 'us-east-1',
    };
  },
  stacks(app) {
    app.stack(function API({ stack }) {
      const table = new Table(stack, 'Users', {
        fields: { id: 'string', email: 'string' },
        primaryIndex: { partitionKey: 'id' },
        globalIndexes: {
          EmailIndex: { partitionKey: 'email' },
        },
      });

      const bucket = new Bucket(stack, 'Uploads');

      const api = new Api(stack, 'Api', {
        defaults: {
          // Type-safe resource binding: every route handler can use these
          function: { bind: [table, bucket] },
        },
        routes: {
          'POST /users': 'packages/functions/src/users/create.handler',
          'GET /users/{id}': 'packages/functions/src/users/get.handler',
          'POST /upload': 'packages/functions/src/upload.handler',
        },
      });

      // Next.js site with SSR
      const site = new NextjsSite(stack, 'Web', {
        path: 'packages/web',
        bind: [api, table],
      });

      // Cron job
      new Cron(stack, 'Cleanup', {
        schedule: 'rate(1 day)',
        job: { function: 'packages/functions/src/cleanup.handler' },
      });

      stack.addOutputs({
        ApiUrl: api.url,
        SiteUrl: site.url,
      });
    });
  },
} satisfies SSTConfig;

// SST resource binding — type-safe, no env vars needed
// packages/functions/src/users/create.ts
import type { APIGatewayProxyHandlerV2 } from 'aws-lambda';
import { Table } from 'sst/node/table';
import { DynamoDBClient, PutItemCommand } from '@aws-sdk/client-dynamodb';

// SST injects the correct names at runtime:
//   Table.Users.tableName — automatically bound
//   Bucket.Uploads.bucketName (from 'sst/node/bucket') — likewise
const db = new DynamoDBClient({});

export const handler: APIGatewayProxyHandlerV2 = async () => {
  await db.send(new PutItemCommand({
    TableName: Table.Users.tableName,  // Type-safe, no process.env string
    Item: { /* ... */ },
  }));
  return { statusCode: 201 };
};

# SST commands
npx sst dev              # Start live Lambda dev (changes deploy instantly)
npx sst deploy           # Deploy to AWS
npx sst deploy --stage prod  # Deploy to production
npx sst remove           # Tear down all resources
npx sst console          # Open SST Console (logs, DynamoDB viewer, etc.)

SST's sst dev is its standout feature. When you run it, your Lambda functions run locally but are connected to real AWS services (DynamoDB, S3, SQS, etc.). When a real request hits your deployed Lambda URL, SST tunnels it to your local machine, your code runs locally, and the response goes back. Changes you make in your editor are reflected instantly — no deployment cycle needed for development.


Cold Start Optimization

Cold starts are the main performance concern for serverless APIs:

// Techniques to reduce cold start latency

// 1. Keep Lambda warm with scheduled invocations
// In serverless.yml:
functions:
  keepWarm:
    handler: src/warmup.handler
    events:
      - schedule:
          rate: rate(5 minutes)
          input: { warmup: true }

// 2. Use Node.js bundling to reduce package size
// Large node_modules = slower cold starts
// Bundle with esbuild to reduce to a single file
// serverless-esbuild or esbuild in SST does this automatically

// 3. Initialize expensive resources outside the handler
// Move DB clients and connections to module scope:
const db = new DynamoDBClient({});  // Created once, reused across invocations

export const handler = async (event) => {
  // db is already initialized — no connection overhead
  return await db.send(/* ... */);
};

Cold start times in 2026:

  • Node.js 20 (default): 100-300ms for a simple Lambda with small bundle
  • Lambda with large dependencies (AWS SDK v2 full package): 500ms-1.5s
  • AWS SDK v3 with modular imports: 100-400ms (much smaller bundle than v2)
  • Lambda SnapStart (Java, Python, and .NET, but not Node.js): <50ms

For Node.js Lambdas serving HTTP traffic, the esbuild bundling approach (SST does this automatically) is the most effective optimization — bundle the application into a single ~1-5MB file and import only what you use from the AWS SDK.


Cost Comparison

Lambda pricing (us-east-1, arm64):
  $0.0000133334 per GB-second
  $0.20 per 1M requests

Example: 1M requests/month, 256MB (0.25 GB), 100ms average:
  Compute: 1,000,000 × 0.25 × 0.1 × $0.0000133334 = $0.33
  Requests: 1,000,000 × $0.0000002 = $0.20
  Total: ~$0.53/month

Comparison: the smallest EC2 instance, a t4g.nano, is ~$3.07/month on-demand, billed even when there is no traffic.

At low to medium traffic, Lambda is almost always cheaper than reserved EC2. The break-even point depends on your function duration and memory requirements — for high-traffic APIs with long-running operations, EC2 or containers can become cheaper.
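
The arithmetic above generalizes into a quick model you can plug your own numbers into. A sketch, using the arm64 us-east-1 prices quoted above (`lambdaMonthlyCost` is a name of our own, not an AWS API):

```typescript
// Rough monthly Lambda cost model (arm64, us-east-1 prices from above)
const GB_SECOND = 0.0000133334; // $ per GB-second
const PER_REQUEST = 0.0000002;  // $ per request ($0.20 per 1M)

export function lambdaMonthlyCost(
  requests: number,
  memoryMb: number,
  avgDurationSec: number,
): number {
  const compute = requests * (memoryMb / 1024) * avgDurationSec * GB_SECOND;
  return compute + requests * PER_REQUEST;
}

// The worked example above: 1M requests at 256MB / 100ms is roughly $0.53/month
console.log(lambdaMonthlyCost(1_000_000, 256, 0.1).toFixed(2));
```

Comparing the output against a ~$3/month t4g.nano makes the break-even concrete: requests × duration has to grow well past hobby scale before the always-on instance wins.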


When to Choose

| Scenario | Pick |
| --- | --- |
| New AWS project, TypeScript | SST |
| Multi-cloud (AWS + GCP + Azure) | Serverless Framework |
| Complex AWS infrastructure | AWS CDK |
| Legacy Serverless Framework project | Keep SF (migration not worth it) |
| Full-stack (Lambda + Next.js + DynamoDB) | SST |
| Need Serverless Console | Serverless Framework |
| Fine-grained AWS control | AWS CDK |

Common Serverless Mistakes and How to Avoid Them

Serverless architecture introduces failure modes that don't exist in traditional server deployments. These are the ones teams encounter most often.

Storing state in Lambda's global scope between invocations. Lambda function instances are reused across invocations, but you can't rely on that. Global variables (database connections, in-memory caches) may or may not persist between calls. Use global scope for connections and clients (to benefit from reuse), but never use it as a primary data store or assume a value set in one invocation will be there in the next.
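
A minimal sketch of the safe pattern: module scope for clients, and caches treated as best-effort only. The client factory and `fetchUserName` are hypothetical stand-ins for a real database client:

```typescript
// Hypothetical stand-in for a real client (e.g. DynamoDBDocumentClient)
function createDbClient() {
  return { fetchUserName: async (id: string) => `user-${id}` };
}

// Module scope: reused across warm invocations of this instance; good for clients
const client = createDbClient();

// Also module scope, but only a best-effort cache: it disappears on every cold
// start and is never shared between concurrent instances
const cache = new Map<string, string>();

export const handler = async (event: { userId: string }) => {
  // Read-through cache: correctness never depends on a previous invocation
  let name = cache.get(event.userId);
  if (name === undefined) {
    name = await client.fetchUserName(event.userId);
    cache.set(event.userId, name);
  }
  return name;
};
```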

Setting memory too low. Lambda charges for GB-seconds, so teams often set memory to the minimum (128MB) to save money. This backfires: Lambda allocates CPU proportional to memory. A 256MB function gets twice the CPU of a 128MB function. For compute-bound tasks, doubling the memory can cut execution time by more than half, reducing the actual cost. Use AWS Lambda Power Tuning to find the optimal memory setting.
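
The proportional-CPU point falls straight out of the billing math. A sketch (the 2× speedup is an assumption for a fully CPU-bound task, not a guarantee):

```typescript
const GB_SECOND = 0.0000133334; // $ per GB-second, arm64 us-east-1

function invocationCost(memoryMb: number, durationSec: number): number {
  return (memoryMb / 1024) * durationSec * GB_SECOND;
}

// A CPU-bound task: 2s at 128MB vs ~1s at 256MB (twice the CPU)
const at128 = invocationCost(128, 2.0);
const at256 = invocationCost(256, 1.0);
// Identical compute cost, but half the latency. Any better-than-linear
// speedup makes the larger function strictly cheaper.
```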

Not handling SQS partial batch failures. When a Lambda processes an SQS batch and one message fails, the entire batch is retried by default. This causes successful messages to be processed multiple times. Use reportBatchItemFailures to tell Lambda which specific messages failed so only those are retried.
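
In code, the handler returns the IDs of only the failed messages. Types are inlined here so the sketch is self-contained; in a real project use `SQSEvent`/`SQSBatchResponse` from `aws-lambda` and enable `functionResponseType: ReportBatchItemFailures` on the event source mapping:

```typescript
interface SqsRecord { messageId: string; body: string; }

// Hypothetical business logic: anything that can throw
async function processMessage(payload: unknown): Promise<void> {
  if (payload === null || typeof payload !== 'object') throw new Error('bad payload');
}

export const handler = async (event: { Records: SqsRecord[] }) => {
  const batchItemFailures: { itemIdentifier: string }[] = [];

  for (const record of event.Records) {
    try {
      await processMessage(JSON.parse(record.body)); // JSON.parse throws on malformed bodies
    } catch {
      // Report only this message; successfully processed ones are deleted from the queue
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }

  return { batchItemFailures };
};
```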

Shipping node_modules instead of bundling. If you deploy without bundling, node_modules is included in the ZIP — potentially hundreds of megabytes. Always bundle with esbuild (tsup, SST, or esbuild directly) to produce a single file, ideally under 5MB. This dramatically reduces cold start time and deployment time.

Not setting function timeouts appropriately. Lambda's default timeout is 3 seconds. An API that calls a database and an external service might legitimately need 10-15 seconds. Set a timeout that matches the 99th-percentile expected execution time, not the average. Unexpected timeouts are silent errors — the caller gets a 504, and the function's work is abandoned mid-execution.
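
One defensive pattern is to abort slow downstream calls yourself, with headroom, rather than letting the platform kill the function mid-flight. A sketch: `withDeadline` is our own helper, while `getRemainingTimeInMillis` is the real Lambda context method:

```typescript
interface LambdaContextLike { getRemainingTimeInMillis(): number; }

// Run `work`, aborting it 500ms before the function would time out
export async function withDeadline<T>(
  ctx: LambdaContextLike,
  work: (signal: AbortSignal) => Promise<T>,
): Promise<T> {
  const headroomMs = 500; // reserve time to log and return a clean error
  const budgetMs = Math.max(ctx.getRemainingTimeInMillis() - headroomMs, 0);
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), budgetMs);
  try {
    return await work(controller.signal); // e.g. fetch(url, { signal })
  } finally {
    clearTimeout(timer);
  }
}
```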

Using Lambda for long-running tasks. Lambda has a 15-minute maximum execution time. For tasks that might run longer — video processing, large data exports, complex AI workflows — use Step Functions for orchestration, ECS Fargate for long-running containers, or AWS Batch for compute-heavy workloads.

Middleware and Observability

Raw Lambda functions quickly become hard to maintain as your function count grows. Middleware libraries and observability tools are essential.

middy is the most popular Lambda middleware framework for Node.js. It provides a pipeline model where you compose middlewares for common concerns:

import middy from '@middy/core';
import httpJsonBodyParser from '@middy/http-json-body-parser';
import httpErrorHandler from '@middy/http-error-handler';
import inputOutputLogger from '@middy/input-output-logger';
import validator from '@middy/validator';
import { transpileSchema } from '@middy/validator/transpile';

// @middy/validator expects a JSON Schema (compiled with Ajv), not a Zod schema
const eventSchema = transpileSchema({
  type: 'object',
  required: ['body'],
  properties: {
    body: {
      type: 'object',
      required: ['name', 'email'],
      properties: {
        name: { type: 'string' },
        email: { type: 'string' },
      },
    },
  },
});

const baseHandler = async (event) => {
  const { name, email } = event.body; // Already parsed by httpJsonBodyParser
  // Business logic here
  return { statusCode: 201, body: JSON.stringify({ ok: true }) };
};

export const handler = middy(baseHandler)
  .use(inputOutputLogger())         // Log request + response
  .use(httpJsonBodyParser())        // Parse JSON body (must run before validation)
  .use(validator({ eventSchema }))  // Validate the parsed event
  .use(httpErrorHandler());         // Format thrown errors as HTTP responses

AWS Lambda Powertools for TypeScript provides structured logging, distributed tracing (X-Ray), and custom metrics in a single library. It integrates directly with CloudWatch and is the AWS-recommended observability solution:

import { Logger } from '@aws-lambda-powertools/logger';
import { Tracer } from '@aws-lambda-powertools/tracer';
import { Metrics, MetricUnit } from '@aws-lambda-powertools/metrics';

const logger = new Logger({ serviceName: 'user-service' });
const tracer = new Tracer({ serviceName: 'user-service' });
const metrics = new Metrics({ namespace: 'MyApp', serviceName: 'user-service' });

export const handler = async (event) => {
  logger.info('Processing request', { path: event.path });
  metrics.addMetric('requestCount', MetricUnit.Count, 1);
  // ...
  metrics.publishStoredMetrics(); // Flush metrics to CloudWatch (or use the middy middleware)
};

For teams using SST, these observability tools integrate naturally — SST's console shows logs from all your functions in a unified interface during development.

Structuring a Serverless Project for Scale

Starting with a single handler.ts file works for prototypes, but serverless projects need structure to stay maintainable as they grow.

A common pattern is to organize by domain, not by technical layer. Rather than src/routes/users.ts, src/db/users.ts, src/validators/users.ts, keep everything about users together:

packages/
  functions/
    src/
      users/
        create.ts      ← Lambda handler for POST /users
        get.ts         ← Lambda handler for GET /users/:id
        delete.ts      ← Lambda handler for DELETE /users/:id
        repository.ts  ← Database access (shared by handlers)
        schema.ts      ← Zod schemas (shared by handlers)
      orders/
        create.ts
        fulfill.ts
        repository.ts
      shared/
        db.ts          ← Database client (initialized once, reused)
        errors.ts      ← Error types

Each handler file exports a single handler function. Shared logic lives in modules imported by multiple handlers. Database clients (db.ts) are initialized at module scope (outside the handler function) so they're reused across warm invocations.

For SST projects, this structure maps directly to the bind system — each handler declares what resources it needs, and SST provides type-safe access at runtime.

FAQ

When does serverless become more expensive than a traditional server?

The break-even point depends on traffic patterns and function duration. Lambda is cheapest for bursty, unpredictable workloads. For steady, high-throughput APIs (more than ~10M requests per month with short execution times), a single EC2 instance or ECS service often becomes cheaper. The key is evaluating cost at your actual traffic level, not at startup scale.

Can I run a traditional Express.js app on Lambda?

Yes, using @codegenie/serverless-express (formerly @vendia/serverless-express) or serverless-http. These packages wrap your Express app so it can receive Lambda events instead of HTTP connections. This is a common migration path. The downside is that Express wasn't designed for Lambda's stateless model — connection pooling and middleware optimized for long-lived servers may behave differently.

How do I handle database connections in Lambda?

Use a connection pooler like RDS Proxy (for RDS databases) or PgBouncer. Direct connections from Lambda can exhaust database connection limits because each function instance opens its own connection. RDS Proxy maintains a connection pool that Lambda functions share, solving the connection exhaustion problem entirely.

What's the difference between SST v2 and SST v3 (Ion)?

SST v3, called Ion, replaces the CDK-based infrastructure layer with Pulumi. This brings faster deployments (no CloudFormation's 15-minute update cycles for simple changes), more predictable behavior, and access to Pulumi's broader provider ecosystem. The component API is similar but not identical — migrating from v2 to v3 requires updating your sst.config.ts. New projects should start with v3.

Is Serverless Framework still worth using in 2026?

For existing projects that have invested in the Serverless Framework ecosystem (plugins, CI pipelines, team knowledge), continuing to use it is reasonable. The tool is mature and well-maintained. For new projects targeting AWS specifically, SST offers a significantly better development experience. For multi-cloud deployments (AWS + GCP + Azure), Serverless Framework remains the most complete option.

CI/CD for Serverless Projects

Deploying serverless applications from CI requires a few considerations that differ from traditional server deployments.

Environment-specific deployments. Serverless applications typically have separate stacks for development, staging, and production. Both SST and Serverless Framework use a stage concept for this. Your CI pipeline should deploy to dev on every PR merge, to staging on every merge to main, and to production only on explicit release tags.

AWS credentials in CI. Never commit AWS credentials to your repository. Use environment variables or, better, AWS OIDC federation with GitHub Actions to get temporary credentials without storing long-lived access keys. GitHub Actions has native AWS OIDC support that lets you assume an IAM role directly from the CI environment:

# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]

permissions:
  id-token: write  # Required for OIDC
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789:role/GitHubActionsDeployRole
          aws-region: us-east-1

      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - run: npm ci

      - name: Deploy to staging
        run: npx sst deploy --stage staging
        env:
          NODE_ENV: production

Avoiding deployment conflicts. When multiple developers push simultaneously, two CI jobs may try to deploy to the same stage at the same time. CloudFormation-based deploys (Serverless Framework, SST v2) are serialized by CloudFormation itself: a second update to a stack that is mid-deploy fails. SST v3 maintains its own per-stage state lock. Either way, concurrent deploys to the same stage will queue or fail, so build your CI to handle this gracefully with retry logic or workflow concurrency limits.

Testing before deployment. Run unit tests before deployment. For integration tests that require live AWS services, consider using a personal dev stage per developer (sst deploy --stage dev-$GITHUB_ACTOR) rather than sharing a dev environment, which avoids test interference between concurrent PRs.

Security Considerations for Lambda

Serverless doesn't eliminate security concerns — it shifts them. These are the most important security practices for Lambda-based APIs.

Least-privilege IAM roles. Every Lambda function runs with an IAM role. The default is often overly permissive (full DynamoDB access, full S3 access). Define per-function roles that grant only the specific actions and resources each function needs. SST makes this precise: each function's bind array determines what resources it can access, and SST generates the corresponding IAM policy automatically. With raw Serverless Framework, define iam.role.statements per function (not at the provider level) for the least-privilege model.

Never store secrets in environment variables directly. Passing a database password or API key as a Lambda environment variable means it's visible in the AWS console and in Lambda's configuration API — readable by anyone with Lambda read access to that account. Instead, store secrets in AWS Secrets Manager or SSM Parameter Store and fetch them at cold start:

// Fetch the secret once per container (cached at module scope), not per-invocation
import { SecretsManagerClient, GetSecretValueCommand } from '@aws-sdk/client-secrets-manager';

const secrets = new SecretsManagerClient({});
let apiKey: string | undefined;

async function getApiKey(): Promise<string> {
  if (apiKey) return apiKey;
  const response = await secrets.send(new GetSecretValueCommand({
    SecretId: '/myapp/prod/third-party-api-key',
  }));
  apiKey = response.SecretString!;
  return apiKey;
}

Input validation at the function boundary. Lambda functions receive events from API Gateway, SQS, SNS, EventBridge, and other sources. Validate every incoming event with a schema library (Zod, TypeBox, Valibot) before processing. Don't trust that upstream services have already validated the data — defense in depth means validating at each boundary.

API Gateway authorization. For HTTP APIs on API Gateway, use Lambda authorizers or Cognito user pool authorizers to authenticate requests before they reach your business logic. Don't implement authentication inside every function handler — move it to a centralized authorizer so every new endpoint is automatically protected.
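
For HTTP APIs, a Lambda authorizer can use the "simple response" format: return `isAuthorized` plus optional context. A minimal sketch, where the shared-secret comparison is a placeholder (real authorizers verify a JWT or session token) and `API_SECRET` is an assumed environment variable:

```typescript
interface AuthorizerEvent { headers?: Record<string, string | undefined>; }

const API_SECRET = process.env.API_SECRET ?? 'dev-secret'; // placeholder secret

export const handler = async (event: AuthorizerEvent) => {
  const auth = event.headers?.authorization ?? '';
  const token = auth.startsWith('Bearer ') ? auth.slice('Bearer '.length) : '';

  return {
    isAuthorized: token.length > 0 && token === API_SECRET,
    // Values here are exposed to downstream handlers via requestContext.authorizer
    context: {},
  };
};
```

Attach the authorizer to the API Gateway stage or route, and every endpoint behind it is protected without per-handler auth code.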

Migrating from Serverless Framework to SST

Teams who built on Serverless Framework in 2020-2023 frequently ask whether migrating to SST is worth it. Here's an honest assessment.

When migration is worth it: If your team writes TypeScript and you're building new features regularly, SST's development experience provides a meaningful daily productivity advantage. The live Lambda development mode alone can save developers hours per week compared to the deploy-wait-check cycle of Serverless Framework. If you're building a full-stack app with a Next.js frontend, Lambda backend, and DynamoDB, SST's first-class support for this stack (type-safe resource binding, unified logging in the SST Console) is genuinely better than stitching it together in Serverless Framework.

When migration is not worth it: Serverless Framework has thousands of community plugins. If your infrastructure relies on plugins for niche providers or services — Cloudflare Workers, Aiven, Auth0 — these may not have SST equivalents. The migration cost is also real: every serverless.yml function definition needs to be rewritten into SST's TypeScript CDK construct syntax, and all your environment variable references need to be converted to SST's resource binding system. For large, stable projects that aren't actively adding features, the migration ROI is often negative.

The practical migration path: The lowest-risk approach is a hybrid migration. Keep Serverless Framework managing your existing functions while starting new features in SST. Both can coexist in the same AWS account and can share DynamoDB tables and S3 buckets — SST can import existing AWS resources rather than requiring it to own everything. Migrate function by function as you touch them for new work, rather than doing a big-bang rewrite.

Compare serverless framework package health on PkgPulse. Also see how to set up CI/CD for a JavaScript monorepo for deploying from CI and best Node.js logging libraries for observability in Lambda functions.

Related: middy vs Lambda Powertools vs serverless-http 2026.

The 2026 JavaScript Stack Cheatsheet

One PDF: the best package for every category (ORMs, bundlers, auth, testing, state management). Used by 500+ devs. Free, updated monthly.