How to Set Up CI/CD for a JavaScript Monorepo in 2026
TL;DR
Turborepo + GitHub Actions + remote cache = monorepo CI that runs in under 2 minutes. The key: only run what changed using --filter=[HEAD^1], share cache across PRs with Vercel Remote Cache, and parallelize with matrix builds. Without these, monorepos have CI times that scale with repo size; with them, CI time is flat regardless of how many packages you add.
Key Takeaways
- `--filter=[HEAD^1]` — only build/test packages changed vs. the last commit
- Remote cache — share build artifacts across CI runs (80%+ cache hit rate)
- Matrix strategy — run tests per app in parallel
- `turbo prune` — Docker optimization: only install deps for affected apps
- Separate deploy workflows — deploy each app independently when it changes
The Monorepo CI Problem
Monorepos have a scaling problem: as you add more apps and packages, CI gets slower. A naive approach runs all tests for all packages on every commit — even when you only changed one component in one package. With 20 packages, this means running 19 unnecessary test suites on every PR.
The solution is affected-package detection: only run tasks for packages that changed (or depend on something that changed). Turborepo supports this with git-aware filters: `--filter=[HEAD^1]` selects packages whose files changed since the previous commit, and `--filter=...[HEAD^1]` additionally selects everything that depends on them, so a change to a shared package also re-runs its consumers' tasks.
Combined with remote caching (sharing build artifacts between CI runs), monorepo CI can achieve sub-2-minute times regardless of repository size. A cache hit on unchanged packages means zero work — just a cache restore that takes seconds.
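Both mechanisms rely on turbo.json declaring each task's dependencies and outputs — that is what Turborepo hashes and caches. A minimal sketch; the task names and output globs are assumptions to match to your own package scripts, and Turborepo 1.x calls the top-level key `pipeline` instead of `tasks`:

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "!.next/cache/**", "dist/**"]
    },
    "lint": {},
    "type-check": { "dependsOn": ["^build"] },
    "test": { "dependsOn": ["build"] }
  }
}
```

`"^build"` means "build my dependencies first"; `outputs` tells Turborepo which files to store in (and restore from) the cache.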
Basic CI Workflow
```yaml
# .github/workflows/ci.yml
name: CI
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true # Cancel old runs when a new commit is pushed

jobs:
  ci:
    name: Build, Lint, Test
    runs-on: ubuntu-latest
    timeout-minutes: 15
    env:
      # Remote cache credentials, shared by every turbo step
      TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
      TURBO_TEAM: ${{ vars.TURBO_TEAM }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2 # Need 2 commits for --filter=[HEAD^1]
      - uses: pnpm/action-setup@v3
        with:
          version: 9
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'pnpm'
      - name: Install dependencies
        run: pnpm install --frozen-lockfile
      - name: Lint
        run: pnpm turbo lint --filter=[HEAD^1]
      - name: Type check
        run: pnpm turbo type-check --filter=[HEAD^1]
      - name: Build
        run: pnpm turbo build --filter=[HEAD^1]
      - name: Test
        run: pnpm turbo test --filter=[HEAD^1]
```
The --filter=[HEAD^1] syntax tells Turborepo to only run tasks for packages that changed between the current commit (HEAD) and the previous commit (HEAD^1). The fetch-depth: 2 in the checkout step is required to have both commits available.
Remote Cache Setup
```bash
# Get Vercel Remote Cache credentials
npx turbo login
npx turbo link

# Add to GitHub repo secrets:
#   TURBO_TOKEN: your Vercel token
# Add to GitHub repo variables:
#   TURBO_TEAM: your Vercel team slug
```

```yaml
# GitHub Actions step with remote cache
- name: Build with remote cache
  run: pnpm turbo build --filter=[HEAD^1]
  env:
    TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
    TURBO_TEAM: ${{ vars.TURBO_TEAM }}
```

Typical CI run times with remote cache:

- PR with cache hit (common case): 15–45 seconds
- PR with cache miss (first run or changed package): 2–5 minutes
Remote cache stores build artifacts (compiled output, test results) in Vercel's infrastructure. When a PR's CI run reaches a build step for a package that was already built with the same inputs (same source files, same dependencies), Turborepo downloads the cached output instead of rebuilding. A cache hit on a build task typically takes 3-5 seconds regardless of how long the actual build would take.
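To see what would hit the cache without running anything, Turborepo's dry-run mode prints each task's input hash and cache status — the first tool to reach for when debugging unexpected cache misses:

```bash
# Human-readable task graph with input hashes and cache status
pnpm turbo build --dry

# Same information as JSON, for scripting
pnpm turbo build --dry=json
```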
For a self-hosted remote cache, Turborepo's cache API is open: community servers such as ducktors/turborepo-remote-cache let you host the cache on your own infrastructure if you don't want to use Vercel's.
Parallel Testing with Matrix Strategy
```yaml
# Run tests for each app in parallel
jobs:
  test:
    strategy:
      matrix:
        app: [web, api, admin]
    name: Test ${{ matrix.app }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v3
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'pnpm'
      - run: pnpm install --frozen-lockfile
      - name: Test ${{ matrix.app }}
        run: pnpm turbo test --filter=@myapp/${{ matrix.app }}
        env:
          TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
          TURBO_TEAM: ${{ vars.TURBO_TEAM }}
```
Matrix strategy runs multiple jobs in parallel. If each app's tests take 3 minutes, three parallel jobs finish in about 3 minutes total instead of 9 minutes sequentially. GitHub Actions runs matrix jobs concurrently, subject to your plan's overall concurrency limit (20 concurrent jobs on the Free plan, more on paid plans).
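A static app list also runs jobs for apps that didn't change (cheap, since they hit the cache, but not free). One possible refinement is to compute the matrix from Turborepo's affected graph in a setup job. Treat this as a sketch: it assumes the `--dry=json` output contains a top-level `packages` array of affected package names, which can vary across Turborepo versions.

```yaml
jobs:
  detect:
    runs-on: ubuntu-latest
    outputs:
      apps: ${{ steps.affected.outputs.apps }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2
      - uses: pnpm/action-setup@v3
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'pnpm'
      - run: pnpm install --frozen-lockfile
      - id: affected
        # "packages" in the dry-run JSON lists the affected workspace packages
        run: |
          APPS=$(pnpm turbo test --filter=...[HEAD^1] --dry=json | jq -c '.packages')
          echo "apps=$APPS" >> "$GITHUB_OUTPUT"

  test:
    needs: detect
    if: ${{ needs.detect.outputs.apps != '[]' }}
    strategy:
      matrix:
        app: ${{ fromJson(needs.detect.outputs.apps) }}
    runs-on: ubuntu-latest
    steps:
      - name: Test ${{ matrix.app }}
        run: echo "checkout/install/test steps for ${{ matrix.app }} go here"
```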
Deployment per App
```yaml
# .github/workflows/deploy-web.yml
name: Deploy Web
on:
  push:
    branches: [main]
    paths:
      - 'apps/web/**'
      - 'packages/**' # Shared packages also trigger deploy

jobs:
  deploy-web:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v3
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'pnpm'
      - run: pnpm install --frozen-lockfile
      - name: Build web app
        run: pnpm turbo build --filter=@myapp/web
        env:
          TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
          TURBO_TEAM: ${{ vars.TURBO_TEAM }}
          NEXT_PUBLIC_API_URL: ${{ vars.NEXT_PUBLIC_API_URL }}
      - name: Deploy to Vercel
        uses: amondnet/vercel-action@v25
        with:
          vercel-token: ${{ secrets.VERCEL_TOKEN }}
          vercel-org-id: ${{ secrets.VERCEL_ORG_ID }}
          vercel-project-id: ${{ secrets.VERCEL_PROJECT_ID_WEB }}
          working-directory: apps/web
```
The paths filter on the workflow trigger is important. Without it, every push to main deploys every app — including apps that didn't change. With paths, the web deploy only triggers when apps/web/** or packages/** change.
Secrets Management
```yaml
# Environment-specific secrets in GitHub Actions:
# store per-environment secrets in GitHub Environment configs
jobs:
  deploy-production:
    runs-on: ubuntu-latest
    environment: production # Uses 'production' environment secrets
    steps:
      - name: Deploy
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }} # Scoped to 'production'
          API_KEY: ${{ secrets.API_KEY }}
        run: ./scripts/deploy.sh # placeholder for your deploy command
```

```yaml
# .github/workflows/ci.yml — use repository secrets for CI
env:
  TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
  # Non-sensitive values can live in repository variables (visible in logs)
  TURBO_TEAM: ${{ vars.TURBO_TEAM }}
```
GitHub Environments let you scope secrets to deployment environments. secrets.DATABASE_URL in a production environment is different from the same secret in staging. This prevents accidentally deploying staging credentials to production.
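Environments also enable protection rules (required reviewers, wait timers), configured in the repository's Settings → Environments UI. On the workflow side, the job simply names the environment; a sketch, with a placeholder URL:

```yaml
jobs:
  deploy-production:
    runs-on: ubuntu-latest
    environment:
      name: production # the job pauses here if a required-reviewer rule is set
      url: https://example.com # shown on the workflow run page (placeholder)
    steps:
      - run: echo "deploying"
```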
Docker with turbo prune
```bash
# turbo prune: create a minimal workspace for a single app,
# containing only the app and its internal package dependencies.
# (Turborepo 2.x syntax is positional: npx turbo prune @myapp/web --docker)
npx turbo prune --scope=@myapp/web --docker

# Creates:
# out/
# ├── full/ ← full source for web + its deps only
# └── json/ ← package.json files only (for the dep-install layer)
```

```dockerfile
# Dockerfile for apps/web
# Assumes Next.js with `output: 'standalone'` in next.config.js,
# which emits a self-contained server.js under .next/standalone
FROM node:20-alpine AS base
RUN npm install -g pnpm

# Prune: reduce the workspace to only the packages this app needs
FROM base AS pruner
WORKDIR /app
COPY . .
RUN npx turbo prune --scope=@myapp/web --docker

# Install: dependency layer built from package.json files only,
# so it is invalidated only when dependencies change
FROM base AS installer
WORKDIR /app
COPY --from=pruner /app/out/json/ .
# Note: older Turborepo versions emit the lockfile at out/pnpm-lock.yaml
# instead of inside out/json/ — copy it separately if needed
RUN pnpm install --frozen-lockfile

# Build
FROM base AS builder
WORKDIR /app
COPY --from=installer /app/node_modules ./node_modules
COPY --from=pruner /app/out/full/ .
RUN pnpm turbo build --filter=@myapp/web

# Production image: copy only the standalone server output
FROM node:20-alpine AS runner
WORKDIR /app
COPY --from=builder /app/apps/web/.next/standalone ./
COPY --from=builder /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=builder /app/apps/web/public ./apps/web/public
EXPOSE 3000
CMD ["node", "apps/web/server.js"]
```
turbo prune creates a minimal workspace containing only the target app and its transitive package dependencies. Without pruning, a Docker build for apps/web would install all dependencies from all packages in the monorepo — most of which apps/web doesn't need. Pruning reduces Docker image size and build time.
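One operational detail worth calling out: the image has to be built from the monorepo root, so the pruner stage's `COPY . .` can see the whole workspace:

```bash
# Run from the repo root; the Dockerfile lives next to the app
docker build -f apps/web/Dockerfile -t myapp-web .
```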
Common CI/CD Mistakes in Monorepos
Monorepo CI is complicated enough that teams frequently make the same mistakes. These are the patterns that most reliably cause slow, flaky, or broken pipelines.
Mistake 1: Using fetch-depth: 0 (or 1) instead of 2. The --filter=[HEAD^1] syntax compares HEAD against HEAD^1 — the previous commit. If you only fetch a depth of 1, HEAD^1 doesn't exist and Turborepo falls back to running everything. Always use fetch-depth: 2 in your checkout action for affected-package filtering to work.
Mistake 2: Exposing TURBO_TOKEN instead of storing it as a secret. Teams sometimes add TURBO_TOKEN as a plain environment variable at the job level rather than a GitHub secret. This exposes the token in the workflow file and in run logs. Always use ${{ secrets.TURBO_TOKEN }} for credentials.
Mistake 3: Skipping concurrency groups. Without a concurrency block, pushing two commits in quick succession to the same branch runs CI for both — and the first run's results are stale before it even finishes. The cancel-in-progress: true setting ensures only the latest commit's run survives.
Mistake 4: Running turbo without --filter on the main branch. Some teams use --filter=[HEAD^1] on PRs but run everything on main pushes to ensure nothing is missed. This is reasonable but means main branch CI is slower. A better approach: use --filter=[origin/main...HEAD] to compare against the merge base rather than just the previous commit, which catches all changes in a PR even when commits are squash-merged.
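A sketch of that merge-base variant — note it needs more git history than fetch-depth: 2, since origin/main and the merge base must exist locally:

```yaml
- uses: actions/checkout@v4
  with:
    fetch-depth: 0 # full history for all branches, so the merge base exists
- name: Test everything changed on this branch
  run: pnpm turbo test --filter=[origin/main...HEAD]
```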
Mistake 5: Missing packages/** in deploy path triggers. If your deploy workflow only watches apps/web/** and ignores packages/**, a change to a shared utility package won't trigger a redeploy of the apps that depend on it. Always include packages/** in the paths filter for any app that has package dependencies.
Mistake 6: Storing secrets as GitHub repository variables instead of secrets. GitHub has both vars (visible in logs) and secrets (masked). TURBO_TEAM is a non-sensitive slug and correctly uses vars. TURBO_TOKEN, DATABASE_URL, and VERCEL_TOKEN should always use secrets. Mixing them up exposes credentials in workflow run logs.
Nx Alternative Configuration
Turborepo isn't the only option. Nx is a full-featured monorepo tool from Nrwl with a different philosophy: more prescriptive project structure, built-in code generation, and a richer plugin ecosystem. The CI configuration for Nx is similar in principle but different in syntax.
```yaml
# .github/workflows/ci-nx.yml — Nx affected builds
name: CI (Nx)
on:
  pull_request:
    branches: [main]

jobs:
  main:
    runs-on: ubuntu-latest
    env:
      # Nx Cloud remote cache (analogous to Turbo remote cache).
      # Job-level so every nx step below can read and write the cache.
      NX_CLOUD_ACCESS_TOKEN: ${{ secrets.NX_CLOUD_ACCESS_TOKEN }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Nx needs full history for base detection
      - uses: pnpm/action-setup@v3
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'pnpm'
      - run: pnpm install --frozen-lockfile
      - name: Set SHAs for affected
        uses: nrwl/nx-set-shas@v4
        # Sets NX_BASE and NX_HEAD env vars for affected detection
      - name: Lint affected
        run: pnpm exec nx affected --target=lint --base=$NX_BASE --head=$NX_HEAD
      - name: Test affected
        run: pnpm exec nx affected --target=test --base=$NX_BASE --head=$NX_HEAD --parallel=3
      - name: Build affected
        run: pnpm exec nx affected --target=build --base=$NX_BASE --head=$NX_HEAD
```
Nx's affected command uses a similar graph-based approach to Turborepo's --filter=[HEAD^1] but relies on full git history (fetch-depth: 0) and the nrwl/nx-set-shas action to determine the correct base commit. Nx Cloud is the remote cache equivalent of Vercel Remote Cache for Turborepo.
The choice between Turborepo and Nx often comes down to how much scaffolding you want. Turborepo is configuration-light and gets out of your way; Nx provides more structure, code generators, and plugins at the cost of more initial learning. For teams starting from scratch, Turborepo is faster to get running. For larger teams that value consistency and code generation, Nx's stronger conventions pay dividends.
Performance Optimization: Cutting CI Time Further
Once you have the basics running with remote cache and affected filtering, there are several additional optimizations that can push CI times down further for large monorepos.
Use larger runners for heavy builds. GitHub's default ubuntu-latest runner has 2 CPU cores for private repositories (public repositories get 4). For TypeScript-compilation-heavy builds, a 4-core larger runner (labels like ubuntu-latest-4-cores, configured in your organization's runner settings) cuts build time roughly in half. Larger runners bill at a proportionally higher per-minute rate — measure whether the time savings justify the cost for your CI volume.
Shard test runs with Vitest. For large test suites, Vitest's built-in sharding splits tests across multiple parallel runners:
```yaml
jobs:
  test:
    strategy:
      matrix:
        shard: [1, 2, 3, 4]
    steps:
      # ...checkout / setup / install steps as in the workflows above...
      - name: Run tests (shard ${{ matrix.shard }}/4)
        run: pnpm vitest run --shard=${{ matrix.shard }}/4
```
Each shard runs independently in parallel: a suite that takes 8 minutes on one runner finishes in roughly 2 minutes across four shards, plus per-job setup overhead.
Cache node_modules separately from Turborepo cache. The pnpm action with cache: 'pnpm' caches the pnpm store (downloaded packages). This helps, but pnpm install --frozen-lockfile still takes 20–40 seconds even with a warm cache because it links packages. For very large monorepos, caching the full node_modules directory using actions/cache with a lockfile hash key can shave 30+ seconds off every CI run.
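A sketch of that node_modules cache, keyed on the lockfile hash. One caveat: pnpm's default layout symlinks packages into a content-addressed store, so restoring node_modules alone may not be sufficient — teams often pair this with node-linker=hoisted or also cache the store directory:

```yaml
- uses: actions/cache@v4
  with:
    path: |
      node_modules
      apps/*/node_modules
      packages/*/node_modules
    key: modules-${{ runner.os }}-${{ hashFiles('pnpm-lock.yaml') }}
- run: pnpm install --frozen-lockfile # near-instant on a cache hit
```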
Limit matrix parallelism. By default, GitHub Actions runs all matrix jobs simultaneously up to the account's concurrency limit. If you have 20 apps in a matrix, you might queue 20 jobs simultaneously and exhaust concurrency limits, causing waits for other PRs. Use max-parallel to cap concurrent jobs:
```yaml
strategy:
  max-parallel: 3 # Never more than 3 simultaneous runners
  matrix:
    app: [web, api, admin, dashboard]
```
Environment Variable Strategy for Monorepos
Monorepos with multiple apps face a unique challenge: different apps need different environment variables, and some variables are shared across apps. A consistent strategy prevents the common failure mode of an app deploying with wrong or missing environment variables.
The recommended layered approach:
Level 1: Repo-level variables. Variables shared by all apps in the CI pipeline — Turborepo credentials, shared API keys, and CI-only configuration — live as GitHub repository secrets and variables. These are available to every workflow without configuration.
Level 2: Environment-scoped variables. Per-environment configuration (staging DATABASE_URL vs. production DATABASE_URL) lives in GitHub Environments. Your deploy workflow specifies environment: production to get production secrets. This prevents staging secrets from being used in production deploys and enables environment protection rules (e.g., require approval for production deploys).
Level 3: App-specific variables. Variables unique to one app (its Vercel project ID, its specific API endpoint) live in the individual deploy workflow file as vars or as app-specific environment secrets. Use a naming convention to avoid collisions: VERCEL_PROJECT_ID_WEB, VERCEL_PROJECT_ID_API, VERCEL_PROJECT_ID_ADMIN.
```yaml
# Example: production deploy with the full variable strategy
jobs:
  deploy-web:
    environment: production # Level 2: env-scoped secrets
    runs-on: ubuntu-latest
    env:
      TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }} # Level 1: repo-wide
      TURBO_TEAM: ${{ vars.TURBO_TEAM }}      # Level 1: repo-wide
    steps:
      - name: Deploy
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}            # Level 2: env-scoped
          VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}            # Level 2: env-scoped
          VERCEL_PROJECT_ID: ${{ vars.VERCEL_PROJECT_ID_WEB }} # Level 3: app-specific
        run: ./scripts/deploy-web.sh
```
Document which variables each app needs and where they're sourced. A missing variable in a deploy is a common failure that's difficult to diagnose because the app deploys successfully but behaves incorrectly at runtime. An .env.example file per app, kept in sync with deployment documentation, prevents this.
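One lightweight way to keep that documentation honest is a checked-in `.env.example` per app (the variable names here are illustrative):

```bash
# apps/web/.env.example — every variable apps/web needs at runtime.
# Copy to .env.local for local development; CI and deploys source the
# real values from GitHub secrets/variables instead.
NEXT_PUBLIC_API_URL=   # public; differs per environment (repo/env variable)
DATABASE_URL=          # secret; from the GitHub Environment
```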
Compare monorepo tooling on PkgPulse. Also see Vitest vs Jest for test configuration and Playwright vs Cypress for E2E tests in your CI pipeline.
Related: Best Monorepo Tools in 2026: Turborepo vs Nx vs Moon.