
Dependency Management for Monorepos 2026

PkgPulse Team

TL;DR

Monorepo dependency management in 2026 centers on pnpm workspaces: 60-80% disk reduction, 3-5x faster installs, and hard enforcement of explicit dependency declarations. Add Turborepo for task caching and Changesets for coordinated versioning. The result is a monorepo setup that scales from 5 to 500 packages without becoming a maintenance nightmare.

Key Takeaways

  • pnpm's workspace:* protocol prevents version drift between internal packages — packages always resolve to local workspace copies
  • pnpm eliminates "phantom dependencies" by enforcing strict access rules: packages can only import what they explicitly declare
  • Turborepo's content-hash caching skips tasks whose inputs haven't changed — test caching alone saves hours in large repos
  • Changesets tracks which packages need version bumps and generates coordinated changelogs across all affected packages
  • Bun workspaces are faster than pnpm for small monorepos but lack pnpm's strict hoisting controls for large setups

The Monorepo Dependency Problem

Monorepos bring packages together into a single repository. That's straightforward for small projects. At scale, dependency management becomes the primary pain point:

The node_modules explosion. Traditional monorepo setups install dependencies per-package, resulting in gigabytes of duplicated modules. A monorepo with 30 packages, each depending on React and TypeScript, installs them 30 times.

Phantom dependencies. In hoisted node_modules setups (the npm/Yarn classic behavior), a package can require a module that's only available because another package in the monorepo happens to use it. This works locally but breaks when the package is published — the transitive dependency isn't in its own package.json.

Version drift. Without explicit tooling, different packages in a monorepo drift to different versions of shared dependencies. One package uses zod@3.21, another uses zod@3.23. CI works, but the behavior is inconsistent.

Coordinated versioning. When you change a shared utility library, which downstream packages need version bumps? Without tooling, you find out at publish time — or worse, when a consumer breaks.


pnpm Workspaces: The Standard Solution

pnpm solves the first three problems outright through its content-addressable storage model and strict workspace protocol. The fourth, coordinated versioning, is handled by Changesets, covered below.

Setting Up pnpm Workspaces

Create a pnpm-workspace.yaml at the root:

packages:
  - 'packages/*'
  - 'apps/*'

Then define your packages in subdirectories, each with their own package.json. Run pnpm install from the root — pnpm installs all packages at once, deduplicating across the workspace.
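A minimal layout matching that configuration might look like this (package names are illustrative):

```text
.
├── pnpm-workspace.yaml
├── package.json          # root: shared devDependencies and scripts
├── packages/
│   ├── utils/
│   │   └── package.json  # "name": "@myapp/utils"
│   └── types/
│       └── package.json  # "name": "@myapp/types"
└── apps/
    └── web/
        └── package.json  # "name": "@myapp/web"
```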

The workspace: Protocol

The workspace:* version specifier is the key to preventing version drift:

{
  "dependencies": {
    "@myapp/utils": "workspace:*",
    "@myapp/types": "workspace:^1.0.0"
  }
}

workspace:* resolves exclusively to the local workspace package — never to a published npm version. This means:

  • You're always testing with the real local implementation
  • If the local package doesn't exist, the install fails loudly rather than silently falling back to npm
  • At publish time, pnpm replaces workspace:* with the actual current version

For packages where you want to pin to a specific version range (useful for packages that are also published independently), use workspace:^1.0.0.
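For illustration, here is the publish-time rewrite, assuming a hypothetical @myapp/utils currently at version 1.4.2 in the workspace:

```jsonc
// In the repository: always resolves to the local workspace copy
{ "dependencies": { "@myapp/utils": "workspace:*" } }

// In the published tarball's package.json: pnpm substitutes the real version
{ "dependencies": { "@myapp/utils": "1.4.2" } }
```

With workspace:^1.0.0, the published specifier keeps the range prefix (^1.4.2) instead of pinning exactly.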

Eliminating Phantom Dependencies

pnpm's default behavior (shamefully-hoist=false) creates a non-flat node_modules structure. Each package's node_modules contains only the packages it explicitly declares, plus their dependencies. Packages cannot import anything not in their own package.json.

This means phantom dependencies are discovered during development, not in production. If your code calls require('lodash') but lodash isn't in your package.json, you get an immediate module-not-found error.

Some legacy packages break with strict hoisting. For those, .npmrc provides escape hatches:

# For known-broken packages that need traditional hoisting
public-hoist-pattern[]=*types*
public-hoist-pattern[]=*eslint*

Performance

pnpm 10.x benchmarks (clean install, 50 dependencies):

  • Cold install: ~4.2s
  • Warm install (cached): ~755ms

The warm install speed comes from pnpm's global content-addressable store: packages are stored once by hash, then linked into each project. A package used by 20 projects isn't stored 20 times — it's stored once and hard-linked.

For an in-depth comparison of pnpm against npm and Yarn on performance metrics, see pnpm 10 vs npm 11 vs Yarn 4 2026.


Turborepo: Task-Level Caching

pnpm manages package dependencies. Turborepo manages task dependencies: which packages need to build when you change something, and which builds can be skipped because nothing changed.

Basic Turborepo Configuration

{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "test": {
      "dependsOn": ["build"],
      "outputs": [],
      "cache": true
    },
    "lint": {
      "outputs": []
    }
  }
}

"dependsOn": ["^build"] means: run all package build tasks before this one. This ensures that when building app-a which depends on lib-shared, lib-shared builds first.

Content-Hash Caching

Turborepo hashes the inputs to each task: source files, dependency versions, environment variables, and task configuration. If the hash matches a previous run, Turborepo replays the cached output.

In a monorepo with 30 packages, changing one package triggers only the affected downstream rebuilds. The other 29 packages return cached results instantly.

For remote caching, Turborepo can store cache artifacts in S3 or Vercel's remote cache — enabling cache sharing across developer machines and CI runs.

For a full guide on Turborepo setup, see How to Set Up a Monorepo with Turborepo 2026.


Changesets: Coordinated Versioning

Changesets manages the versioning problem: when packages change, what should be bumped, and how should changelogs be generated?

The Changeset Workflow

  1. Developer makes changes to packages/utils
  2. Developer runs pnpm changeset — interactive prompt asks which packages were changed and what kind of change (major/minor/patch)
  3. A changeset file is created in .changeset/ and committed with the PR
  4. On merge to main, the automated changeset version command updates all affected package versions and generates changelogs
  5. changeset publish publishes only the packages that have version bumps

This workflow handles the transitive bump problem: if lib-utils gets a minor bump, and app-dashboard depends on lib-utils, Changesets automatically bumps app-dashboard to reflect the dependency change.
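A changeset file is just markdown with YAML front matter. A minimal example (package name and message are illustrative):

```markdown
---
"@myapp/utils": minor
---

Add a date-formatting helper to the public API.
```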

CI Integration

# .github/workflows/release.yml
- name: Create Release PR or Publish
  uses: changesets/action@v1
  with:
    publish: pnpm changeset publish
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

The Changesets GitHub Action maintains a "Version Packages" PR that accumulates all pending changesets. Merging this PR triggers the actual version bumps and npm publishes.

For the full package publication workflow including npm provenance and access tokens, see Publishing an npm Package: Complete Guide 2026.


Handling Peer Dependencies

Peer dependencies are a common source of confusion in monorepos. When multiple packages depend on a peer (like React), you want the same React instance throughout.

pnpm handles this via the peerDependencyRules setting, declared under the pnpm key in the root package.json:

{
  "pnpm": {
    "peerDependencyRules": {
      "ignoreMissing": ["@types/*"],
      "allowAny": ["react"]
    }
  }
}

For packages that truly need to share a singleton (React, Zustand stores), ensure they're all listed as peerDependencies and resolved from the workspace root.
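For example, a hypothetical shared UI package would declare React as a peer (and as a devDependency for its own tests), leaving the consuming app to provide the single real instance:

```json
{
  "name": "@myapp/ui",
  "peerDependencies": {
    "react": "^18.0.0"
  },
  "devDependencies": {
    "react": "^18.3.0"
  }
}
```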


Migrating from npm/Yarn to pnpm Workspaces

Common migration pain points and solutions:

Removing node_modules everywhere:

find . -name 'node_modules' -type d -prune -exec rm -rf '{}' + && pnpm install

Handling packages that break with strict hoisting: Add them to public-hoist-pattern in .npmrc. Most failures are Jest configuration files and ESLint plugins that rely on global resolution.

Converting Yarn workspace references: Yarn's workspace: protocol is identical to pnpm's — no changes needed if you're using modern Yarn. If upgrading from Yarn classic, replace * version pins with workspace:*.

For a broader comparison of workspace support across package managers, see Best npm Workspaces Alternatives 2026.


The Recommended Stack

For a new monorepo in 2026:

| Tool | Role | Version |
| --- | --- | --- |
| pnpm | Package manager + workspaces | 10.x |
| Turborepo | Task orchestration + caching | 2.x |
| Changesets | Versioning + changelogs | 2.x |
| TypeScript project references | Type-safe cross-package imports | 5.x |

This stack handles most monorepo dependency management challenges without requiring custom scripts or complex configuration.


Advanced pnpm Workspace Patterns

Version Catalogs

pnpm 9+ introduced the concept of version catalogs — a centralized place to define shared dependency versions across your workspace. This solves the problem of different packages specifying different versions of the same shared dependency.

In pnpm-workspace.yaml:

packages:
  - 'packages/*'
  - 'apps/*'

catalog:
  react: ^18.3.0
  react-dom: ^18.3.0
  typescript: ^5.4.0
  vitest: ^1.5.0

Then in individual package.json files:

{
  "dependencies": {
    "react": "catalog:",
    "react-dom": "catalog:"
  }
}

The catalog: specifier resolves to whatever version is defined in the workspace catalog. Update React across all packages by changing one line in pnpm-workspace.yaml.

Filtering for Targeted Operations

pnpm's --filter flag is critical for large monorepos where running commands across all packages is expensive:

# Run tests only for packages affected by recent changes
pnpm --filter "...[origin/main]" test

# Run build for a specific package and all its dependencies
pnpm --filter "@myapp/web..." build

# Run a command for all packages matching a pattern
pnpm --filter "@myapp/*" lint

The ... suffix means "and all dependencies" — essential for running builds in the right order without manually tracking the dependency graph.

Deduplicate Dependencies

Over time, monorepos accumulate multiple versions of the same package through independent package.json updates. pnpm provides a built-in deduplication command:

pnpm dedupe

This collapses duplicate packages to the highest compatible version across the workspace, reducing install size and improving consistency.
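To keep the workspace from regressing, recent pnpm versions also offer a check-only mode that exits non-zero instead of writing changes — useful as a CI gate (verify the flag against your pnpm version):

```shell
# Fail the build if the lockfile could be further deduplicated
pnpm dedupe --check
```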


Continuous Integration Patterns

Efficient CI for monorepos avoids running everything on every commit. The key is affected package detection:

# GitHub Actions with affected package detection
- name: Run affected tests
  run: pnpm --filter "...[HEAD~1]" test

# Or with Turborepo's affected detection
- name: Turbo build affected
  run: pnpm turbo build --filter="...[HEAD~1]"

For pull requests, base the filter on the target branch:

pnpm --filter "...[origin/${{ github.base_ref }}]" test

This ensures CI runs only the tests that matter for the changeset, rather than running all 300 packages on every PR.

The combination of affected detection + Turborepo's remote cache means that in a 50-package monorepo, a PR touching 3 packages runs tests for those 3 packages plus their dependents — skipping the other 40+ packages whose test results are already cached.


TypeScript Project References

A monorepo where packages share code also needs shared types to work correctly across packages. TypeScript project references enable incremental type-checking and cross-package Go-to-Definition in IDEs.

In each package's tsconfig.json, add references to its imported packages:

{
  "compilerOptions": {
    "composite": true,
    "declarationDir": "./dist"
  },
  "references": [
    { "path": "../utils" },
    { "path": "../types" }
  ]
}

The composite: true setting enables incremental compilation — TypeScript tracks which packages' definitions changed and rebuilds only affected packages. With project references, Go-to-Definition works across packages, taking you to source files rather than compiled .d.ts files.
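Project references are compiled with TypeScript's build mode rather than a plain tsc invocation:

```shell
# Build the reference graph incrementally from the repo root;
# --verbose reports which projects were rebuilt vs. already up to date
tsc --build --verbose
```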

This makes a significant difference to the day-to-day development experience in a large monorepo: navigating between packages feels seamless rather than opaque.


When Monorepos Aren't Right

Monorepos solve real problems but create new ones. They work well when multiple packages share significant code or types, when you want atomic commits across packages, or when coordinated releases matter. They're harder when different packages have incompatible technology stacks, when teams need complete autonomy, or when the repository grows to thousands of packages.

Many successful setups use a hybrid: shared utilities in a monorepo, product applications as separate repositories that consume the utilities as published packages. This preserves shared-code benefits without forcing every team onto the same monorepo tooling.


The Hidden Cost of Dependency Sprawl

There is a moment in every growing monorepo's lifecycle where someone asks a simple question: "How many dependencies do we actually have?" The answer is usually alarming. Large monorepos accumulate dependencies through a slow process of incremental, well-intentioned decisions. An engineer needs date formatting, so they add date-fns. Another needs a color picker component, which pulls in three UI sub-libraries. A third adds a validation library when the existing one "isn't quite right." Each individual addition is defensible. The cumulative result is a package.json with 80 direct dependencies, hundreds of transitive dependencies, and a lockfile measured in tens of thousands of lines.

This pattern is so common it has a name: dependency sprawl. It emerges not from reckless behavior but from the "just add a package" culture that permeates JavaScript development. npm's frictionless install model — a single command and you have a new capability — makes it psychologically easy to reach for an external library rather than writing even a small utility function yourself. The marginal cost of any single addition seems negligible. Over months and years, the aggregate cost becomes very real.

Consider what 80 top-level dependencies actually means in practice. Bundle size inflation is the most visible consequence: every dependency that ships runtime code contributes to your final bundle, and tree-shaking only helps when packages are properly structured for it. A package imported for one utility function may drag along far more than you expect. Security surface area grows proportionally with dependency count — each package is a potential vector for a supply chain attack, and auditing hundreds of transitive dependencies is not something any team does thoroughly. CI pipeline time creeps up because npm install with a large, fragile lockfile takes longer, lockfile merge conflicts happen more frequently, and fresh installs on new CI runners or machines take minutes instead of seconds.

The subtler costs are the ones that compound most dangerously. Duplicate versions of the same library across workspaces — one package using zod@3.21 while another uses zod@3.23 — introduce inconsistent behavior that's difficult to diagnose. Transitive dependency conflicts, where two packages require incompatible versions of a shared dependency, can require hours of version resolution work. The node_modules directory becomes an archaeological dig site where no one is certain which packages are actually needed.

Successful engineering teams treat dependency management as an ongoing maintenance discipline rather than a one-time setup task. A quarterly dependency audit is the minimum recommended cadence. The audit has two goals: identifying packages that can be removed because they are unused or redundant, and identifying packages where a single library can replace several. Tools like depcheck automate the unused dependency detection — they analyze your codebase's actual imports and flag anything in package.json that nothing imports. The results are consistently surprising: most mature codebases have between 15% and 30% of their declared dependencies going entirely unused.
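Running such an audit is a one-liner; depcheck infers imports from your source and flags declared-but-unreferenced dependencies. Run it per workspace package, since it analyzes one package.json at a time:

```shell
# From a package directory: report unused and missing dependencies
npx depcheck
```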

The cultural shift required is to treat every new dependency as a liability that must justify its cost, not a free addition to your toolkit. This means asking, before installing: Does this package solve a problem we couldn't solve in fifty lines of code? Does it have a realistic maintenance trajectory? Is it worth the ongoing cost in security monitoring, version management, and bundle impact? This is not a case for reinventing wheels — npm's ecosystem is genuinely valuable. It is a case for treating that value as finite and spending it deliberately.


Dependency Update Strategies: Automated vs. Manual

The update strategy question sits at the intersection of security and stability. Left unaddressed, it defaults to the worst possible answer: dependencies never update, vulnerabilities accumulate, and eventually someone upgrades eight months of changes in a single afternoon — discovering three breaking changes and a security patch all at once. A deliberate strategy avoids this scenario by making updates routine rather than exceptional.

The spectrum runs from "never update manually" at one extreme to "automatically merge every available update" at the other. Neither extreme is right. The manual-only approach creates the accumulation problem described above and makes security response slow. The auto-merge-everything approach means that a malicious patch-version publication — a real and documented attack vector — could land in your production codebase without human review. The right answer lies in the middle, and the tooling has matured to make that middle ground practical.

Renovate Bot and Dependabot are the two dominant automated dependency update tools, and they represent different philosophies. Dependabot is simpler and GitHub-native: it scans your package.json and lockfile, identifies available updates, and opens a PR for each one. Configuration is minimal. For small projects with few dependencies, this works well. The problem at scale is PR volume: a monorepo with fifty packages and two hundred dependencies can generate dozens of Dependabot PRs per week, creating noise rather than signal.

Renovate's strength is grouping and scheduling. You can configure Renovate to collect all patch-level updates into a single weekly PR rather than one PR per package, to group all React ecosystem updates together, and to require manual approval for major version bumps. This dramatically reduces PR volume while keeping the security update cadence high. Renovate also understands monorepos natively — it can update the same package across all workspace packages in one coordinated change rather than creating separate PRs per workspace.

The semantic versioning dimension matters enormously for automation policy. Patch updates — bug fixes and security patches — are the lowest risk category and the most time-sensitive for security. A reasonable policy is to auto-merge patch updates after CI passes, without human review. This keeps security patches flowing without creating review burden. Minor updates introduce new features but should not break existing behavior. These warrant a review but not necessarily deep scrutiny — read the changelog, verify CI passes, merge. Major updates are explicitly breaking changes. They should require a dedicated review, explicit testing, and a planned merge window rather than opportunistic merging.
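That policy maps directly onto a Renovate configuration. A sketch using Renovate's documented options (adjust the matchers to your own risk tolerance):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch"],
      "automerge": true
    },
    {
      "matchUpdateTypes": ["major"],
      "dependencyDashboardApproval": true
    }
  ]
}
```

Patch updates merge automatically once CI passes; major updates wait for explicit approval on the dependency dashboard; minors land in reviewable PRs by default.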

The concept of a "merge freeze" is worth incorporating for releases. In the week before a planned release, halt all non-critical dependency updates. A dependency update that introduces unexpected behavior one day before release is one of the most disruptive debugging scenarios a team can face. Many Renovate configurations include a built-in freeze schedule that respects release windows automatically.

One practice to avoid categorically is the "update all dependencies" mega-PR. The appeal is obvious — clear the backlog in one fell swoop. The problem is blast radius: if anything breaks, you're debugging every dependency that changed simultaneously rather than isolating a single update. Automated tools are valuable precisely because they create small, targeted, individually-verifiable changes rather than one large undifferentiated update.


Methodology

This article draws on:

  • pnpm 10.x documentation and benchmark results (pnpm.io/benchmarks)
  • Turborepo 2.x documentation and caching strategy guides
  • Changesets documentation (changesets/changesets on GitHub)
  • Community monorepo guides from jsdev.space and nesbitt.io
  • pnpm workspace protocol specification
  • npm benchmarks from edbzn/package-manager-benchmarks GitHub repository
