Stop Installing Libraries You Don't Need 2026
TL;DR
The most dangerous command in JavaScript development is npm install. Every library you add is a security surface, a maintenance obligation, a potential breaking change, and extra bytes in your bundle. The question isn't "does this library solve my problem?" — it's "is the problem worth the cost of a dependency?" Most libraries added to projects are used for 5-10% of their API, provide one function you could write in 20 lines, or could be replaced by a built-in browser/Node.js API. Here's how to make the call.
Key Takeaways
- Every dependency costs: bundle size + security surface + update overhead + breaking change risk
- The 80/20 rule: most packages are used for 20% of their API (or less)
- The "Could I write this in 30 minutes?" test — if yes, consider writing it yourself
- Native alternatives exist for many commonly-installed packages
- The packages worth installing: complex algorithms, battle-tested security, rich APIs you actually use
The Real Cost of a Dependency
Installing lodash costs you:
1. Immediate costs:
→ ~25KB gzipped (71KB minified) added to your bundle (if not tree-shaken)
→ 350KB unpacked in node_modules
→ 5+ seconds added to CI install time
→ A larger dependency tree (lodash itself has zero dependencies, but many popular packages pull in dozens)
2. Ongoing costs:
→ Security monitoring: lodash has had 3 high CVEs in 5 years
→ Update maintenance: someone has to review lodash upgrades
→ Breaking changes: lodash v5 will have breaking changes
→ Lock-in: "we use lodash everywhere" makes future decisions harder
3. Opportunity cost:
→ Every dep is cognitive load for new developers
→ "Why do we use lodash here instead of Array.flat()?"
→ Onboarding slower, codebase harder to understand
4. The supply chain risk:
→ Lodash is trustworthy (large team, active)
→ But you install lodash → you implicitly trust all future lodash maintainers
→ Protestware incidents, account takeovers, and typosquatting attacks have all exploited this trust
The decision calculus:
Is what this library gives me worth all of the above?
For lodash: often no (modern JS covers 80% of it)
For TanStack Query: yes (replaces 500 lines of custom fetching logic)
For Zod: yes (type-safe validation is complex to build well)
The Test Before Every npm install
# Ask these 5 questions before installing:
# 1. Does a native browser/Node.js API do this?
# Examples:
# fetch() → replaces node-fetch
# crypto.randomUUID() → replaces uuid
# Array.prototype.flat() → replaces _.flatten
# new URL() → replaces url.parse()
# structuredClone() → replaces _.cloneDeep
# Optional chaining (?.) → replaces _.get(obj, 'path.to.value')
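The swaps above, sketched as runnable code (Node.js 18+ or a modern browser; the values are illustrative):

```typescript
// Native replacements for common utility imports (Node 18+ / modern browsers).

// _.flatten / _.flattenDeep → Array.prototype.flat()
const nested = [1, [2, [3, [4]]]];
const flatOnce = nested.flat();        // [1, 2, [3, [4]]]
const flatAll = nested.flat(Infinity); // [1, 2, 3, 4]

// _.cloneDeep → structuredClone()
const original = { user: { name: 'Ada', tags: ['admin'] } };
const copy = structuredClone(original);
copy.user.tags.push('editor'); // original.user.tags is untouched

// _.get(obj, 'server.port', 3000) → optional chaining + nullish coalescing
const config: { server?: { port?: number } } = {};
const port = config.server?.port ?? 3000; // 3000

// url.parse() → new URL()
const parsed = new URL('https://example.com/search?q=test');
const query = parsed.searchParams.get('q'); // 'test'
```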
# 2. Can I write this in under 30 minutes?
# If yes, write it. Own the code. No external dependencies.
#
# "I need to debounce a function"
function debounce<T extends (...args: any[]) => unknown>(
fn: T,
delay: number
): (...args: Parameters<T>) => void {
let timer: ReturnType<typeof setTimeout>;
return (...args) => {
clearTimeout(timer);
timer = setTimeout(() => fn(...args), delay);
};
}
# Ten lines. No lodash. No package. No security surface.
# 3. Is this problem complex enough to justify someone else's solution?
# YES: cryptography, parsing algorithms, UI components with accessibility
# NO: basic array operations, simple string manipulation, one-off utilities
# 4. What's the package's health/activity?
npm view package-name | grep -E "latest|maintainers|license"
# Last published > 2 years ago: risk
# 1 maintainer, no recent activity: risk
# 0 downloads growth: ecosystem may be abandoning it
# 5. What does it actually cost?
# Check the gzipped size + dependency count on bundlephobia.com
# If it's 1KB and does 50+ things: install it (nanoid, ms, clsx)
# If it's 300KB and you use 1 function: don't (moment for formatting 1 date)
Packages Commonly Installed But Often Unnecessary
// Case 1: axios (11KB gzipped)
// "I need to make HTTP requests"
import axios from 'axios';
const { data } = await axios.get('/api/users');
// Native alternative:
const response = await fetch('/api/users');
if (!response.ok) throw new Error(`HTTP ${response.status}`); // fetch only rejects on network errors
const data = await response.json();
// When to still use axios:
// → Need request cancellation with AbortController? (fetch does this now)
// → Need interceptors for auth tokens? (fetch + wrapper does this)
// → Honestly: ky (2.5KB) is a better axios alternative in 2026
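For the common "axios for auth headers and error handling" case, a small fetch wrapper is often enough. A sketch under assumptions: `apiFetch` and the commented-out Authorization line are illustrative names, not a standard API.

```typescript
// A minimal fetch wrapper covering the two most common reasons teams
// reach for axios: default headers and throwing on non-2xx responses.
async function apiFetch<T>(
  path: string,
  init: RequestInit & { headers?: Record<string, string> } = {}
): Promise<T> {
  const response = await fetch(path, {
    ...init,
    headers: {
      'Content-Type': 'application/json',
      // Authorization: `Bearer ${yourTokenSource()}`, // hypothetical auth hook
      ...init.headers,
    },
  });
  // fetch only rejects on network failure; HTTP error statuses must be checked.
  if (!response.ok) {
    throw new Error(`HTTP ${response.status} for ${path}`);
  }
  return response.json() as Promise<T>;
}
```

Twenty lines buys you the 80% case; reach for ky or axios when you genuinely need retries, hooks, or upload progress.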
// Case 2: classnames/clsx
// "I need conditional CSS classes"
import clsx from 'clsx';
const className = clsx('base', isActive && 'active', { disabled: !enabled });
// Native alternative (for simple cases):
const className = ['base', isActive && 'active', !enabled && 'disabled']
.filter(Boolean).join(' ');
// When to use clsx: complex conditional classes (it IS tiny at 0.5KB)
// When to skip it: 2-3 simple conditions
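If you want the helper shape without the package, a minimal `cx` sketch covers the simple cases (no object or nested-array syntax; clsx still wins for those):

```typescript
// A five-line stand-in for clsx/classnames: strings in, falsy values out.
type ClassValue = string | false | null | undefined;

function cx(...values: ClassValue[]): string {
  return values.filter(Boolean).join(' ');
}

const isActive = true;
const enabled = false;
const className = cx('base', isActive && 'active', !enabled && 'disabled');
// → 'base active disabled'
```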
// Case 3: lodash for one function
import { debounce } from 'lodash';
// → Write debounce yourself (8 lines, shown above)
import { groupBy } from 'lodash';
const grouped = groupBy(items, 'category');
// → Object.groupBy is now built-in (Chrome 117+, Node 21+)!
import { cloneDeep } from 'lodash';
// → structuredClone() built-in
import { merge } from 'lodash'; // One of the few legit lodash use cases
// Deep merge is legitimately complex. This is worth it.
// Case 4: is-* packages
import isEmail from 'is-email'; // 2KB
// → const isEmail = (s: string) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(s);
// → Or use a validation library you already have (Zod, Yup)
import isUrl from 'is-url';
// → try { new URL(str); return true; } catch { return false; }
// 1 line. No package.
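Those two one-liners as owned code, with the caveat that the email regex is deliberately loose (it checks shape, not RFC 5322 compliance); for strict validation, use a schema library you already depend on.

```typescript
// is-email replacement: a loose shape check, not full RFC validation.
const isEmail = (s: string): boolean => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(s);

// is-url replacement: the URL constructor throws on invalid input.
const isUrl = (s: string): boolean => {
  try {
    new URL(s);
    return true;
  } catch {
    return false;
  }
};
```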
Packages Absolutely Worth Installing
Some problems are hard enough that a library is always the right answer:
Cryptography:
→ bcrypt (password hashing) — DON'T write your own
→ jose (JWT signing/verification) — crypto is easy to get wrong
→ argon2 — modern password hashing with proper defaults
Form validation with type safety:
→ Zod (14KB) — runtime + compile-time validation, complex to replicate
→ The schema inference magic is genuinely valuable
→ Writing your own would be 200+ lines for equivalent safety
Complex date handling (before Temporal):
→ date-fns (tree-shakeable, tiny when used properly)
→ The edge cases in date arithmetic are non-obvious
→ Timezones especially: don't write your own timezone handling
HTTP clients with retry logic + auth:
→ ky (2.5KB) — fetch wrapper with retries, timeout, hooks
→ Writing retry/timeout/abort correctly has many edge cases
Rich UI interactions:
→ framer-motion for complex animations (physics simulation)
→ Radix UI for accessible UI primitives
→ These solve accessibility problems that are legitimately complex
Database ORMs:
→ Prisma/Drizzle for type-safe queries
→ The type generation is worth the dependency
The pattern:
→ If the problem has security implications: use a library
→ If the problem has complex edge cases (dates, parsing): use a library
→ If the problem is something you've implemented twice with bugs: use a library
→ If the problem is "I need to call map() and filter()": don't use a library
The Audit: Removing Dependencies You Don't Need
# Find unused dependencies (these should always be removed):
npx depcheck
# Reports:
# Unused dependencies:
# * lodash
# * moment
# * old-package-we-forgot-about
# Find packages where you use <20% of the API:
# This requires manual code review, but grep helps:
# How many lodash functions do you actually use?
grep -rh "from 'lodash'" src/ | sort | uniq
# If result: 2-3 functions → inline them and remove lodash
# Find massive packages where you use one thing:
grep -r "import.*from 'moment'" src/ | wc -l
# If result: 3 imports → switch to dayjs or Temporal
# Check for native alternatives:
grep -r "from 'uuid'" src/ | head -5
# → Replace with crypto.randomUUID()
grep -r "node-fetch\|cross-fetch" src/
# → Remove, use native fetch (Node 18+)
# After removing:
npm uninstall lodash moment node-fetch uuid cross-fetch
npm install dayjs # If you actually need date manipulation
npm run build # See the bundle savings
The "Dependency Budget" Mental Model
Treat your dependencies like a budget you spend carefully:
Your budget: 50 production dependencies
(Average mature project has 20-50 direct production deps)
HIGH-VALUE spends (worth multiple "budget units"):
→ TanStack Query: replaces 500+ lines of custom caching logic
→ Zod: type-safe validation with TypeScript integration
→ Prisma/Drizzle: type-safe DB queries
→ Radix UI: accessible UI primitives
→ Stripe SDK: payment processing security
LOW-VALUE spends (often not worth it):
→ is-email, is-url, is-integer: 3 lines of code
→ lodash (for 1-2 functions): write them yourself
→ uuid (in Node 18+): crypto.randomUUID()
→ node-fetch (in Node 18+): native fetch
→ classnames: Array.filter().join()
ZERO-VALUE spends (always remove):
→ Packages you're not using (depcheck finds these)
→ Packages whose functionality is now native
→ Old polyfills for ES features you're now targeting natively
The goal: every dependency should be intentional.
Not "we needed it once" but "we use this throughout, it's worth the cost."
The Node.js 22 Built-In Checklist
Upgrading to Node.js 22 eliminates an entire category of utility packages that have accumulated in JavaScript codebases over years when the stdlib lagged behind developer needs. The savings are concrete.
HTTP clients: node-fetch and cross-fetch can be removed entirely. The native fetch API has been stable in Node.js since v21 and is fully performant for standard use cases including streaming responses, AbortController cancellation, and custom headers. axios is still worth keeping if you rely on interceptors or automatic retry behavior — but for straightforward request/response patterns, native fetch is sufficient and has no bundle cost.
Unique IDs: uuid for v4 random IDs is unnecessary. crypto.randomUUID() is built into Node.js 14.17+ and produces spec-compliant v4 UUIDs. One function, no package, no security surface, no version to maintain.
File system utilities: mkdirp is replaced by fs.mkdirSync(path, { recursive: true }), which has been available since Node.js 10.12. rimraf is replaced by fs.rmSync(path, { recursive: true, force: true }), available since Node.js 14.14. Both of these packages exist purely because the native APIs added the recursive option later — if you are on Node 22, you do not need either.
Glob patterns: fs.glob() landed in Node.js 22 as an experimental API. For simple file pattern matching it replaces the glob package in many use cases. It is still experimental, so judge the risk for production use, but for build scripts and tooling it is already useful.
Environment variable loading: the dotenv package can be removed for most use cases. Since Node.js 20.6, you can load .env files natively with the --env-file=.env flag. For more complex dotenv behavior (variable expansion, multiple files) the package is still useful, but for the common case of loading a single .env file, the Node.js native flag works.
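The file-system and UUID replacements above, sketched as one script. The temp-dir paths are illustrative; any writable path works.

```typescript
// Native replacements for mkdirp, rimraf, and uuid in one script.
import { mkdirSync, rmSync, existsSync } from 'node:fs';
import { randomUUID } from 'node:crypto';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// uuid (v4) → crypto.randomUUID() (Node 14.17+)
const id = randomUUID();

// mkdirp → fs.mkdirSync with { recursive: true } (Node 10.12+)
const dir = join(tmpdir(), 'native-demo', id);
mkdirSync(dir, { recursive: true }); // creates intermediate directories too

// rimraf → fs.rmSync with { recursive: true, force: true } (Node 14.14+)
rmSync(join(tmpdir(), 'native-demo'), { recursive: true, force: true });
```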
For browser-targeting code, the Web Platform API surface covers equivalent ground: Fetch, crypto.randomUUID(), URL, URLSearchParams, AbortController, and Web Streams are all available natively, eliminating an entire category of polyfill packages that were standard in pre-2022 codebases.
The Code Review Protocol for Dependencies
The easiest time to prevent a bad dependency from entering your codebase is the PR that first installs it. Once a package has been in your codebase for 18 months and accumulated 40 call sites, removing it is a multi-day project. The asymmetry is stark: 10 minutes of friction at install time versus days of refactoring later.
A workable team protocol: any PR that adds a new direct production dependency to package.json requires three things in the PR description. First, a brief justification explaining why existing dependencies, the Node.js stdlib, or platform APIs do not solve the problem. This does not need to be long — two sentences is enough — but it forces the author to verify that they checked alternatives. Second, a bundlephobia link showing the gzipped size of the package and its dependency count. Third, confirmation that the package is actively maintained: last release within 12 months, open issues with maintainer responses, and a look at the download trend.
This protocol works because it adds friction proportional to the decision weight. Adding a 50KB package with 15 transitive dependencies should feel heavier than a one-line utility — and requiring the above documentation creates that weight without being bureaucratic about it.
The audit side is equally important. A monthly review of npm ls --depth 0 with the team catches packages that nobody remembers adding, packages that have been superseded by other tools already in the codebase, and packages where the original use case no longer applies. depcheck automates the unused package portion: it identifies direct dependencies that have no import in the codebase. npm-why shows which package requires each transitive dependency, which helps when you want to understand whether a large transitive dependency has an alternative import path.
The compounding effect is real. A codebase that enforces this protocol for two years will have 30–40 direct production dependencies. A codebase that does not will have 80–100. The difference in install time, audit surface, upgrade burden, and new developer onboarding time is significant — and it grows every year.
The "npm install First, Code Later" Reflex and How to Break It
There is a deeply ingrained pattern in JavaScript development where the response to any new problem is to search for a package before writing a line of code. This is partly rational — npm has 2.5 million packages, and the probability that someone has already solved a common problem is high. But the reflex has become so automatic that it bypasses the evaluation step entirely. Developers install packages not because they've confirmed the package is the best solution, but because the package appeared first in a search result and installing it is faster than thinking.
The cost of this reflex is invisible at first and compounding over time. A utility installed in the first week of a project to solve a problem that three lines of native code would handle is trivial in isolation. Repeat that decision forty times over eighteen months and you have a package.json with eighty direct dependencies, half of which are used for one function or are now redundant with a built-in platform API. Removing them is no longer a quick task — it requires auditing import sites, verifying behavior equivalence, and coordinating the removal with a team that may have forgotten why the package was added in the first place.
Breaking this reflex requires making the evaluation step explicit rather than optional. The discipline of spending sixty seconds on four questions before running npm install — does a native API cover this, could I write it in thirty minutes, what is the gzipped size, when was it last published — creates just enough friction to catch the cases where a package isn't actually necessary. This is not about imposing bureaucracy. It's about making deliberate choices rather than reflexive ones. The cases where a package is clearly the right answer survive this evaluation trivially. The cases where it isn't get caught before they compound.
For teams, the most effective intervention is normalizing the "write it first, install if necessary" workflow. When a developer reaches for a package for a small utility, the team asks: is this something we should own? Owning code has real costs — maintenance, correctness, edge cases — but for simple, stable utilities those costs are low. A thirty-line debounce function that your team wrote, that lives in your codebase, that you can read and modify and unit test, has zero supply chain risk and zero version migration overhead. The tradeoff shifts in favor of the library only when the utility is complex, security-sensitive, or evolves frequently with the ecosystem.
Distinguishing Polyfill-Style Libraries from Capability Libraries
Not all dependencies carry equal long-term risk, and the category distinction matters for how you should plan your dependency lifecycle. Polyfill-style libraries exist to fill gaps in the platform API — they implement functionality that browser vendors or the Node.js team have not yet shipped natively. Capability libraries provide functionality that the platform will never offer natively — payment processing, authentication services, complex cryptography, specific data format parsers, or rich UI interactions.
Polyfill-style libraries have a predictable lifecycle: they become redundant. node-fetch filled the gap before native fetch arrived in Node.js 18. uuid filled the gap before crypto.randomUUID() was added. cross-env papered over the differences between Windows and POSIX shells when setting environment variables in npm scripts, a need that Node's --env-file flag now covers in many cases. Every polyfill-style library you install should come with a mental note: this has an expiration date. At some point, the platform catches up, and the library transitions from essential to dead weight. The smart practice is to track your polyfill dependencies separately and review them each time you upgrade Node.js or bump your browser compatibility targets. When the platform ships the native equivalent, the library should be removed within a sprint.
Capability libraries have a different profile: they're long-term dependencies that should be selected with more care precisely because you're committing to them for years. The Stripe JavaScript SDK, authentication libraries like next-auth, database ORMs like Prisma or Drizzle, and validation frameworks like Zod are all in this category. These libraries solve problems that the platform won't solve — processing payments, managing sessions, querying databases with type safety. Evaluating these with the same five-question framework still applies, but the additional question is: how stable is the API, and how actively maintained is it for the long haul? A capability library you adopt in a greenfield project will likely still be in your codebase five years later.
The strategic implication is that polyfill libraries should be treated as temporary scaffolding — installed with awareness that they'll be removed — while capability libraries should be treated as architecture decisions. The due diligence appropriate for each is different. Installing uuid as a polyfill deserves thirty seconds of consideration. Choosing your ORM or authentication system deserves an hour of evaluation across multiple alternatives. Many teams conflate these categories and apply either too much or too little scrutiny to each.
The Copy-Paste-With-Attribution Alternative
Between "write it from scratch" and "install a library" there is a third option that JavaScript developers rarely discuss explicitly: copy a specific implementation from a well-maintained open source library, paste it into your codebase, and maintain it yourself. This is not plagiarism — most npm packages use permissive licenses (MIT, ISC, Apache-2.0) that explicitly permit this. It is a legitimate engineering decision with a specific tradeoff profile that makes it the right choice in a narrow but real set of circumstances.
The circumstances where this makes sense: the utility is small (under 50 lines), stable (the algorithm doesn't change with ecosystem evolution), and used in only a few places in your codebase. A debounce implementation, a throttle function, a simple deep-equality check, a slug generator — these are candidates. The logic is established and well-tested in the source library. Copying it means you own the code entirely, with no version to upgrade, no security surface to monitor, and no dependency to audit. The license file in your repository should include an attribution comment in the copied code citing the source and license.
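A sketch of what a copied-in utility can look like. The attribution comment below cites a hypothetical source; in practice it should name the real library, version, and license you copied from.

```typescript
/**
 * Throttle: invoke fn at most once per `wait` ms (leading edge only).
 * Adapted from a hypothetical MIT-licensed utility library.
 * License: MIT — keep the full license text alongside copied code.
 */
function throttle<T extends (...args: any[]) => void>(
  fn: T,
  wait: number
): (...args: Parameters<T>) => void {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn(...args);
    }
  };
}
```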
The cases where this does not work: the utility is large, evolves frequently (a validation library that tracks TypeScript version changes), or has significant edge case complexity (timezone handling, cryptography, internationalization). For these, copying creates more maintenance burden than a dependency would. The signal that copy-paste is appropriate is that you can read the source code, understand it completely, and be confident you won't need to update it for anything other than a bug fix.
This is a tool that experienced engineers use regularly and that senior developers sometimes forget to mention as an option to more junior colleagues, because it lives in an uncomfortable space between "write your own" and "install a library." Making it an explicit option in code review and architecture discussions normalizes it and gives teams a practical middle path for the category of problems where it fits.
How to Run a Dependency Audit That Actually Reduces Bloat
Most projects accumulate dependency debt gradually and invisibly. The audit that surfaces this debt is a structured process, not a one-time grep. Running it annually — or whenever a project has a dedicated performance sprint — typically reveals five to fifteen packages that can be removed or replaced with no behavior change.
The first pass is automated: run npx depcheck to identify packages that are listed in package.json but have no import anywhere in the codebase. These are unambiguously removable — no evaluation needed, no behavior change risk. Depcheck has a false positive rate (it can miss dynamic imports and config-file usage and wrongly flag a package as unused), so cross-reference flagged packages against your actual usage before removing, but in practice most of what it finds is genuinely unused. devDependencies are especially likely to have orphaned packages from tools that were evaluated but not adopted, or build scripts that were retired.
The second pass checks utilization: for each large direct production dependency (anything over 10KB gzipped), determine how many of its exports your codebase actually uses. For a utility library, this means running a grep across the import sites and listing the imported names. If you're importing two functions from a library that exports two hundred, the question is whether those two functions justify the maintenance overhead. If they can be inlined in under thirty lines, remove the library. If they're complex enough to justify the library's presence, keep it — but now you know the utilization ratio.
The third pass looks for redundancy: are there two libraries in your codebase that solve overlapping problems? Two date libraries (moment imported by an old component, dayjs used everywhere else), two HTTP clients (axios and fetch wrappers), two form validation approaches. Redundant libraries are common in projects that have been developed by multiple teams or that underwent partial migrations. The fix is to standardize on one and migrate the remaining call sites — usually a one-sprint project.
The fourth pass checks for native replacements: enumerate your utility packages and cross-reference them against what Node.js 22 and modern browsers now provide natively. The list has grown substantially since 2020. cross-fetch, node-fetch, uuid (v4), mkdirp, rimraf, object-assign, array-from, isomorphic-fetch — all of these have native equivalents in 2026. Removing them reduces both bundle size and supply chain exposure simultaneously.
The Long-Term Compounding Cost That Kills Developer Velocity
Dependency bloat doesn't hurt immediately — it hurts three years later when onboarding a new developer takes two hours instead of forty minutes because the codebase has eleven ways to format a date, four HTTP client wrappers, and six validation approaches accumulated across different eras of development. The compounding effect of undisciplined dependency addition is a codebase where no one person understands why each dependency exists, and removing any one of them feels risky enough that no one tries.
The maintenance overhead of a large dependency graph is real and measurable. Every dependency must be monitored for security advisories, evaluated when major versions ship breaking changes, tested when peer dependency requirements conflict, and understood by every developer who encounters it at an import site. A senior developer on a team of five who has to review npm audit output, evaluate Renovate PRs, and investigate transitive dependency conflicts spends meaningfully more time on this when the project has 120 direct dependencies than when it has 40. That time doesn't show up in any sprint board, but it accumulates into days per month across the team.
The counterintuitive finding from teams that have done aggressive dependency reduction is that removing packages rarely breaks things as badly as feared. Most packages that have been in a codebase for two or more years are either genuinely necessary — in which case the removal reveals clearly what breaks and why — or partially redundant with something else already in the codebase. The audit process that surfaces these redundancies is not about minimalism for its own sake. It is about keeping the codebase in a state where the team can reason about it, change it confidently, and onboard new members efficiently. That state has a direct impact on shipping speed, and the correlation between leaner dependency trees and higher development velocity is consistent enough to treat as a working hypothesis worth testing on your own project.
See also: AVA vs Jest and Why Every Project Should Start with Fewer Dependencies, npm Dependency Trees: Most Nested Packages 2026.