Package Maintenance Scores: Who's Keeping Up? 2026
TL;DR
Maintenance quality is the single best predictor of a package's long-term reliability — more than stars, downloads, or age. A package with 100K downloads and weekly releases beats one with 5M downloads and no commits in 18 months. Four signals matter: release recency, issue response time, dependency freshness, and contributor activity. Packages that score high on all four are safe to build on; packages that fail more than two of them warrant a migration plan.
Key Takeaways
- Release recency — last release date signals maintainer activity better than any other metric
- Issue response time — responsive maintainers fix bugs fast; silent ones don't
- Dependency freshness — outdated transitive deps accumulate security debt silently
- Contributor count — single-maintainer packages have high bus factor risk
- Packages in "maintenance mode" are intentionally stable, not abandoned — context matters
The Four Maintenance Signals
Signal 1: Release Recency
# Check last release date:
npm view package-name time --json | tail -5
# Or: npmjs.com/package/package-name → shows "last published" prominently
# What the date tells you:
# < 30 days: active development
# 1-3 months: healthy cadence for stable libraries
# 3-6 months: watch closely — is this intentional stability or drift?
# 6-12 months: investigate — check GitHub for activity
# > 12 months: likely stagnant or intentionally stable
# The "intentionally stable" exception:
# lodash: rare updates because it's feature-complete, not abandoned
# uuid: stable utilities don't need frequent releases
# semver: specification-driven, changes slowly by design
# These are fine. The red flag is: active use library + no releases
# Check release history (not just latest):
npm view package-name time --json | jq 'to_entries | .[-5:][] | "\(.key): \(.value)"'
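The recency buckets above can be sketched as a small Node helper. This is illustrative only; `classifyReleaseRecency` and `daysSince` are hypothetical names, not a PkgPulse API.

```javascript
// Sketch: bucket a last-release age (in days) into the tiers above.
function classifyReleaseRecency(daysSinceLastRelease) {
  if (daysSinceLastRelease < 30) return "active development";
  if (daysSinceLastRelease < 90) return "healthy cadence";
  if (daysSinceLastRelease < 180) return "watch closely";
  if (daysSinceLastRelease < 365) return "investigate";
  return "stagnant or intentionally stable";
}

// Turn an ISO date (e.g. from `npm view <pkg> time --json`) into days ago
function daysSince(isoDate) {
  return (Date.now() - new Date(isoDate).getTime()) / 86_400_000;
}

console.log(classifyReleaseRecency(14)); // active development
```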
Signal 2: Issue Response Time
# How to check on GitHub:
# github.com/org/repo/issues?q=is:open
# Look for:
# ✅ Issues with maintainer responses within 1-2 weeks
# ✅ Bug reports with "confirmed" or "investigating" labels
# ✅ Recent closed issues (within last 3 months)
# Red flags:
# ❌ 100+ open issues, newest response 6+ months ago
# ❌ Security issues labeled but no response
# ❌ PR with "LGTM" from contributors but no maintainer review for months
# ❌ Repo says "we're looking for maintainers"
# Automation tools that measure this:
# https://isitmaintained.com/ — shows: % open issues, median resolution time
# github.com/org/repo/pulse — activity summary last 30 days
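As a sketch, the response-time check can be computed from issue timestamps, e.g. fetched from the GitHub REST API's `/repos/{owner}/{repo}/issues` endpoint. The `firstMaintainerResponseAt` field below is illustrative; deriving it in practice requires joining the issue comments endpoint and filtering by maintainer login.

```javascript
// Sketch: median hours until the first maintainer response.
// Issues with no response are excluded; none at all is a red flag.
function medianResponseHours(issues) {
  const hours = issues
    .filter((i) => i.firstMaintainerResponseAt)
    .map(
      (i) =>
        (new Date(i.firstMaintainerResponseAt) - new Date(i.createdAt)) /
        3_600_000
    )
    .sort((a, b) => a - b);
  if (hours.length === 0) return null; // no maintainer responses at all
  const mid = Math.floor(hours.length / 2);
  return hours.length % 2 ? hours[mid] : (hours[mid - 1] + hours[mid]) / 2;
}

console.log(medianResponseHours([
  { createdAt: "2026-01-01T00:00:00Z", firstMaintainerResponseAt: "2026-01-01T12:00:00Z" },
  { createdAt: "2026-01-01T00:00:00Z", firstMaintainerResponseAt: "2026-01-02T00:00:00Z" },
])); // 18 (hours)
```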
Signal 3: Dependency Freshness
# A package's own dependencies can be outdated
# Check with npm audit in the package's repo:
git clone https://github.com/org/package
cd package && npm install --package-lock-only --ignore-scripts && npm audit
# Or check programmatically:
npm view package-name dependencies
# Then check if each listed dependency is on a current version
# Common pattern: package uses outdated deps with known vulnerabilities
# The package itself has no vulnerabilities but SHIPS vulnerable deps
# npm audit will catch this: "HIGH severity in package > dep > sub-dep"
# npm overrides: patch it yourself without waiting
{
  "overrides": {
    "semver": ">=7.5.2"
  }
}
# Forces the transitive semver dep onto a patched version. Note that
# package.json is strict JSON, so it cannot contain comments.
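A rough freshness check can also be done in code. This sketch compares the major version a package declares for each dependency against the latest published major; the inputs would come from the `npm view` commands above, and the version parse is deliberately crude (a real tool would use the semver library).

```javascript
// Sketch: list dependencies pinned a whole major behind the latest release.
// `declared` maps dep name -> declared range; `latest` maps name -> version.
function staleMajors(declared, latest) {
  const major = (v) => parseInt(v.replace(/^[^\d]*/, ""), 10);
  return Object.entries(declared)
    .filter(([name, range]) => major(range) < major(latest[name] ?? range))
    .map(([name]) => name);
}

console.log(staleMajors(
  { semver: "^6.0.0", debug: "^4.3.0" },
  { semver: "7.5.4", debug: "4.3.4" }
)); // [ 'semver' ]
```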
Signal 4: Contributor Activity
# Bus factor: how many maintainers could the project lose before it stalls?
# Check on GitHub:
# github.com/org/repo/graphs/contributors
# Healthy patterns:
# ✅ 5+ active contributors in last 6 months
# ✅ Mix of maintainers + external contributors
# ✅ Code review happening on PRs (not just owner merging)
# Risky patterns:
# ⚠️ Single maintainer, high-use package
# ⚠️ Corporate-sponsored project that went quiet (company interest changed)
# ⚠️ Open PRs from contributors, never merged
# High-risk examples (historically):
# - event-stream (2018): single maintainer transferred to malicious actor
# - node-ipc (2022): single maintainer added protestware deliberately
# - left-pad (2016): single maintainer unpublished, broke the internet
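One common way to quantify bus factor, sketched below: the smallest group of contributors responsible for a majority of commits, using the per-contributor counts behind the contributors graph. The majority threshold is an illustrative cutoff, not a standard.

```javascript
// Sketch: bus factor as the minimum number of contributors
// who together account for more than half of all commits.
function busFactor(commitCounts) {
  const total = commitCounts.reduce((a, b) => a + b, 0);
  const sorted = [...commitCounts].sort((a, b) => b - a);
  let covered = 0;
  let people = 0;
  for (const c of sorted) {
    covered += c;
    people += 1;
    if (covered > total / 2) break;
  }
  return people;
}

console.log(busFactor([500, 20, 10, 5])); // 1, i.e. single-maintainer risk
console.log(busFactor([100, 90, 80, 70])); // 2
```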
Maintenance Score Examples
Tier A: Excellent Maintenance
# Vite — maintenance score: 97/100
# Release cadence: weekly or biweekly
# Issue response: < 24 hours on most issues
# Contributor count: 15+ active contributors
# Dependency health: always fresh
# Corporate backing: Vercel + multiple companies employ contributors
# Fastify — maintenance score: 95/100
# OpenJS Foundation project
# LTS releases: defined support windows (like Node.js)
# Security team: formal CVE disclosure process
# Issue SLA: critical bugs fixed within 24-48h
# Enterprise support available
# Zustand — maintenance score: 95/100
# Small but dedicated: Daishi Kato + 3-4 regular contributors
# Responsive: GitHub issues typically answered within 1-3 days
# Releases: monthly, no missed months in 2 years
# Deps: zero runtime dependencies (nothing to go stale)
Tier B: Good Maintenance
# Express — maintenance score: 75/100
# "Maintenance mode" but with caveats:
# - Security patches: YES, typically within weeks
# - Feature development: NO (intentional freeze)
# - New APIs: NO
# - Node.js compatibility: maintained
# Assessment: Safe to use, will not get new features
# Webpack — maintenance score: 72/100
# - Still releasing (v5.x patches)
# - Issue response slowed vs 2020-2022
# - Core contributors reduced
# - Main dev focus shifted to Rspack (at Bytedance)
# Assessment: Fine for existing projects, evaluate alternatives for new ones
# Moment.js — maintenance score: 65/100
# Explicitly in "maintenance mode" since 2020
# - Security patches: yes
# - New features: no
# - Official recommendation: don't use for new projects
# Assessment: Your legacy app is fine; don't add new Moment usage
Tier C: Concerning Maintenance
# Create React App — maintenance score: 25/100
# - DEPRECATED (official React docs removed it)
# - Last release: 2022
# - Security vulnerabilities: unpatched
# - Maintainer activity: near zero
# Assessment: Do not use. Migrate to Vite.
# Bower — maintenance score: 5/100
# - Dead since 2018
# - No releases, no maintenance
# - Only downloaded as transitive dep of very old tooling
# Assessment: Remove all bower.json usage immediately
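For illustration, here is one way to fold the four signals into a single 0-100 score. The weights are invented for this sketch and are not PkgPulse's actual formula.

```javascript
// Sketch: weighted blend of the four signals, each given as a 0-1 subscore.
// Weights (0.35/0.25/0.2/0.2) are illustrative assumptions only.
function maintenanceScore({ releaseRecency, issueResponse, depFreshness, contributors }) {
  const weighted =
    0.35 * releaseRecency +
    0.25 * issueResponse +
    0.2 * depFreshness +
    0.2 * contributors;
  return Math.round(weighted * 100);
}

console.log(maintenanceScore({
  releaseRecency: 1, issueResponse: 1, depFreshness: 0.8, contributors: 0.8
})); // 92
```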
Maintenance Quality by Package Category
Package categories ranked by average maintenance quality (2026):
1. Build tools: 88/100 avg
→ High activity: Vite, Rollup, esbuild, Rspack all well-maintained
→ Tooling companies (Vercel, ByteDance) investing heavily
2. Testing frameworks: 86/100 avg
→ Vitest, Playwright, Testing Library all excellent
→ Jest: lower score (slower pace since Facebook reduced investment)
3. State management: 84/100 avg
→ TanStack, Pmndrs (Zustand/Jotai), Valtio all high-quality
→ Redux Toolkit still high quality
4. HTTP clients: 82/100 avg
→ Most are mature and well-maintained
→ Some older ones declining (request, got v11 chaos)
5. Date libraries: 78/100 avg
→ Day.js, date-fns excellent
→ Moment.js in maintenance mode, bringing down average
6. CSS-in-JS: 72/100 avg
→ Panda CSS, Stitches new and high quality
→ Emotion, styled-components declining with RSC adoption
7. Older Express middleware: 55/100 avg
→ Many middleware packages haven't been updated in years
→ body-parser, compression, helmet: varying maintenance
Automated Maintenance Monitoring
// Stay informed about maintenance changes:
// 1. Dependabot (GitHub)
// .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: npm
    directory: "/"
    schedule:
      interval: weekly
# Automatically opens PRs when deps have updates
# Security updates: immediate
// 2. Renovate (more powerful)
// renovate.json
{
  "extends": ["config:recommended"],
  "schedule": ["before 9am on Monday"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch", "minor"],
      "automerge": true
    },
    {
      "matchDepTypes": ["devDependencies"],
      "automerge": true
    }
  ]
}
// 3. Socket.dev monitoring
// npm install -g @socketsecurity/cli
// socket scan create ← monitors for changes in your deps
// Alerts when: new maintainer, suspicious code added, CVE discovered
The Maintenance vs Feature Trade-Off
Developers often conflate:
"No new features" with "abandoned"
They're different:
Intentionally stable (safe):
- lodash: feature-complete since ~2019
- semver: follows a specification
- uuid: low-level, doesn't need changes
- Express: in maintenance mode, security still patched
Abandoned (unsafe):
- Create React App: deprecated, vulnerabilities unfixed
- Bower: dead, no activity
- node-fetch v2: CJS-only in ESM world, maintainer moved to v3
- request: explicitly deprecated by maintainer
How to tell the difference:
→ Read the README — does it say "maintenance mode" explicitly?
→ Check: are SECURITY issues being patched?
→ Is there a recommended migration path?
→ Are issues being TRIAGED (even if not resolved)?
"Maintenance mode" with security patches = acceptable
"Abandoned" with open CVEs = must migrate
Maintenance Trends by Ecosystem Segment (2026)
Not all corners of the npm ecosystem age equally. Looking across categories, the gap between the best-maintained and worst-maintained segments has widened in 2025–2026 as developer attention concentrates around TypeScript-first tooling and disperses from legacy utility packages.
Best-maintained categories:
Build tooling sits at the top. Vite, esbuild, Rollup, and Rspack are all under active development, backed by companies (Vercel, Bytedance, and the Vite community) with strong financial incentives to keep their tooling current. These packages see releases at a cadence measured in weeks, not months, and critical bugs get patched faster than almost any other category.
TypeScript-first libraries are the second strongest segment. Zod, tRPC, and Drizzle have communities that contribute back heavily — in part because TypeScript users tend to be precision-oriented developers who file good bug reports and submit well-formed PRs. The cultural emphasis on type accuracy creates a continuous maintenance incentive: any type regression is immediately visible to users and generates issues quickly.
Testing tools had a strong 2025–2026. Vitest's rapid ascent pulled maintainer energy and community investment into the testing category. Playwright and Testing Library both saw active development cycles. The main exception is Jest, which has slowed since Facebook reduced its direct investment — though it remains stable for existing users.
Worst-maintained categories:
The long tail of utility libraries from 2012–2018 is in visible decline. These packages solved problems that are now handled by JavaScript's standard library (array methods, string manipulation, simple date formatting) or by more modern alternatives. Their download counts remain high due to transitive dependencies, but maintainer activity is minimal.
jQuery plugins represent a large class of packages that peaked around 2014 and have been in slow decline since. Many are technically functional but carry outdated peer dependency declarations and accumulate security advisories without patches.
REST API client SDKs for services that no longer prioritize JavaScript (certain legacy AWS service wrappers, outdated payment processor clients) frequently go unmaintained as vendors shift focus to their first-party SDKs. If you're relying on a third-party SDK wrapper for an external service, checking its maintenance status is especially important — the official SDK almost always becomes the better choice.
Automated Maintenance Monitoring with Dependabot and Renovate
The practical answer to keeping up with package maintenance isn't manual review — it's automating the signal so you only have to act on what matters. Dependabot and Renovate are the two standard tools.
Dependabot (built into GitHub) creates pull requests for security patches and version bumps automatically. Configuration lives in .github/dependabot.yml. A minimal npm setup:
version: 2
updates:
  - package-ecosystem: npm
    directory: "/"
    schedule:
      interval: weekly
    open-pull-requests-limit: 10
Security updates bypass the schedule and open PRs immediately. Patch updates that pass CI can be configured to auto-merge, reducing the manual overhead to only major version bumps and breaking changes.
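One common way to wire up the patch auto-merge described above is a small GitHub Actions workflow using the dependabot/fetch-metadata action, a pattern documented by GitHub. This is a sketch; adapt the merge strategy and make sure branch protection requires your CI checks before enabling it.

```yaml
# .github/workflows/dependabot-automerge.yml (sketch)
name: Dependabot auto-merge
on: pull_request
permissions:
  contents: write
  pull-requests: write
jobs:
  automerge:
    if: github.actor == 'dependabot[bot]'
    runs-on: ubuntu-latest
    steps:
      - id: metadata
        uses: dependabot/fetch-metadata@v2
      # Only auto-merge semver-patch bumps; majors still need human review
      - if: steps.metadata.outputs.update-type == 'version-update:semver-patch'
        run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```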
Renovate offers more configuration at the cost of complexity: grouping related updates into a single PR (all ESLint plugins in one PR), flexible scheduling, and Merge Confidence ratings based on ecosystem-wide adoption data. For teams with many dependencies, Renovate's grouping reduces PR noise significantly.
The maintenance signal that matters most for monitoring: packages with no new releases in 18+ months combined with unpatched security vulnerabilities. That combination is the high-priority flag. A package with no releases in two years but zero CVEs is likely intentionally stable — acceptable. A package with no releases and open security advisories needs replacement.
GitHub's Dependency Graph (in repository Insights → Dependency graph → Dependabot) shows which of your dependencies have known vulnerabilities and whether patches are available, without requiring any CLI setup. It's the fastest way to triage a codebase you've inherited.
Recommended workflow: weekly automated PRs for patches (Dependabot handles this automatically), monthly manual review of major version bumps flagged by Renovate, and a quarterly review of packages with low download velocity or no recent GitHub activity.
Download Velocity as a Misleading Maintenance Signal
Download counts are the most visible metric on npm and the one developers cite most often when evaluating whether a package is safe to adopt. The reasoning is intuitive: if a package gets 5 million downloads per week, the thinking goes, any serious problem would be caught and fixed quickly by the community. This logic has a structural flaw that makes it unreliable as a maintenance signal, and understanding why requires knowing how npm download counts actually accumulate.
A significant fraction of npm downloads are generated not by developers installing packages, but by automated systems. CI pipelines that run on every commit install the same packages repeatedly. Docker builds that don't cache node_modules install everything from scratch. CDN and mirror infrastructure that pre-fetches packages to reduce latency. Some estimates put automated downloads at 40–60% of total npm traffic. This means that a package with 5 million weekly downloads might have substantially fewer actual users than that number implies — and the ratio of automated to human downloads is higher for packages that are deeply embedded in popular toolchains.
Packages that are transitive dependencies of high-download tools accumulate download counts that reflect the tool's popularity, not the package's independent adoption. A package that Express 4.x depends on, that Webpack 4 depends on, and that Create React App (deprecated) depends on may still show millions of weekly downloads in 2026 — driven entirely by projects using those tools — while being entirely unmaintained and potentially containing unpatched vulnerabilities. The download count is an artifact of its embedding in popular legacy stacks, not evidence that anyone is actively evaluating or maintaining it.
The more reliable signal is download trend rather than absolute count: is the weekly download figure growing, stable, or declining? A declining trend for a package that isn't being deliberately replaced by something better in its ecosystem is a warning sign. In contrast, a package with 50,000 weekly downloads but a consistent 20% year-over-year growth trend is gaining mindshare in its category, which typically correlates with active maintainer investment. PkgPulse's trend visualization surfaces this distinction — the slope of adoption tells a different story than the raw count.
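The trend-over-count idea can be sketched with a trivial classifier over weekly download samples (available, for example, from npm's public downloads API). The ±10% thresholds are illustrative assumptions.

```javascript
// Sketch: classify a download trend from ordered weekly samples
// by comparing the last sample against the first.
function downloadTrend(weeklySamples) {
  const first = weeklySamples[0];
  const last = weeklySamples[weeklySamples.length - 1];
  const change = (last - first) / first;
  if (change > 0.1) return "growing";
  if (change < -0.1) return "declining";
  return "stable";
}

console.log(downloadTrend([50_000, 55_000, 62_000])); // growing
```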
Commit Frequency vs Release Frequency: The Signal That Matters
Two packages can have identical release frequencies but radically different maintenance health depending on what's happening between releases. A package that releases once per month because its maintainers are actively reviewing PRs, fixing bugs, updating documentation, and keeping dependencies current is healthy. A package that releases once per month because one person is merging their own changes with no external contributors, no issue triage, and no dependency updates is fragile — it has a bus factor of one and no community to catch problems.
Commit frequency in the source repository is a more granular signal than release cadence. GitHub's contribution graph shows the rhythm of activity: are commits clustered around release dates (suggesting batched work), or distributed consistently throughout the month (suggesting active ongoing maintenance)? The former is fine for stable libraries where maintenance is genuinely light; it becomes concerning when there are open issues and PRs that have been sitting without response for weeks. The latter pattern — distributed, frequent commits — indicates a project where maintenance is happening continuously and issues are addressed as they arise.
The issue resolution rate is the most operationally meaningful of the four signals. A maintainer who is active on GitHub but never closes issues, or who closes issues with "won't fix" without explanation, creates a different experience for dependent projects than one who responds within a week, confirms bugs, and ships fixes promptly. GitHub's Pulse view (at github.com/org/repo/pulse/monthly) shows: how many issues were opened versus closed in the last 30 days, how many PRs were opened versus merged, and the number of unique contributors. A healthy project closes more issues than it opens over time; a declining project accumulates open issues faster than they're resolved.
The response time to security disclosures is a specialized version of this signal that deserves separate consideration. Check whether the repository has a SECURITY.md file documenting a disclosure process. Check the project's security advisories tab for past CVEs and whether patches were available within days or weeks of disclosure. A maintainer who responds to CVEs in 48 hours and ships fixes in a week is a fundamentally different operational partner than one who leaves security advisories open for months. For packages handling authentication, cryptography, HTTP parsing, or file system operations, this distinction is particularly consequential.
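The opened-versus-closed check from the Pulse view reduces to a one-line comparison. This sketch takes the 30-day counts Pulse displays as plain inputs.

```javascript
// Sketch: is the project closing issues at least as fast as they arrive?
function issueFlow({ opened, closed }) {
  if (opened === 0) return "quiet";
  return closed >= opened ? "keeping up" : "falling behind";
}

console.log(issueFlow({ opened: 40, closed: 55 })); // keeping up
```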
Institutional vs Individual Maintainers and Long-Term Stability
The source of funding and organizational backing behind a package is a structural factor in its long-term maintenance health that is often invisible in npm metadata but highly predictive. A package maintained by an individual developer as a side project has a different risk profile than one maintained by a company whose core business depends on it, even if the two packages have identical download counts and release frequencies today.
Individual-maintained packages — even excellent ones — have inherent bus factor risk. When the sole maintainer's job changes, their interests shift, they burn out, or they have a major life event, maintenance can drop precipitously and with little warning. The left-pad incident was the most dramatic version of this, but quieter versions happen constantly: maintainers gradually reduce their response frequency, stop merging PRs, and eventually stop monitoring the project at all, sometimes without any explicit announcement. The package continues to accumulate downloads for years through its embedding in dependency trees, while its security and compatibility health degrades.
Institutionally-backed packages have their own risk profile — one that's different but not uniformly better. A package whose development is funded by a company is subject to that company's strategic priorities. If the company pivots, is acquired, or loses interest in the open-source space, maintenance can end abruptly. Facebook's reduction of direct investment in Jest is a recent example: Jest remains maintained, but at a slower pace than during its peak investment years, and Vitest has absorbed much of the ecosystem energy as a result. The package didn't become unmaintained; it became less of a company priority, which changed its maintenance velocity.
The most durable maintenance situations are packages backed by foundations (OpenJS Foundation projects like Node.js, Fastify, and webpack), packages maintained by multiple companies with shared financial interests in keeping them healthy (Vite, where Vercel, Stackblitz, and multiple other organizations fund contributors), and packages with large enough contributor communities that they've achieved escape velocity from individual or single-company dependency. For mission-critical dependencies, evaluating this structural backing is as important as evaluating the current maintenance metrics — because what matters isn't just where the package is today, but whether there are structural incentives that will keep it healthy next year.
How to Read GitHub Contributor Graphs for Bus Factor Risk
The contributor insights page at github.com/org/repo/graphs/contributors visualizes each contributor's commit history over time as a series of bar graphs, one per contributor. Reading these graphs for bus factor risk is a skill that becomes fast with practice. The patterns to look for are specific and interpretable.
A healthy contributor graph for an actively maintained package shows multiple contributors with overlapping activity — several people committing across the same time periods, with no single contributor responsible for the majority of commits. When one contributor's bar graph dwarfs all others by a factor of 10 or more, that is a bus factor warning. Even if that contributor is currently active, any change in their involvement produces an immediate and substantial drop in maintenance capacity.
The temporal pattern matters as much as the current distribution. A contributor graph that shows five active contributors through 2023, then a gradual reduction to one through 2024, then low activity from that one through 2025 is a warning sign even if the project is still releasing. The project's community has contracted, and contraction tends to continue rather than reverse without an active effort to recruit and onboard new contributors. Conversely, a graph showing increasing contributor count over time — new bars appearing each year, existing contributors remaining active — indicates growing community investment, which is a strong long-term health signal.
GitHub's CODEOWNERS file and the list of people with merge access (visible to repository admins, but inferable from who actually merges PRs) shows the operational bus factor: not just who commits, but who can review and merge. A repository where only one person has merge access, regardless of how many external contributors open PRs, has a single-person bottleneck for all changes. PRs from the community pile up unmerged, contributors become discouraged, and the effective maintenance capacity is whatever that one person has time for. This pattern is visible in the PR age distribution: if most open PRs are more than 30 days old, it suggests a merge bottleneck.
For packages you're evaluating as long-term dependencies — ORMs, auth libraries, UI component libraries — spending five minutes on the contributor graph before adoption can surface risk that no amount of documentation reading or download count analysis would reveal. A package with great docs, 2 million weekly downloads, and a contributor graph showing one person responsible for 95% of commits in the last year is a dependency risk that warrants either a migration plan or a deliberate decision to accept the risk.
When to Act on a Poor Maintenance Score
Identifying that a package has declining maintenance is the easy part. The harder question is what to do about it, and the answer depends on context that a health score alone cannot capture. A poorly-maintained package that sits in your dependency tree as a transitive dependency — pulled in by another tool, never directly imported in your application code — is a different risk profile than a poorly-maintained package you import explicitly in dozens of files across your codebase.
For transitive dependencies with poor maintenance, the first step is checking whether a direct dependency update would pull in a newer version or replace the package entirely. Updating your direct dependency is almost always the right move when a security issue is present in a transitive package, and npm overrides (in npm 8.3+) lets you force a specific version of a transitive dependency without waiting for your direct dependency to update. This is a stopgap, not a solution, but it buys time to evaluate a proper migration.
For direct dependencies with declining health, the evaluation question is: does a better-maintained alternative exist, and what is the migration cost? If the alternative is a drop-in replacement with a compatible API — switching from Moment.js to Day.js, or from request to native fetch — the migration cost is measured in hours, not days, and is worth paying proactively before a security incident forces it under time pressure. If the migration requires a significant interface change — moving from one ORM to another, or replacing a UI component library — the cost is higher and should be sequenced into a dedicated migration project with proper planning.
The worst outcome is discovering a critical CVE in a poorly-maintained package during a production incident. Maintenance score monitoring exists specifically to create lead time: surfacing a package's declining health months or years before it becomes a security emergency, when the team can address it as planned work rather than crisis response. A quarterly review of your direct dependencies' maintenance signals, cross-referenced against what migration paths exist, is enough to keep this risk manageable.
Compare maintenance scores and health data for any npm package at PkgPulse.
See also: The Average Lifespan of an npm Package, npm Packages with the Best Health Scores (And Why), and How GitHub Stars Mislead Package Selection.
See the live comparison
View fastify vs. express on PkgPulse →