npm Packages with the Best Health Scores (And Why) 2026
TL;DR
The best-maintained npm packages share four properties: active releases, responsive maintainers, zero long-term security vulnerabilities, and growing (not just large) download counts. PkgPulse health scores weight these factors across maintenance, community, popularity, and security dimensions. The packages that consistently score 90+ aren't necessarily the most popular — they're the ones where maintainers are clearly invested and the community is engaged.
Key Takeaways
- Health score ≠ download count — many high-download packages score poorly (CRA: 102K stars, low health)
- Four dimensions: maintenance (40%), community (25%), popularity (20%), security (15%)
- Maintenance matters most — release cadence, issue response time, active contributors
- Growing velocity beats raw downloads — +15% MoM shows real adoption momentum
- The best packages often aren't the most famous — niche tools maintained by dedicated teams
What Makes a High Health Score
PkgPulse Health Score Components:
Maintenance (40%):
├── Release recency (last release date)
├── Release frequency (commits/releases per quarter)
├── Issue response time (average time to first response)
├── PR review rate (% of PRs reviewed within 30 days)
└── Contributor count (bus factor — number of active contributors)
Community (25%):
├── Stars growth rate (not absolute count)
├── Documentation quality (README score, dedicated docs site)
├── Ecosystem integrations (plugins, adapters, compatible tools)
└── Discussion activity (GitHub Discussions, Discord)
Popularity (20%):
├── Weekly downloads (absolute)
├── Download velocity (week-over-week, month-over-month trend)
├── Usage in popular projects (detected via GitHub dependency graph)
└── npm dependent packages count
Security (15%):
├── Open vulnerability count (CVEs)
├── Time to patch CVEs (history)
├── Dependency vulnerability exposure
└── Provenance attestation (signed releases)
Score: 0-100. 90+ = excellent. 75+ = good. Below 60 = investigate before using.
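The weighting above can be sketched as a simple weighted sum. This is an illustrative reconstruction, not PkgPulse's actual implementation; the function names and the "middling" label for the band the article leaves unlabeled (60-74) are assumptions.

```typescript
// Each dimension is itself a 0-100 sub-score; the composite applies
// the article's weights: maintenance 40%, community 25%,
// popularity 20%, security 15%.
interface DimensionScores {
  maintenance: number; // 0-100
  community: number;   // 0-100
  popularity: number;  // 0-100
  security: number;    // 0-100
}

const WEIGHTS = {
  maintenance: 0.4,
  community: 0.25,
  popularity: 0.2,
  security: 0.15,
};

function healthScore(d: DimensionScores): number {
  return (
    d.maintenance * WEIGHTS.maintenance +
    d.community * WEIGHTS.community +
    d.popularity * WEIGHTS.popularity +
    d.security * WEIGHTS.security
  );
}

function rating(score: number): string {
  if (score >= 90) return "excellent";
  if (score >= 75) return "good";
  if (score < 60) return "investigate before using";
  return "middling"; // the article leaves the 60-74 band unlabeled
}
```

A package with strong maintenance (95) but only middling popularity (85) still lands in the excellent band, which is the point of the weighting: maintenance dominates.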
Category: Build Tools (Highest Scores)
Vite — 97/100
# Why near-perfect:
# - Weekly releases or near-weekly
# - Active core team (Evan You + dedicated contributors)
# - Response time: issues triaged within 48h
# - Security: zero long-standing CVEs
# - Growth: +32% YoY, consistently upward curve
# - Ecosystem: 1000+ plugins; the default in SvelteKit, Astro, and Nuxt
npm install -D vite
# 15M weekly downloads — every metric heading in the right direction
Vitest — 96/100
# Why excellent:
# - Part of Vite team — same release cadence
# - Fastest-growing test runner: +175% YoY
# - Issues closed same day (small focused team, high velocity)
# - Zero dependency vulnerabilities in core
# - TypeScript-first: no separate @types/ needed
npm install -D vitest
esbuild — 94/100
# Why excellent (despite rare releases):
# - "Intentionally stable" — feature-complete, not stagnant
# - Security: zero CVEs (written in Go, avoiding Node.js dependency-chain risk)
# - Used internally by most bundlers: Vite, tsup, many more
# - Bug fixes shipped promptly
# - Maintained by one dedicated author (Evan Wallace) with clear roadmap
npm install -D esbuild
Category: Testing (Highest Scores)
Playwright — 95/100
# Why excellent:
# - Microsoft-backed: full-time team
# - Monthly releases, detailed changelogs
# - 25K+ GitHub issues closed, most within weeks
# - Documentation site updated with every release
# - Growing: +85% YoY as E2E testing becomes standard
# - Security: regularly audited by Microsoft security team
npm install -D @playwright/test
Testing Library — 93/100
# @testing-library/react — consistently excellent
# - Active core team, consistent releases
# - Philosophy-driven: tests that mirror user behavior
# - Zero major security issues in history
# - 5M+ weekly downloads and growing
# - The standard in React testing: shadcn/ui and create-t3-app both use it
npm install -D @testing-library/react @testing-library/user-event
Category: State Management (Highest Scores)
Zustand — 95/100
# Why excellent:
# - Tiny team with extremely high responsiveness
# - Releases monthly, never misses critical bugs
# - Zero runtime dependencies (not even React peer dep issues)
# - Bundle: 2KB gzipped
# - 8M weekly downloads, +25% YoY growth
# - Community: Pmndrs team transparent about roadmap
npm install zustand
Jotai — 93/100
# Same team as Zustand (Pmndrs / Daishi Kato)
# Same release discipline: monthly, responsive
# TypeScript-first design
# 3.5M weekly downloads, growing
npm install jotai
TanStack Query — 95/100
# TanStack: extremely high-quality maintenance culture
# - Tanner Linsley + full team, full-time open source
# - React Query v5 shipped with breaking changes but perfect migration guide
# - Issues: most critical ones addressed within 24-48h
# - 10M+ weekly downloads, industry standard for server state
# - Every major framework has an adapter
npm install @tanstack/react-query
Category: Validation (Highest Scores)
Zod — 94/100
# Why excellent:
# - Colin McDonnell maintaining consistently
# - v3 was stable for 2 years with steady improvements
# - 14M+ weekly downloads
# - Ecosystem: first-class support in tRPC, Conform, Drizzle, React Hook Form
# - Security: pure validation library, no network/IO risk
# - TypeScript inference is best-in-class
npm install zod
Valibot — 91/100
# New but impressive health from day 1:
# - Active development: weekly releases
# - Fabian Hiller (creator) very responsive to issues
# - Growing rapidly: +480% YoY
# - Tree-shakeable design = no dead code
# - API compatibility with Zod attracting migrations
npm install valibot
Category: Styling (Highest Scores)
Tailwind CSS — 96/100
# Why excellent:
# - Full-time team at Tailwind Labs
# - Tailwind v4 shipped with zero-config, Vite plugin
# - Issue response: within hours for bugs
# - 45M+ weekly downloads, dominant in its category
# - Actively supporting RSC, Astro, SvelteKit, all major frameworks
npm install -D tailwindcss
CSS Modules (built-in, no npm) — N/A
/* Built into Vite, Next.js, SvelteKit — no health score needed */
/* Zero external dependency = infinite health */
Category: Frameworks (Highest Scores)
Next.js — 95/100
# Vercel-backed: full-time team of 50+ engineers
# - Releases every 2-4 weeks
# - Issues triaged same day (high volume, but dedicated team)
# - Security: CVEs patched within 24-48h
# - 8M+ weekly downloads, growing
# - RSC implementation actively iterated
npm create next-app@latest
Hono — 95/100
# Small, focused, high-velocity:
# - Yusuke Wada + growing contributor base
# - Releases weekly
# - Issues: 24-48h response typical
# - Zero compromise on bundle size
# - Growing 195% YoY with clear roadmap
npm install hono
Fastify — 93/100
# Enterprise-grade maintenance:
# - OpenJS Foundation project (institutional backing)
# - LTS releases with defined support windows
# - Security team with formal disclosure process
# - Used in production by: nearForm, Tier, and dozens of enterprises
# - 4M+ weekly downloads, stable growth
npm install fastify
Category: ORMs (Highest Scores)
Drizzle ORM — 94/100
# Small team, exceptional responsiveness:
# - Andrew Sherman + team actively pushing weekly releases
# - Community: largest Discord of any new ORM (20K+ members)
# - Issues addressed quickly: avg response < 2 days
# - No legacy debt: built for TypeScript from day 1
# - Growing: +220% YoY
npm install drizzle-orm drizzle-kit
Prisma — 91/100
# Corporate-backed ORM:
# - Prisma team of 50+ engineers
# - Monthly major releases, weekly patches
# - Prisma 6: performance improvements addressing earlier criticisms
# - 3M+ weekly downloads
# - Docs are best-in-class in any ORM
npm install prisma @prisma/client
The Common Thread
What the highest-scoring packages share:
1. Dedicated maintainers with clear ownership
→ Not design-by-committee; one or a few people who care deeply
2. Release discipline
→ Regular releases on a predictable schedule
→ Not "when it's done" (leads to long gaps)
3. Issue triage culture
→ First response within 48-72 hours, even if it's "we'll look at this"
→ Bugs triaged by severity, P0 patched in days not months
4. Zero-security-debt philosophy
→ CVEs addressed immediately, not deferred
→ Proactive dependency updates
5. Growing community, not just large community
→ Discord/GitHub Discussions active
→ External contributors welcomed with good PR reviews
6. TypeScript-first or excellent TypeScript support
→ Types ship in the package, not in @types/
→ Types are accurate and well-tested
The packages that score below 70:
→ Single maintainer who's moved on
→ Open CVEs sitting for months
→ Issues with zero response for weeks
→ Dependency on deprecated packages
→ Last release 12+ months ago, with the repo nominally in "maintenance mode"
What High Health Score Packages Have in Common
Looking across the packages that consistently score 90+ on PkgPulse health metrics, several patterns repeat regardless of category.
Regular release cadence is the most reliable differentiator. The top-scoring packages ship minor and patch releases every two to four weeks — not major releases once a year with silence in between. Vite, Vitest, Zustand, Hono, and Drizzle all follow this pattern. Frequent releases signal that the maintainers are actively using and improving the package, not just responding to issues when they accumulate. It also means bugs get fixed faster and the release process is practiced enough that shipping a patch is low-overhead for the team.
TypeScript-native types are universal among high-scoring packages in 2026. Every package on the list ships its own .d.ts files — none rely on the DefinitelyTyped community types in @types/. This matters for type accuracy (the package author understands the types better than a third party), for release alignment (types update with the package, not on a separate schedule), and as a signal of quality: packages that care enough to own their types tend to care about the whole developer experience.
Zero or minimal runtime dependencies is the second-most consistent pattern. The highest-scoring packages have zero to two runtime dependencies. Zustand has zero. esbuild has zero (it ships pre-compiled binaries rather than pulling in a Node.js dependency chain). Valibot was designed from the start to be tree-shakeable with no runtime deps. Fewer transitive dependencies means a smaller attack surface for the security score component, but it also means fewer packages that can go unmaintained beneath yours.
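The "fewer transitive dependencies" point can be made concrete with a small graph walk. This is a toy sketch over a hypothetical dependency map (package name to its direct runtime deps), not real npm metadata; in practice you would read this from a lockfile or `npm ls` output.

```typescript
// Toy dependency map: package name -> its direct runtime dependencies.
type DepGraph = Record<string, string[]>;

// Walk the graph to count everything a single install transitively
// pulls in — the "attack surface" the security dimension cares about.
function transitiveDeps(graph: DepGraph, root: string): Set<string> {
  const seen = new Set<string>();
  const stack = [...(graph[root] ?? [])];
  while (stack.length > 0) {
    const dep = stack.pop()!;
    if (seen.has(dep)) continue;
    seen.add(dep);
    stack.push(...(graph[dep] ?? []));
  }
  return seen;
}

// Hypothetical example packages for illustration only.
const toyGraph: DepGraph = {
  "zero-dep-lib": [],
  "heavy-lib": ["util-a", "util-b"],
  "util-a": ["util-c"],
  "util-b": ["util-c"],
  "util-c": [],
};
```

Installing `zero-dep-lib` adds nothing beneath it; installing `heavy-lib` adds three packages, each of which can independently go unmaintained.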
Active GitHub engagement — issues triaged within days, PRs reviewed within a week — shows up consistently. This is measurable and is a leading indicator of whether a package will still be healthy in two years.
The correlation with download velocity is not coincidental. Packages that score highly on health metrics are almost universally gaining downloads, not holding steady or declining. Health is a leading indicator of adoption momentum.
Using Health Scores in Your Dependency Evaluation Process
Health scores are most useful as tiebreakers and as early-warning signals, not as absolute cutoffs.
The tiebreaker use case: when two packages solve the same problem comparably well, health score is the right tiebreaker. If a high-health package has a slightly less ergonomic API than a low-health competitor, the right choice depends on your time horizon. For a package you will use for three or more years — an ORM, a state management library, a validation layer — maintainability matters more than a marginal API preference today. A package that is 15% easier to use but losing active maintainers is a maintenance liability you will pay for during a future forced migration.
The automated early-warning use case: add a health check step to your CI pipeline that fetches PkgPulse health scores for your direct dependencies and fails or warns if any drop below a threshold — 70 out of 100 is a reasonable starting point. This catches gradual health decline. A package that scored 88 when you added it two years ago and now scores 52 has been slowly abandoned, and catching that early gives you time to plan a migration rather than scrambling when a breaking CVE drops and there is no patched version.
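The CI gate described above amounts to one filter over a list of scores. This sketch assumes you have already fetched scores for your direct dependencies; PkgPulse's actual API shape is not assumed here, and the package names are hypothetical.

```typescript
// One entry per direct dependency, with its current health score.
interface DepHealth {
  name: string;
  score: number; // 0-100
}

// Return every dependency below the threshold; in CI, a non-empty
// result would fail the build (or warn, depending on policy).
function belowThreshold(deps: DepHealth[], threshold = 70): DepHealth[] {
  return deps.filter((d) => d.score < threshold);
}

// Hypothetical snapshot: one healthy dependency, one in decline.
const report = belowThreshold([
  { name: "some-lib", score: 88 },
  { name: "aging-lib", score: 52 },
]);
// In a CI step you would process.exit(1) when report.length > 0.
```

The 70 cutoff matches the starting point suggested above; tune it per project, since a scripting utility and your ORM do not deserve the same bar.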
The signal that deserves the most weight: download velocity trend rather than absolute download count. A package at 2 million weekly downloads and losing 20% month-over-month is showing ecosystem abandonment before the maintainer formally deprecates it or the README gets a "looking for new maintainer" notice. The ecosystem is a leading indicator. Developers doing greenfield projects stop recommending a package before existing users migrate away, so the velocity inflects downward first. Catching that inflection point — when a package is still technically maintained and your existing code still works — is when migration is cheapest.
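Catching that velocity inflection is a simple computation once you have monthly download totals. The decline threshold and window below are illustrative assumptions, not PkgPulse's internals.

```typescript
// Month-over-month growth rates from a series of monthly download totals.
function momGrowth(monthly: number[]): number[] {
  const growth: number[] = [];
  for (let i = 1; i < monthly.length; i++) {
    growth.push((monthly[i] - monthly[i - 1]) / monthly[i - 1]);
  }
  return growth;
}

// Flag a sustained decline: every one of the last `months` growth
// rates at or below `drop` (default: -5% MoM for three months).
function sustainedDecline(monthly: number[], months = 3, drop = -0.05): boolean {
  const g = momGrowth(monthly);
  if (g.length < months) return false;
  return g.slice(-months).every((x) => x <= drop);
}
```

A package shedding 5% a month for a quarter is past the inflection point even if its absolute count still looks enormous; that is exactly the Moment.js-style divergence described later.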
Why Health Score Dimensions Are Weighted the Way They Are
The weighting of maintenance at 40% is not an arbitrary decision — it reflects a specific empirical finding about what predicts long-term package viability. In a retrospective analysis of packages that went from healthy to abandoned, the maintenance signals deteriorated first, before download counts declined, before community activity slowed, and well before security vulnerabilities accumulated. Release cadence drop-off and issue response time increase are the earliest measurable signals that a maintainer is disengaging. By the time a package's download count shows meaningful decline, the abandonment has usually been underway for six to eighteen months.
The 25% community weighting captures a different kind of health than maintenance: the health of the package's ecosystem around it. A package can have an excellent, committed maintainer and still be fragile if nobody outside the core author is contributing, no plugins or integrations extend it, and no community documentation exists beyond the official README. Packages with strong community health survive maintainer transitions — they have enough external contributors and institutional knowledge that a new maintainer can step in without the project stalling. Packages with low community health and a single committed author are one burnout or career change away from abandonment, regardless of how healthy the current maintenance metrics look.
Security at 15% reflects not the importance of security — it is critically important — but the base rate at which the most maintained packages accumulate open CVEs. Well-maintained packages rarely carry open vulnerabilities for long; their security score is high almost automatically. The security dimension is most useful at the low end of the scale, where it surfaces packages that have known vulnerabilities sitting unpatched for months and where maintainers have demonstrably stopped addressing security disclosures. For the high-scoring packages in this article, security is not a differentiator; it is a baseline that all of them meet.
The decision not to weight download count more heavily deserves explanation, because most competing package evaluators treat popularity as the primary signal. Raw download count has a compounding selection bias problem: packages become popular partly because they were already popular, not purely because they are the best option. A package that was the default recommendation in a popular tutorial series from three years ago carries download volume that no longer reflects active developer preference. Download velocity trend at a shorter window — month-over-month growth — is a much cleaner signal of current adoption momentum than the absolute number, because it reflects what developers are choosing now rather than what was installed into three-year-old projects that still run npm install.
How Health Scores Diverge from Raw Download Counts
The most instructive comparisons in package health analysis are the cases where health score and download count point in opposite directions. These divergences reveal where raw popularity is a misleading proxy for current project health, and they are where the health score framework provides the most decision-relevant signal.
Create React App is the canonical example. At peak, CRA had over one hundred thousand GitHub stars and millions of weekly downloads. Its health score would have fallen steeply in 2022 and 2023 because the maintenance signals — issue response time, release cadence, active contributors — all degraded well before the official deprecation announcement. The download count remained high because existing projects continued npm install-ing it, but new project creation had already shifted to Vite. The divergence appeared in velocity: CRA's week-over-week downloads were declining while its absolute count remained impressively large. A health score that weighted maintenance and velocity appropriately flagged this earlier than any headline announcement.
Moment.js shows the same pattern from a different angle. Moment's download count in 2024 remains one of the highest on npm — it is installed in an enormous number of legacy projects that run npm install regularly. But its health score reflects the reality that its maintainers formally declared it in maintenance mode, the issue queue is not accepting new feature requests, and the download velocity for new projects is a small fraction of its legacy install base. The divergence between legacy-install-driven download count and genuinely new adoption is exactly what download velocity trend captures: Moment's absolute count is high, its month-over-month velocity from new projects is near zero.
The opposite divergence also occurs: packages with relatively modest download counts but excellent health scores. A specialized database driver for a niche database, a utility library for a specific domain like financial calculations or geospatial operations, or a component library for a specific framework may have annual download counts that would look unremarkable in isolation but health scores that reflect intensive, responsive maintenance serving a well-defined user community. For domain-specific package choices, the health score is far more informative than the download count because the download universe for that domain is bounded — a package serving ten thousand weekly downloads in a category where the total addressable download base is fifty thousand is achieving high penetration, not low popularity.
The Relationship Between Organizational Backing and Health Scores
There is a meaningful but not deterministic correlation between corporate or foundation backing and package health scores. Backed packages — Vite (Evan You's full-time work, VoidZero funding), Playwright (Microsoft), Tailwind (Tailwind Labs commercial entity), Next.js (Vercel), TanStack Query (Tanner Linsley's company) — achieve their high scores partly because the funding model removes the maintainer sustainability problem that kills otherwise excellent packages. Full-time maintainers can triage issues during working hours, respond to security disclosures within SLA windows, and plan releases systematically rather than squeezing maintenance into weekend hours.
However, organizational backing is neither sufficient nor necessary for high health scores. Several of the highest-scoring packages are maintained by small teams without formal corporate backing but with clear commercial incentive through sponsorship, consulting, or the author's own professional use of the tool. Zod is Colin McDonnell's project, and while it has reached the scale where it likely has significant sponsorship, it was not backed by a company when it achieved its current dominance. The Pmndrs packages — Zustand, Jotai, React Three Fiber — are maintained under a loose collective organization that has no commercial entity behind it. Their health scores are excellent because the maintainers are active, opinionated, and personally invested in the packages' quality.
The warning flag is the inverse case: organizational backing combined with unclear internal ownership. Large company open-source projects sometimes score poorly on maintenance metrics because the internal team is understaffed, distracted by commercial priorities, or structured such that no one person has clear ownership of the response queue. The question to ask about backed packages is not "is it funded?" but "is there a named person or small named team who owns this?" Diffuse ownership at scale produces the same abandonment signatures as solo maintainer burnout, just for different reasons.
What Packages That Maintain High Scores Over Time Do Differently
Some packages achieve a high health score at launch and then watch it decline as the initial burst of maintainer energy is not sustained. Others maintain their scores over years through disciplined processes that are observable and learnable. The differences are worth examining because they reveal what sustainable open-source maintenance actually looks like in practice.
Release automation is the most consistent differentiator. Packages that maintain high maintenance scores over time almost universally have automated release pipelines: commits following conventional commit format trigger automated changelog generation, version bumping, and publishing. The manual overhead of preparing a release is nearly zero, which means there is no activation energy barrier to shipping a patch when a bug is fixed. Packages that require manual release preparation — assembling a changelog by hand, running build steps in the right order, remembering to tag the release — drift toward longer release intervals as that overhead accumulates. The correlation between "has automated release pipeline" and "maintains consistent release cadence" is strong enough that it is worth checking directly when evaluating a package's sustainability.
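The decision at the heart of those automated pipelines is mapping conventional-commit messages to a semver bump. Real tools (semantic-release, Changesets) do far more; this hedged sketch only shows the core rule.

```typescript
type Bump = "major" | "minor" | "patch" | null;

// Conventional-commit convention: "feat:" -> minor, "fix:" -> patch,
// "!" after the type or a BREAKING CHANGE footer -> major.
// Returns null when nothing in the range warrants a release.
function bumpFor(commits: string[]): Bump {
  let bump: Bump = null;
  for (const msg of commits) {
    if (msg.includes("BREAKING CHANGE") || /^\w+(\(.+\))?!:/.test(msg)) {
      return "major"; // major wins outright
    }
    if (/^feat(\(.+\))?:/.test(msg)) {
      bump = "minor";
    } else if (/^fix(\(.+\))?:/.test(msg) && bump !== "minor") {
      bump = "patch";
    }
  }
  return bump;
}
```

Because the bump is computed mechanically from commit messages, shipping a patch costs the maintainer nothing beyond merging the fix, which is exactly why automated pipelines correlate with consistent cadence.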
Issue triage discipline separates high-health packages from packages that have good code but poor community management. The packages that maintain excellent issue response time metrics do so through systematic first-response processes: applying labels within 24 hours of issue creation to indicate the category and priority, acknowledging bugs with a reproduction request before committing to a fix timeline, and closing duplicate issues with a pointer to the canonical thread. This process does not require more total maintainer time than an ad-hoc approach — it may require less, because categorized and labeled issues are easier to batch-process. But it produces dramatically better response time metrics and signals to new contributors that the repository is well-managed, which in turn attracts more contributors to help with the triage load.
The packages with the longest sustained high health scores have all made the transition from being one person's project to being a community that the original author helps govern. This transition is visible in contributor diversity metrics: packages where the top five contributors include people outside the original author's organization are more resilient than packages where the core team is effectively a single person regardless of how many occasional contributors submit small patches. Zustand has achieved this — multiple active contributors outside Daishi Kato's immediate network contribute meaningful work. Vite has achieved it decisively with a core team distributed across multiple companies. Hono is in the transition, with Yusuke Wada actively cultivating a contributor community that can sustain the project's velocity as it grows.
Gaming Risk: How Health Score Manipulation Manifests
No metric system exists without gaming pressure, and health scores are no exception. Understanding how gaming manifests in package health metrics helps you identify when a package's score is not reflecting genuine project health.
Download count manipulation is the most commonly attempted vector and the most easily detected. Coordinated download bots can inflate weekly download numbers — npm does not filter all bot traffic — and a sudden jump in downloads with no corresponding community activity (no new GitHub stars, no new issues, no new dependents in the dependency graph) is a clear signal. Download velocity manipulation is harder to sustain because the natural metric fluctuations in genuine user adoption look different from bot-generated flat-rate increments.
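The cross-check described above, a download spike with no matching movement in community signals, can be sketched directly. All field names, numbers, and thresholds here are illustrative assumptions.

```typescript
// One week of observed signals for a package.
interface WeeklySnapshot {
  downloads: number;
  newStars: number;
  newIssues: number;
  newDependents: number;
}

// A large week-over-week download jump with flat community activity
// is the decoupling pattern that suggests bot-driven inflation.
function suspiciousSpike(prev: WeeklySnapshot, curr: WeeklySnapshot): boolean {
  const downloadJump = (curr.downloads - prev.downloads) / prev.downloads;
  const communityMoved =
    curr.newStars > prev.newStars ||
    curr.newIssues > prev.newIssues ||
    curr.newDependents > prev.newDependents;
  // 50% WoW is an arbitrary illustrative threshold.
  return downloadJump > 0.5 && !communityMoved;
}
```

Genuine adoption spikes (a conference talk, a framework making the package its default) move the community signals too, so they pass this check.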
Star inflation is better understood publicly because it is more visible. Services that sell GitHub stars have been around for years; health score calculations defend against them by using star growth rate over a recent window rather than absolute count, and by cross-referencing stars against dependency-graph usage. A package with forty thousand stars and three hundred weekly downloads shows an obvious decoupling between stated popularity and actual use.
Release frequency gaming is subtler and less common, but it occurs when maintainers ship trivial patch releases — version bumps, README corrections, dependency updates with no actual code changes — to maintain the appearance of active development in the release cadence metric. The counter-signal is commit quality rather than commit frequency: a release history full of single-line diff releases with no substance is a weaker signal than monthly releases that each contain meaningful work. Any serious evaluation of a package should spend five minutes reading the last six months of changelog entries to assess whether the release cadence represents real investment.
See health scores for any npm package at PkgPulse.
See also: How GitHub Stars Mislead Package Selection and Package Maintenance Scores: Who, The Average Lifespan of an npm Package.
See the live comparison
View zustand vs. jotai on PkgPulse →