
The Most Starred vs. the Most Downloaded: When GitHub Stars and npm Downloads Disagree (2026)

PkgPulse Team

TL;DR

GitHub stars and npm downloads measure entirely different things. Stars measure a moment of enthusiasm at a point in time. Downloads measure ongoing operational usage. The divergence between them — sometimes massive — reveals the lifecycle stages of packages: hype → adoption → commoditization → decline. The most interesting packages are the ones where the two metrics tell contradictory stories.

Key Takeaways

  • Stars peak 6-18 months after launch then plateau; downloads keep growing if adopted
  • Create React App: 102K stars, declining downloads — stars outlive the product
  • semver: zero GitHub hype, 95M dependents — critical infrastructure, not trendy
  • Zustand vs Redux: similar stars, Zustand growing faster in downloads
  • Best signal combination: growing downloads + reasonable stars + high issue close rate

The Four Quadrants

                      HIGH downloads
                           |
        Infrastructure    |    Winners
        (boring, critical)|    (both metrics growing)
        semver, glob      |    React, Vite, Next.js
                          |    Zustand, Tailwind
LOW stars ────────────────┼──────────────────── HIGH stars
                          |
        Dead / Niche      |    Hype > Adoption
        (nobody using,    |    (Starred but not used)
         nobody excited)  |    Create React App (stars high, declining use)
                          |    Many "show HN" projects
                      LOW downloads
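The quadrant boundaries above can be sketched as a tiny classifier. The cutoffs here (5K stars, 1M weekly downloads) are illustrative assumptions, not PkgPulse's actual thresholds, and the numbers in the demo calls are the rough figures cited elsewhere in this post:

```javascript
// Hypothetical quadrant classifier. The 5K-star and 1M-download
// thresholds are assumptions chosen for illustration only.
function classifyQuadrant(stars, weeklyDownloads) {
  const highStars = stars >= 5_000;
  const highDownloads = weeklyDownloads >= 1_000_000;
  if (highStars && highDownloads) return "winner";
  if (!highStars && highDownloads) return "infrastructure";
  if (highStars && !highDownloads) return "hype > adoption";
  return "dead / niche";
}

console.log(classifyQuadrant(1_200, 95_000_000)); // semver-scale → "infrastructure"
console.log(classifyQuadrant(50_000, 8_000_000)); // Zustand-scale → "winner"
console.log(classifyQuadrant(8_000, 1_500));      // demo-ware → "hype > adoption"
```

Any real cutoff would need to vary by category — 1M weekly downloads is modest for a utility but enormous for a niche framework.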

High Stars, Low/Declining Downloads

Create React App — 102K Stars, -35% Downloads YoY

# Why the divergence:
# Stars accumulated 2016-2021 when CRA was THE way to start React
# The React docs stopped recommending CRA in 2023, and the team
# formally deprecated it in 2025
# Stars don't go down when a project is deprecated
# Downloads: declining as devs migrate to Vite

# The star-to-reality gap:
# Someone landing on github.com/facebook/create-react-app
# sees 102K stars and thinks "this must be the right choice"
# Reality: the React docs don't even mention CRA anymore

# What to do: Always check npm downloads + release date + README
# Stars alone are a trap

Various "Awesome List" Aggregators — Stars in Thousands, Zero Downloads

# GitHub pattern: "awesome-react", "awesome-javascript"
# These repos accumulate stars because people bookmark them
# They're not npm packages at all — just lists of links
# But appear in "trending" alongside actual packages

# Lesson: Stars come from GitHub browsing behavior,
# not from production usage

Ambitious Side Projects — Stars Without Adoption

# Pattern: innovative open source project
# Gets HN front page → 3K stars in a week
# Downloads: ~200/week (developers try it, don't use in production)
# 6 months later: 4K stars, still 200 downloads

# Why downloads don't follow stars:
# - The idea was exciting but the execution wasn't production-ready
# - Better alternatives exist (star was for inspiration, not adoption)
# - Niche use case (impressive but only for specific needs)

# Example pattern: novel state management library with cool demo
# Stars: 8K | Downloads: 1,500/week
# vs Zustand: Stars: 50K | Downloads: 8M/week
# The gap reveals real adoption

Low Stars, High Downloads

semver — 1.2K Stars, 95M Dependents

# The quintessential infrastructure package
# Nobody stars semver. Nobody blogs about semver.
# But every package manager on earth depends on it.

# Why stars ≠ importance:
# - Developers don't star packages they don't notice
# - Infrastructure is invisible when working correctly
# - You "discover" Tailwind. You don't "discover" semver.

# Other packages in this category:
# - glob: ~400 stars, 30M+ dependents
# - minimatch: ~1K stars, 30M+ dependents
# - ms: ~900 stars, 15M+ dependents ("2 days" → ms conversion)
# - bytes: ~600 stars, 5M+ dependents
# - mime-db: ~80 stars, used by literally everything

Transitive Dependencies — Stars Near Zero, Usage Massive

# Packages you've never heard of but are definitely installed:
npm ls --all --parseable 2>/dev/null | wc -l  # one line per installed package; usually hundreds

# tiny-invariant: ~1K GitHub stars, ~20M weekly downloads
# Used by React Router, which millions of apps depend on

# @babel/runtime: no repo of its own (it lives in the Babel monorepo), 30M+ weekly downloads
# Every Babel-compiled app installs this

# These packages don't need stars.
# Stars are for discovery. These are already discovered — by other packages.

The Interesting Cases: When They Diverge Mid-Life

Moment.js — Stars Accumulated, Downloads Declining

# Moment.js trajectory:
# 2015-2020: Both growing. Stars: 0 → 47K. Downloads: growing fast.
# 2020: Maintainers post "We recommend not using Moment.js anymore"
# 2020-2026: Stars: stagnant at 47K. Downloads: -28% YoY but still 14M/week.

# What this tells us:
# - Stars: frozen in time (nobody new starring a deprecated library)
# - Downloads: slowly declining as teams migrate away
# - The tail: enterprises move slowly, legacy apps persist
# - Why still 14M/week: every app that started with Moment and hasn't migrated

# The star plateau happened when the "star as bookmark" stopped
# The download decline will take 5-10 more years to fully play out
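The length of that tail can be sanity-checked with simple compounding, under the simplifying assumption that the -28% YoY decline cited above stays constant:

```javascript
// Project a download tail by compounding a constant YoY decline.
// Assumes the -28%/year rate holds, which real migrations rarely do.
function projectWeeklyDownloads(current, yoyDecline, years) {
  return current * Math.pow(1 - yoyDecline, years);
}

for (const years of [1, 5, 10]) {
  const projected = projectWeeklyDownloads(14_000_000, 0.28, years);
  console.log(`${years}y: ~${(projected / 1e6).toFixed(1)}M/week`);
}
```

Even at -28% per year, 14M/week only falls to roughly 2.7M/week after 5 years and about 0.5M/week after 10 — which is why a "5-10 more years" wind-down is plausible.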

Redux vs Zustand — Similar Stars, Diverging Downloads

# Redux: 60K stars | Zustand: 50K stars (similar)
# Redux downloads: ~8M/week (but ~60% are Redux Toolkit indirect installs)
# Zustand downloads: ~8M/week and growing +25% YoY

# The star comparison: "Redux and Zustand are equally popular"
# The download reality: Zustand is growing; Redux direct usage is flat/declining

# What's happening:
# Redux stars: accumulated over 10 years, from when Redux was mandatory React state
# Zustand stars: accumulated over 4 years, from active enthusiasm today

# Younger stars = more signal. The velocity of star accumulation matters.
# Zustand got 50K stars in 4 years. Redux took 10 years to get 60K.
# Zustand is growing faster in both metrics.
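The velocity comparison above is just stars divided by age. A minimal sketch, using the approximate figures from this section:

```javascript
// Average star velocity: stars per month over a project's lifetime.
// Ages here are approximate; real velocity is better read from a
// star-history chart, since accumulation is rarely uniform.
function starVelocity(stars, ageYears) {
  return stars / (ageYears * 12);
}

const redux = starVelocity(60_000, 10);  // ~500 stars/month averaged
const zustand = starVelocity(50_000, 4); // ~1,042 stars/month averaged
console.log({ redux: Math.round(redux), zustand: Math.round(zustand) });
```

Averaged over their lifetimes, Zustand's velocity is roughly double Redux's despite the similar raw totals.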

The Right Way to Use Both Metrics

# Stars are useful for:
✅ Initial discovery ("is this a real project?")
✅ Relative standing at a point in time
✅ Community interest level when a project launched

# Stars are NOT useful for:
❌ Evaluating current maintenance status
❌ Comparing packages of different ages
❌ Measuring actual production usage
❌ Deciding if a package is actively growing

# Downloads are useful for:
✅ Actual adoption measurement
✅ Growth trajectory (week-over-week, year-over-year)
✅ Ecosystem health (more downloads = more users finding bugs)
✅ Comparing alternatives at the same time

# Downloads are NOT useful for:
❌ Separating new adoption from legacy usage
❌ Comparing different categories (testing vs state management)
❌ Small packages in specialized niches (100 downloads = the whole niche)

# The best combination:
→ Stars: "Does this have community legitimacy?"
→ Downloads + TREND: "Is this actually being adopted?"
→ Release date + GitHub activity: "Is this maintained?"
→ PkgPulse health score: combines all of the above

# Quick evaluation:
npm view package-name --json | jq '{
  latest: .version,
  modified: .time.modified
}'
# npm view has no download data; weekly downloads come from the
# npm downloads API instead:
curl -s https://api.npmjs.org/downloads/point/last-week/package-name | jq '.downloads'
# Then check npmtrends.com for the trend chart

Package Discovery: The Star Funnel

How developers discover packages (estimated):

1. GitHub Trending / HN / Dev.to post     → Stars spike
2. Tutorial / video / blog post            → Star growth continues
3. Mentioned in popular project (shadcn uses X) → Download spike
4. Included in popular starter template    → Large download jump
5. Appears in "State of JS" survey        → Stars + downloads grow together
6. Becomes the "default choice" in category → Downloads grow, stars plateau

The funnel explains the divergence:
→ Early: stars > downloads (excitement before adoption)
→ Middle: both growing together
→ Mature: downloads > rate of star growth (usage continues, novelty fades)
→ Legacy: stars fixed, downloads slowly declining

Read stars as: "how exciting was this package at its peak?"
Read downloads as: "how many teams are running this today?"
Both matter, but for different questions.
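The four funnel endpoints can be sketched as a classifier over YoY growth rates. The 5% "plateau" cutoff and the stage labels are illustrative assumptions, not a standard taxonomy:

```javascript
// Infer a lifecycle stage from YoY growth rates (as fractions,
// e.g. 0.25 = +25%). The 5% plateau threshold is an assumption.
function lifecycleStage(starGrowth, downloadGrowth) {
  const PLATEAU = 0.05;
  const starsGrowing = starGrowth > PLATEAU;
  const downloadsGrowing = downloadGrowth > PLATEAU;
  if (starsGrowing && !downloadsGrowing) return "early (hype before adoption)";
  if (starsGrowing && downloadsGrowing) return "middle (both growing)";
  if (!starsGrowing && downloadsGrowing) return "mature (usage outlasts novelty)";
  return "legacy (stars frozen, downloads declining)";
}

console.log(lifecycleStage(0.80, 0.02));  // fresh HN darling → "early …"
console.log(lifecycleStage(0.01, -0.28)); // Moment-style → "legacy …"
```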

The Velocity Signal: Rate of Star Accumulation

A common mistake in package selection is looking at raw star counts without considering time. A package that accumulated 30,000 stars over eight years signals something very different from a package that accumulated 30,000 stars in eighteen months. The velocity — stars per month — is a better proxy for current community excitement than the cumulative total. GitHub's trending algorithm surfaces packages based on recent star velocity, which is why a new tool can appear on the trending page with far fewer stars than established alternatives.

Developers can calculate approximate star velocity by looking at a package's GitHub star history (tools like star-history.com chart this). A package with a flat star history that peaked in 2020 and has added fewer than 500 stars in the past year is in maintenance or decline mode, regardless of its total star count. A package with an accelerating star history — each month adding more stars than the month before — signals active growth in community interest. This velocity reading requires looking beyond the number on the repository page, which is why raw star counts are such a poor decision-making signal on their own.
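Given a monthly series of cumulative star totals (the kind of data star-history.com charts), "accelerating" just means each month's gain exceeds the previous month's. A minimal sketch of that check:

```javascript
// Detect accelerating star growth from cumulative monthly totals.
// "Accelerating" = every month-over-month gain is larger than the
// one before it, per the definition in the text above.
function isAccelerating(monthlyTotals) {
  const gains = [];
  for (let i = 1; i < monthlyTotals.length; i++) {
    gains.push(monthlyTotals[i] - monthlyTotals[i - 1]);
  }
  return gains.every((g, i) => i === 0 || g > gains[i - 1]);
}

console.log(isAccelerating([1000, 1300, 1700, 2300]));     // gains 300, 400, 600 → true
console.log(isAccelerating([40000, 40200, 40350, 40450])); // gains shrinking → false
```

A real implementation would smooth out noise (e.g. compare rolling quarters) rather than demand strictly increasing gains every single month.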

Why Download Counts Include False Positives

npm download counts have their own accuracy problems that inflate numbers for heavily-depended-upon packages. Every time a developer runs npm install in a project that directly or transitively depends on a package, that package's download count increments. This means packages like semver, chalk, and minimatch get downloaded not because millions of developers are consciously choosing them, but because they are dependency-tree leaves that get pulled in automatically by tools and frameworks that millions of developers install.

This transitive dependency inflation creates a parallel to the stars problem: just as stars measure enthusiasm without adoption, high download counts can measure transitive inclusion without conscious selection. The most analytically honest npm metric is direct (non-transitive) dependents — packages that explicitly list your package in their dependencies. npm's own website shows this as "Weekly Downloads" alongside the "Dependents" count, and tools like Snyk and Socket's dependency analysis show the direct-vs-transitive breakdown. A package with 10M downloads and only 500 direct dependents is a transitive infrastructure package; a package with 10M downloads and 50,000 direct dependents is genuinely chosen by developers.
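That downloads-per-direct-dependent heuristic is easy to encode. The 1,000:1 cutoff below is an illustrative assumption, not an npm or PkgPulse constant:

```javascript
// Classify a package as transitively pulled-in vs consciously chosen,
// using the downloads-to-direct-dependents ratio described above.
// The 1,000:1 cutoff is an assumption for illustration.
function adoptionProfile(weeklyDownloads, directDependents) {
  const ratio = weeklyDownloads / directDependents;
  return ratio > 1_000
    ? "transitive infrastructure (pulled in automatically)"
    : "consciously chosen (listed directly by many packages)";
}

console.log(adoptionProfile(10_000_000, 500));    // ratio 20,000 → transitive
console.log(adoptionProfile(10_000_000, 50_000)); // ratio 200 → chosen
```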

The "Shooting Star" Anti-Pattern

A specific pattern worth recognizing is the shooting star: a package that gets significant HN or Reddit attention, accumulates thousands of stars in days, sees a brief spike in downloads from developers evaluating it, and then returns to baseline. The downloads spike differently from stars: a shooting star package will show a sharp download spike for one or two weeks around the viral moment, then drop back to a much lower sustained level. The stars remain, but the genuine usage adoption did not follow.

Distinguishing shooting star packages from genuinely adopted ones requires looking at the download trend over a 3-6 month window after the initial spike. Packages that maintain or grow their download count after the spike had genuine utility. Packages that drop back to pre-spike levels within a month had momentary interest but failed the "is this actually useful" test at scale. React Query's trajectory — viral attention followed by sustained growth — exemplifies the genuine adoption pattern. Various novel state management experiments show the shooting star pattern: exciting at launch, flat or declining after the initial burst.
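The 3-6 month check described above can be sketched as a comparison between the pre-spike baseline and the sustained level a quarter later. The 1.5× "fell back near baseline" threshold is an illustrative assumption:

```javascript
// Shooting-star detector: did weekly downloads fall back toward the
// pre-spike baseline 13+ weeks after the viral moment?
// The 1.5x-of-baseline threshold is an assumption for illustration.
function isShootingStar(weeklyDownloads, spikeWeekIndex) {
  const baseline = weeklyDownloads
    .slice(0, spikeWeekIndex)
    .reduce((a, b) => a + b, 0) / spikeWeekIndex;
  const after = weeklyDownloads.slice(spikeWeekIndex + 13); // 3+ months later
  if (after.length === 0) return null; // too early to tell
  const sustained = after.reduce((a, b) => a + b, 0) / after.length;
  return sustained < baseline * 1.5;
}

const spike = [200, 200, 200, 200, 5000];        // viral moment at week 4
const fadeBack = spike.concat(Array(15).fill(210));   // interest evaporated
const sustained = spike.concat(Array(15).fill(4000)); // adoption stuck
console.log(isShootingStar(fadeBack, 4));  // true
console.log(isShootingStar(sustained, 4)); // false
```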

Platform and CI Download Inflation

Another source of download inflation is CI/CD pipelines. A package downloaded by a CI pipeline that runs 500 times per day contributes 3,500 downloads per week — equivalent to 3,500 individual developers, but actually representing a single project. Teams with aggressive CI pipelines (multiple build steps, matrix builds across Node.js versions) can generate surprising download counts from a single active project. npm's download data does not distinguish between human-initiated installs and automated CI installs.

Lock files alone do not mitigate this: a committed package-lock.json pins exact versions, but npm ci on a clean runner still fetches every tarball from the registry, and each fetch counts as a download. Only a warm npm cache or a CI-level dependency cache keeps the request from being counted. The practical implication is that download counts for popular CI tools (jest, eslint, prettier, ts-jest) are significantly inflated by CI usage. When these tools show 50M weekly downloads, the meaningful signal is that they are used in an enormous number of projects — but the number itself should not be taken as 50M individual developer choices per week.
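To get a feel for the scale of this effect, you can back out how many projects a download count could represent under assumed CI habits. The runs-per-day figures here are pure guesses for illustration:

```javascript
// Estimate how many projects a weekly download count could represent
// if every download came from CI. ciRunsPerDay is an assumed average,
// not a measured figure.
function equivalentProjects(weeklyDownloads, ciRunsPerDay) {
  return Math.round(weeklyDownloads / (ciRunsPerDay * 7));
}

// The paragraph's example: one project, 500 CI runs/day → 3,500/week.
console.log(equivalentProjects(3_500, 500)); // 1
// A 50M/week tool, if every consumer ran CI 20x/day:
console.log(equivalentProjects(50_000_000, 20)); // 357143 projects, not 50M developers
```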

Using PkgPulse's Combined Health Score

PkgPulse's health score was designed specifically to address the inadequacy of single-metric package evaluation. The score combines download count, download trend (growing vs. declining), release frequency, time since last release, open issue count, issue close rate, TypeScript support, and license type into a single composite signal. Each component addresses a different dimension of package health that neither stars nor downloads captures alone.

The most informative use of the health score is in conjunction with raw metrics: a package with a high score and growing downloads is a strong candidate. A package with a high score but flat downloads might be a mature utility that simply isn't flashy enough to attract new users despite being well-maintained; consider it stable infrastructure. A package with a declining health score despite high downloads is a warning sign worth investigating: perhaps it is accumulating technical debt, its maintainers are stepping back, or unaddressed security reports are piling up. The combination of dimensions catches failure modes that any single metric misses.
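The shape of such a composite score can be sketched as a weighted sum of normalized components. To be clear: the weights, normalizations, and component choices below are invented for illustration and are NOT PkgPulse's actual formula — they only mirror the list of inputs named above:

```javascript
// Hypothetical composite health score (0-100). Weights and
// normalizations are assumptions, not PkgPulse's real formula.
function healthScore(pkg) {
  const clamp = (x) => Math.max(0, Math.min(1, x));
  const parts = {
    downloads: clamp(Math.log10(pkg.weeklyDownloads + 1) / 8), // 100M/wk ≈ 1.0
    trend: clamp((pkg.downloadTrendYoY + 0.5) / 1.0),          // -50%..+50% → 0..1
    freshness: clamp(1 - pkg.daysSinceRelease / 365),          // released this year?
    issues: clamp(pkg.issueCloseRate),                          // already 0..1
    typescript: pkg.hasTypes ? 1 : 0,
  };
  const weights = { downloads: 0.25, trend: 0.3, freshness: 0.2, issues: 0.15, typescript: 0.1 };
  let score = 0;
  for (const k of Object.keys(parts)) score += parts[k] * weights[k];
  return Math.round(score * 100);
}

// A growing, well-maintained package vs a declining one:
console.log(healthScore({ weeklyDownloads: 8_000_000, downloadTrendYoY: 0.25,
  daysSinceRelease: 20, issueCloseRate: 0.9, hasTypes: true }));  // 86
console.log(healthScore({ weeklyDownloads: 14_000_000, downloadTrendYoY: -0.28,
  daysSinceRelease: 400, issueCloseRate: 0.3, hasTypes: false })); // 33
```

Note how the trend weight dominates: the declining package scores low despite having more raw downloads, which is exactly the failure mode a single-metric evaluation misses.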

The Maintenance Signal Gap

One dimension that neither stars nor downloads captures well is maintainer activity quality. A package with frequent commits and quick issue responses is healthier than one with sporadic commits and issues sitting open for months, even if their download counts are similar. The GitHub issue close rate — what percentage of issues get closed, and how quickly — is one of the better proxy metrics for maintainer responsiveness, but it requires digging into the repository rather than looking at a single number.

When evaluating packages for long-term use in production systems, the 1-year issue close rate matters more than current downloads. A package that closes 90% of issues within 30 days signals an active, engaged maintainer. A package where issues sit open for 6-12 months without responses signals that the maintainer has deprioritized the project, even if downloads remain high from existing adoption. This is the sustainability signal that stars and downloads both fail to capture, and it requires looking at repository activity data that most package registries surface poorly.
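The 30-day close-rate check described above is straightforward to compute once you have issue data. The issue shape here (openedAt/closedAt as millisecond timestamps) is an assumption for the sketch, not GitHub's API schema:

```javascript
// Fraction of issues closed within `days` of being opened.
// Issue objects use assumed { openedAt, closedAt } ms timestamps;
// closedAt is null for still-open issues.
function closeRateWithin(issues, days) {
  const windowMs = days * 24 * 60 * 60 * 1000;
  const closedFast = issues.filter(
    (i) => i.closedAt !== null && i.closedAt - i.openedAt <= windowMs
  ).length;
  return closedFast / issues.length;
}

const day = 24 * 60 * 60 * 1000;
const issues = [
  { openedAt: 0, closedAt: 3 * day },  // closed in 3 days
  { openedAt: 0, closedAt: 10 * day }, // closed in 10 days
  { openedAt: 0, closedAt: 90 * day }, // closed, but slowly
  { openedAt: 0, closedAt: null },     // still open
];
console.log(closeRateWithin(issues, 30)); // 0.5
```

With real data you would pull issues (excluding pull requests) from the repository's issue tracker and compute this over a trailing 1-year window, per the text above.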

Compare stars, downloads, and health scores side-by-side at PkgPulse.

See also: Which Packages Have the Most Open Issues?, How GitHub Stars Mislead Package Selection, and 20 Fastest-Growing npm Packages in 2026 (Data-Backed).

The 2026 JavaScript Stack Cheatsheet

One PDF: the best package for every category (ORMs, bundlers, auth, testing, state management). Used by 500+ devs. Free, updated monthly.