
Which Packages Have the Most Open Issues? 2026

PkgPulse Team

TL;DR

A package with 5,000 open issues can be healthier than one with 50 — if those 5,000 are being triaged and resolved. Open issue count is the most misused metric in package evaluation. Next.js carries 2,000+ open issues and is among the best-maintained full-stack frameworks, while a side project with 8 open issues may have seen no maintainer activity in 18 months. The signal is issue RESOLUTION RATE and RESPONSE TIME, not raw count.

Key Takeaways

  • Issue count correlates with popularity — more users = more issues filed
  • Resolution rate (issues closed / issues opened per month) is the real signal
  • Next.js: 2,000+ open issues — but 1,500+ closed per month, full-time team
  • P0 triage — how quickly are critical/security issues handled?
  • "Stale" issues — well-run repos auto-close inactive issues; stale = normal

The Issue Count Trap

Comparing issue counts across packages is like comparing
number of support tickets across companies of different sizes.

Amazon has more support tickets than a small SaaS startup.
That doesn't mean Amazon has worse customer service.

Package size and popularity drive issue volume:
→ Next.js: used in everything from hobby sites to 500M-page projects → millions of edge cases
→ Local side project: 5 users, 5 issues

Better questions to ask:
1. What percentage of issues filed this month were closed?
2. How quickly does a maintainer first respond to new issues?
3. Are CRITICAL issues (security, P0 bugs) addressed quickly?
4. Is the ratio of "closed issues" to "open issues" healthy?
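Questions 1 and 4 reduce to simple arithmetic once you have the counts. A minimal sketch, using hypothetical monthly numbers (nothing here is live data):

```shell
# Hypothetical monthly counts for a package under evaluation (not live data)
opened_this_month=120
closed_this_month=90

# Resolution rate: issues closed / issues opened in the same window.
# Near or above 1.0 means the backlog is shrinking; well below 1.0 means it grows.
resolution_rate=$(awk -v c="$closed_this_month" -v o="$opened_this_month" \
  'BEGIN { printf "%.2f", c / o }')

echo "resolution rate: $resolution_rate"
```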

High Issue Count, Well-Maintained Packages

Next.js — 2,000-3,000 Open Issues

# Why so many open issues:
# - Used by millions of developers
# - Complex product (bundler, router, rendering, edge runtime, etc.)
# - Teams expect framework-level support quality
# - Feature requests filed as issues (many are aspirational, not bugs)

# Why it's still well-maintained:
# - 1,500+ issues closed per month (roughly keeping pace with new reports)
# - Vercel engineers triaging within 24-48h for critical issues
# - Labels: bug, enhancement, needs-investigation, stale
# - "Stale" bot closes inactive issues after 30 days
# - Security issues: private disclosure → patch in 24-72h

# Signal check:
# Open: 2,400 | Closed (all-time): 45,000 | Resolved: ~95% of all issues filed
# Average first response: < 48h
# Critical bugs fixed: within 1 week
# Assessment: Excellent maintenance despite large issue count

TypeScript — 5,000+ Open Issues

# TypeScript has one of the largest open issue backlogs in npm-adjacent packages
# Why:
# - Every TypeScript user is a potential bug reporter
# - Type inference edge cases are infinite
# - Feature requests accumulate (breaking changes require RFCs)
# - Microsoft teams move deliberately, not at startup speed

# But:
# - Critical bugs (type system correctness): patched in days
# - Response to confirmed bugs: typically 1-2 weeks
# - Feature requests: months to years (by design)
# - TypeScript team quality: exceptional
# Assessment: High issue count reflects ambition + scale, not poor maintenance

Low Issue Count, Problematic Packages

# A package with 5 open issues might have:
# ❌ Only 5 users (nobody filing issues)
# ❌ Maintainer who closes issues without fixing them
# ❌ No issue tracker (issues disabled or maintainer uses email)
# ❌ Auto-closing bot set too aggressively

# Signs of a problematic low-issue package:
# - Last issue opened was 8 months ago → nobody using it
# - Issue closed with "works for me" with no further investigation
# - No issue templates → high friction to file bugs
# - README says "please use email for issues"

# These are worse signs than a package with 2,000 active issues:
# an active issue tracker at least shows engagement

How to Evaluate Issue Tracker Quality

# GitHub issue tracker signals:

# ✅ Good signs:
# - Issues have labels: bug, enhancement, good-first-issue, stale
# - Maintainer responds within days on recent issues
# - Linked PRs on bug issues (fixes are coming)
# - Security issues closed quickly (or via private disclosure)
# - Regular "here's what we're working on" updates

# ⚠️ Concerning signs:
# - Newest issues have no response for 30+ days
# - P0 bugs sitting open for months
# - PRs opened and never reviewed
# - Many issues labeled "stale" with no resolution
# - "This project is looking for maintainers"

# ❌ Red flags:
# - Issues locked/disabled entirely
# - Critical security issues open for 6+ months
# - Repository archived with no migration path
# - Maintainer comments: "I don't use this anymore"

# Automation tip: GitHub's "is:open is:issue" + sort by "oldest"
# Shows the longest-sitting unresolved issues
# This is your best signal for what the maintainer won't address
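That search is easy to turn into a link you can bookmark per dependency. A sketch — the repo name is just an example, and GitHub's search UI accepts the unencoded colons:

```shell
repo="pmndrs/zustand"   # example owner/name pair

# Open issues, oldest first: the longest-sitting unresolved reports
query="is:issue is:open sort:created-asc"
url="https://github.com/$repo/issues?q=$(printf '%s' "$query" | tr ' ' '+')"

echo "$url"
```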

Issue metrics (approximate; Open = current backlog, Closed = issues closed per month):

                    Open    Closed   Resolution %   P0 Patch Time
Next.js:            2,400   1,500+     ~87%          1-7 days
React:              1,100     800      ~75%          3-14 days
Vue:                  800     600      ~75%          1-7 days
Vite:                 400     450      ~90%          1-3 days
Tailwind CSS:         600     500      ~85%          1-7 days
Fastify:              250     200      ~80%          1-3 days
Zustand:              120     130      ~90%          1-5 days
Prisma:               800     700      ~87%          3-7 days
Drizzle:              300     280      ~85%          3-7 days
Express:              500     100      ~40%   ← maintenance mode: lower activity
Create React App:      80       5      ~5%   ← deprecated, not resolving

The P0 Issue Test

# The most important signal: how are CRITICAL issues handled?

# What to search for on GitHub:
# is:open is:issue label:bug label:critical
# is:open is:issue label:P0
# is:open is:issue label:security

# or:
# is:closed is:issue label:security
# → Shows historical security response time

# The test:
# Find the most recent security issue filed
# Check: how long did it take to get a response? A fix? A release?

# If a package has:
# → Security issues patched in < 7 days: excellent
# → Security issues patched in 7-30 days: acceptable
# → Security issues open for 60+ days: concerning
# → Security issues open for 6+ months with no response: avoid
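These thresholds can be encoded directly. A sketch — note that the 31-59 day range is unstated above, and treating it as "concerning" is an assumption of this example:

```shell
# Map a security issue's days-to-patch onto the buckets above.
# Assumption: the unstated 31-59 day range is bucketed as "concerning".
classify_patch_time() {
  if   [ "$1" -lt 7 ];   then echo "excellent"
  elif [ "$1" -le 30 ];  then echo "acceptable"
  elif [ "$1" -lt 180 ]; then echo "concerning"
  else                        echo "avoid"
  fi
}

classify_patch_time 5     # excellent
classify_patch_time 22    # acceptable
classify_patch_time 200   # avoid
```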

# Real example pattern:
# Popular auth package had CVE filed publicly (no private disclosure)
# Issue opened: Jan 1
# Maintainer first response: Jan 8 ("investigating")
# Fix PR: Jan 20
# Release with fix: Jan 22
# Total: 21 days from report to release — acceptable given the complexity of the fix

Using Issue Trackers in Your Dependency Evaluation

# Before adding a new dependency:
# 1. Open: github.com/org/repo/issues
# 2. Sort by: "Newest"
#    → Do recent issues have responses? How fast?
# 3. Search: "security"
#    → Are security issues closed? How quickly?
# 4. Check: "is:open sort:created-asc" (oldest open issues)
#    → What old issues haven't been addressed?
#    → If they're bugs that affect your use case: red flag
# 5. Search: "is:open label:bug"
#    → How many open bugs? Are they minor or major?

# Quick health check via GitHub API:
curl -s "https://api.github.com/repos/pmndrs/zustand" | jq '{
  open_issues: .open_issues_count,
  forks: .forks_count,
  last_push: .pushed_at
}'

# npm metadata also includes repository link:
npm view zustand --json | jq '.repository.url'
# Then navigate to GitHub and check the issues
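The URL npm returns is usually in git+https form, not something you can open directly. A sketch of normalizing it to an owner/repo pair — the sample string mirrors the format npm typically returns for this package:

```shell
# Sample value in the shape npm's repository.url field typically takes
repo_url="git+https://github.com/pmndrs/zustand.git"

# Strip the git+ prefix, the host, and the .git suffix
owner_repo=$(printf '%s' "$repo_url" \
  | sed -e 's#^git+##' -e 's#^https://github\.com/##' -e 's#\.git$##')

echo "$owner_repo"
echo "https://github.com/$owner_repo/issues"
```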

Issue Quality vs Quantity

The type of open issues matters as much as the count:

Low-quality issues (don't indicate maintainer problems):
- Duplicate reports (users didn't search first)
- Support questions better suited for Stack Overflow
- Feature requests that don't align with project scope
- Issues from very old versions
- "Issues" that are actually user errors

High-quality signals (watch these):
- Reproducible bugs with no maintainer response
- Security vulnerabilities with no acknowledgment
- Regression issues ("this worked in version X")
- Issues with many upvotes (community agrees it's a real problem)
- Issues referencing production outages

The best-maintained packages have:
→ Issue templates that filter out noise
→ Auto-labeling for triaging
→ Clear scope statements ("we don't support X")
→ Friendly but firm closure of off-topic issues
→ Fast triage of legitimate bugs

A package closing issues is NOT a bad sign.
A package closing issues WITHOUT addressing the underlying bug IS.

The Issue Velocity Metric: Opened vs Closed Rate

If you take one thing away from evaluating issue trackers, make it issue velocity: the ratio of issues closed to issues opened per month. This single metric cuts through the noise of absolute counts and tells you whether a project is improving or degrading over time.

A package with 500 open issues and a 2:1 close-to-open ratio — closing twice as many issues as are being opened each month — is actively shrinking its backlog. That's a healthy maintenance signal regardless of the headline count. A package with 50 open issues and a 1:3 close-to-open ratio — where three new issues appear for every one resolved — is accumulating debt fast, and the absolute count will look alarming within a few months.

Calculating this doesn't require any API access. GitHub's issue filter UI does the work. To count issues opened in the last 90 days (substitute the date 90 days before today):

is:issue created:>2026-01-01

Then switch to closed issues in the same window:

is:issue closed:>2026-01-01

Divide closed by opened to get the velocity ratio. Most package dashboards don't surface this directly; you either calculate it manually from the filters or use a tool like isitmaintained.com, which does the math for you.
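If you do want to script it, the moving date is the only fiddly part. A sketch using GNU date (macOS/BSD date needs `-v-90d` instead):

```shell
# Date 90 days ago in YYYY-MM-DD form (GNU date; on macOS: date -v-90d +%F)
since=$(date -d "90 days ago" +%F)

opened_query="is:issue created:>$since"
closed_query="is:issue closed:>$since"

echo "$opened_query"
echo "$closed_query"
```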

The maintenance component of PkgPulse's health score incorporates issue velocity alongside raw open count and first-response time. A package can carry a high open issue count and still score well on maintenance if its velocity is strong.

The practical signal: any package where issues are accumulating faster than they're being resolved is showing a scaling problem, regardless of absolute count. If that package is on your critical path — authentication, data access, payments — it warrants a closer look at alternatives before it becomes a production problem.


When High Issue Counts Reflect Good Community, Not Bad Code

The open-source projects with the most issues are often the most successful: React, Vite, webpack, Next.js. They have thousands of open issues because thousands of developers use them, hit edge cases, and report them. High issue count and high quality are not mutually exclusive; if anything, the correlation runs the other way, because heavily used projects attract more reports.

The distinction lies in what the issue tracker looks like on the inside. High-quality signals that indicate active community rather than poor quality:

  • Labels are in use: "bug", "enhancement", "documentation", "needs-triage", "good-first-issue" — labeling requires maintainer engagement, and labeled issues are being managed
  • Issues link to related issues or PRs: maintainers cross-referencing work indicates awareness of the issue landscape
  • Maintainer responses explain timeline or priority: even a "we're aware, targeting v2.1" response shows active tracking
  • Clear "wontfix" or "by design" closures with reasoning: a maintainer saying "this is out of scope because X" is a sign of a well-defined project, not negligence
  • Milestones grouping related issues: roadmap visibility through milestones means issues are being organized toward resolution

Low-quality signals that indicate actual problems:

  • Issues that sit for months without any maintainer acknowledgment — not labeled, not responded to, not closed
  • Duplicate issues that aren't merged or cross-referenced (the maintainer has no awareness of existing reports)
  • The last maintainer comment in the repository is from a year ago, even if there have been recent commits from external contributors
  • Security issues open longer than 60 days with no response

A package with 50 issues and no maintainer response in 18 months is more concerning than a package with 500 issues and weekly maintainer activity. When evaluating a new dependency, always look at the most recent 10 issues and check whether a maintainer has responded — that 60-second check tells you more than the headline number ever will.


Bot-Closed Issues as a Proxy for Maintainer Avoidance

Automated issue management — bots that label issues as stale and close them after a period of inactivity — has become standard practice in large open-source repositories. Most developers understand that a "stale" label does not indicate a package problem. But there is a meaningful difference between a stale bot that closes low-activity issues and a stale bot that is being used as a substitute for actual triage.

The diagnostic: look at the issues a stale bot is closing and check whether they contain maintainer responses. A stale bot closing issues that have zero maintainer engagement — no label, no question, no acknowledgment — is functionally a garbage collector for inconvenient reports. The issue was filed, nobody looked at it, it aged out. That pattern, scaled across an issue tracker, means the project is receiving more feedback than its maintainers are willing to process. The stale bot is managing the symptom rather than the underlying shortage of maintainer time.

Compare this to the pattern in well-maintained projects: the stale bot closes issues that a maintainer has already triaged as "unable to reproduce" or "needs more information from the reporter." The bot simply provides the mechanical follow-through for issues where the human judgment has already been applied. This is a legitimate use of automation that keeps the tracker clean without hiding active bugs.

The most useful stale-bot signal is the rate of "stale-and-reopened" issues. When a bug is closed as stale and then reopened by a different user who reports hitting the same problem, that is strong evidence that the original closure was premature. A project with a high rate of reopened stale issues has a systematic problem: real bugs are being mislabeled as stale because nobody investigated them. GitHub doesn't surface this metric natively, but searching label:stale is:open on a repository shows issues that were labeled stale but not closed — a secondary signal for active but unresolved reports that the stale system is struggling to process.

The PR-to-Merge Ratio as a More Honest Signal

Open issue counts tell you something about the demand side of a project — how many people are filing reports. Pull request merge rates tell you something about the supply side — how much maintainer capacity exists to review, test, and ship contributions from the community.

A project where contributors regularly open PRs that then sit unreviewed for months has a maintainer capacity problem that the issue tracker may not fully reveal. The issue tracker might look acceptable — maintainers respond to questions, triage bug reports — but the actual work of shipping improvements is stalled because nobody has time to do the code review. This pattern is common in single-maintainer projects where the maintainer is active on discussion but has limited time for review and release cycles.

The calculation is straightforward from GitHub's PR tab: filter for closed PRs and compare the number of merged versus closed-without-merging. A merge rate below 40% is a signal worth investigating — it may reflect high standards (many PRs genuinely don't meet quality bars), limited maintainer time, or both. Context matters: a low merge rate with substantive review comments is healthy; a low merge rate with "closing due to inactivity" responses on PRs that included working code is not.
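The arithmetic is the same whether you count by hand or script it. A sketch over hardcoded sample outcomes — in practice you would list closed PRs via the GitHub API and check whether merged_at is set:

```shell
# Sample outcomes for five closed PRs (hardcoded, not live data)
outcomes="merged
closed
merged
merged
closed"

merged=$(printf '%s\n' "$outcomes" | grep -c '^merged$')
total=$(printf '%s\n' "$outcomes" | grep -c '^')
merge_rate=$(( 100 * merged / total ))

echo "merge rate: ${merge_rate}%"   # 3 of 5 merged
```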

Time-to-merge on accepted PRs is the complementary metric. A PR that takes three months to merge may be fine if the delay was due to release cycle planning or API discussion — that's a well-managed project. A PR that takes three months to merge because nobody reviewed it until a user complained in a separate issue is a capacity signal. For projects you depend on for production-critical functionality, a quick survey of recent merged PRs — checking the "opened at" versus "merged at" dates — gives a concrete sense of the team's operational tempo.

This metric matters practically because the PR merge rate predicts how the project will respond when you hit a bug and want to contribute a fix. A project with a 60% merge rate and a two-week median merge time will absorb your patch. A project with a 25% merge rate and months of PR queue will leave your fix on the floor, and you will need to either fork the package or find a workaround.

Understanding Active Community vs Neglect in High-Issue Repos

The surface distinction between "1,000 open issues because popular" and "1,000 open issues because broken" is easy to state but harder to verify from the outside. There are specific patterns to look for that distinguish the two.

Active community projects with high issue counts tend to have a diverse range of issue types: feature requests, documentation improvements, questions about edge cases, bug reports with reproduction cases, and discussion threads about architectural decisions. The label diversity reflects that maintainers are processing issues into categories. An issue tracker dominated entirely by unlabeled bug reports, with no feature request management or documentation issues, suggests a project that is receiving complaints faster than it can address them.

The recency distribution of maintainer responses is the sharpest signal. Open a repository's issue list sorted by newest and scan the first twenty or thirty issues. If maintainers have responded to most of them within a week — even if the response is "investigating," "can you provide a reproduction?" or "this is by design because X" — the project is actively engaged. If the most recent issues have no maintainer response and the last maintainer comment anywhere in the list is from two months ago, the project has shifted into maintenance mode at best.

The nature of the questions community members ask in issues also reveals project health. When a community is healthy around a package, experienced users answer newcomer questions in issues before maintainers need to engage. This happens organically in Next.js, React, and Vite issues — community members resolve support questions, leaving maintainers to focus on confirmed bugs and roadmap decisions. When no community-based response happens, either the user base is small (so few people know the answer) or the community is not cohesive enough to self-organize around support. Both scenarios limit the project's ability to scale.

Comparing the issue tracker to the discussions tab, where GitHub repositories have enabled GitHub Discussions, also provides signal. Projects that have successfully migrated support questions and feature conversations to Discussions tend to have cleaner issue trackers — issues are more likely to be confirmed bugs or active work items. The issue tracker as a support forum is a symptom of a project that hasn't invested in community infrastructure; the issue tracker as a bug database is a sign of one that has.

Single-Maintainer Packages and the Issue Tracker Warning Sign

Single-maintainer packages have a specific issue tracker failure mode that is worth calling out separately from general maintenance quality discussion. The pattern: a solo maintainer who is highly engaged for the first two to three years of a project's life, then reduces engagement as the project stabilizes or as their personal circumstances change, while the issue count continues to grow from an expanding user base.

This pattern is extremely common in the npm ecosystem, where a large fraction of packages were started and are maintained by a single person who wrote the code to solve their own problem. The issue tracker in these packages often shows a visible inflection point — a date after which response times increased, bot-closed issues accumulated, and PRs started sitting unreviewed. This inflection point often corresponds to a job change, a new project the maintainer finds more interesting, burnout, or simply the natural arc of enthusiasm for a project that has "shipped."

The concerning state is not an unmaintained package — that's visible from the lack of any recent activity. The concerning state is a nominally maintained package where the maintainer still occasionally engages but has insufficient bandwidth to keep up with the issue volume. These packages have enough activity to appear maintained on surface metrics (last commit: 3 months ago) but not enough to address the accumulated backlog of bug reports and PRs. Issues opened six months ago with clear reproductions and no maintainer response are the specific signal to look for.

For packages in this state, the question is how critical the package is to your stack. A single-maintainer package in this condition that handles a non-critical function — date formatting, a color utility, a CLI helper — carries low risk: you can pin the version, accept the current behavior, and look for a maintained alternative on your own timeline. A single-maintainer package in this condition that handles authentication, file system operations, or network requests is a different risk profile and warrants more urgent evaluation of alternatives or a fork strategy.

What Issue Data Tells You Before Committing to a Dependency

The issue tracker is most valuable not as a historical record but as a pre-adoption screening tool. The fifteen minutes spent evaluating an issue tracker before adding a dependency can save hours of debugging time after discovering a known bug that the maintainers are aware of but haven't shipped a fix for.

The specific workflow that surfaces the most useful information: start with is:open is:issue sort:reactions-desc. This shows the open issues with the most upvotes — the problems that a large number of users have confirmed they are hitting. If any of the top-reacted open issues describe a failure mode that your intended use case would trigger, you have a concrete data point about adopting the package in its current state. Sorting by reactions is more useful than sorting by comment count because comments include maintainer and community discussion; reactions represent distinct users confirming the same problem.

Then check is:open is:issue label:bug sort:created-asc — the oldest unresolved confirmed bugs. A bug filed two years ago that has been confirmed and reproducible but not fixed tells you something about the maintainer's bandwidth or prioritization. Depending on whether that bug is in your critical path, it may be decisive information.

For any package that will handle authentication, session management, cryptographic operations, or direct user input, also check is:closed is:issue label:security to understand the historical pattern of security issue response times. A package that took 90 days to patch a reported XSS vulnerability is a different risk profile than one that shipped a fix in five days. Historical response patterns predict future response patterns better than any stated policy.

The pre-adoption review outlined above won't catch every problem, but it will surface known issues that automated tools like npm audit won't show — bugs that affect production behavior, performance regressions, and integration problems specific to the ecosystem you're working in. Package health scores on PkgPulse incorporate issue data to provide a compressed version of this signal, but there is no substitute for reading the actual reports when the decision matters.
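The three searches from this section can live in one small script so the review is repeatable. A sketch — the repo is a placeholder for whatever package you are evaluating:

```shell
repo="pmndrs/zustand"   # placeholder: the package under evaluation

# Turn a GitHub issue search into a clickable URL (spaces become +)
to_url() {
  printf 'https://github.com/%s/issues?q=%s\n' "$repo" \
    "$(printf '%s' "$1" | tr ' ' '+')"
}

to_url "is:open is:issue sort:reactions-desc"        # most-upvoted open issues
to_url "is:open is:issue label:bug sort:created-asc" # oldest confirmed bugs
to_url "is:closed is:issue label:security"           # past security response
```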


Compare issue tracker activity and health scores for npm packages at PkgPulse.

See also: Lit vs Svelte, The Average Lifespan of an npm Package, and How GitHub Stars Mislead Package Selection.
