How AI Is Changing How Developers Choose npm Packages
TL;DR
AI coding assistants are the new Stack Overflow for package recommendations — and they have biases. In 2026, developers increasingly ask Claude, ChatGPT, or GitHub Copilot "what package should I use for X" instead of searching the web. These models have knowledge cutoffs and training biases that make them recommend established packages (React, Express, Axios) even when newer alternatives are objectively better for new projects. Understanding how AI picks packages helps you know when to trust the recommendation and when to verify it.
Key Takeaways
- 50%+ of developers use AI coding assistants as their primary package discovery tool (JetBrains survey 2025)
- Knowledge cutoff bias — models recommend packages popular at training time, not today
- Recency gap — packages that grew 5x since model training are systematically underrecommended
- The "safe default" effect — AI prefers well-documented, widely-used packages even when smaller alternatives are better
- Verification still matters — AI recommendations should be checked on npmjs.com or PkgPulse
How AI Recommends Packages
When you ask an AI "what's the best state management library for React?", it's doing pattern matching on training data:
Developer asks: "What state management should I use for React in 2026?"
AI considers:
- How many times was this library mentioned in training data?
- Was it in positive contexts (tutorials, docs) or negative (bug reports)?
- When was the training data cut off?
Result: AI recommends Redux Toolkit (highly documented, widely discussed)
over Zustand (grew 400% after many models' training cutoffs)
Reality in 2026:
- Zustand: ~4M weekly downloads
- Redux Toolkit: ~4M weekly downloads
- For new projects: ~80% choose Zustand/Jotai over Redux Toolkit
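Mechanically, you can picture the model's pick as a popularity-weighted score over its training corpus. A deliberately oversimplified sketch, where the scoring function, the mention counts, and the weights are all invented for illustration:

```typescript
// Toy model of how training-data frequency drives recommendations.
// All numbers and the scoring formula are made up for illustration.
interface CorpusStats {
  mentions: number;      // how often the package appears in training data
  positiveShare: number; // fraction of mentions in tutorials/docs vs bug threads
}

function recommendationScore(stats: CorpusStats): number {
  // More mentions and more positive contexts => higher score.
  return Math.log10(stats.mentions) * stats.positiveShare;
}

// Frozen at a hypothetical early-2024 training cutoff:
const corpus: Record<string, CorpusStats> = {
  "@reduxjs/toolkit": { mentions: 500_000, positiveShare: 0.8 },
  zustand: { mentions: 50_000, positiveShare: 0.9 },
};

const ranked = Object.entries(corpus)
  .map(([name, stats]) => [name, recommendationScore(stats)] as const)
  .sort((a, b) => b[1] - a[1]);

console.log(ranked[0][0]); // the heavily documented incumbent wins
```

Even with a higher positive share, the challenger loses: sheer mention volume dominates, which is the bias the rest of this article is about.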
The Knowledge Cutoff Problem
Scenario: Developer in March 2026 asks AI about HTTP clients
Model trained through: Q1 2024
| HTTP client | Downloads at training time | Downloads in March 2026 |
| --- | --- | --- |
| axios | ~50M/week | ~55M/week |
| ky | ~2M/week | ~5M/week |
| node-fetch | ~30M/week | native fetch: built-in |
AI recommendation: axios (correct for its training data)
Reality: native fetch (Node.js 18+) is often the right answer in 2026;
ky is a strong pick when you want a fetch-compatible API with conveniences in both Node and the browser
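For instance, a plain GET with error handling needs no dependency at all on Node 18+. A minimal sketch (the generic wrapper and the commented URL are illustrative, not a library API):

```typescript
// Built-in fetch (Node.js 18+): no axios or node-fetch install needed.
async function getJson<T>(url: string): Promise<T> {
  const res = await fetch(url);
  if (!res.ok) {
    // Unlike axios, fetch does NOT reject on HTTP error status; check manually.
    throw new Error(`HTTP ${res.status} for ${url}`);
  }
  return res.json() as Promise<T>;
}

// Usage sketch:
// const pkg = await getJson<{ name: string }>("https://registry.npmjs.org/ky");
```

The manual `res.ok` check is the main ergonomic gap versus axios; wrappers like ky exist largely to paper over it.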
Packages that grew significantly after common training cutoffs:
| Library | Training-era downloads | 2026 downloads | Change |
| --- | --- | --- | --- |
| Drizzle ORM | ~200K/week | ~2M/week | 10x |
| Biome | ~200K/week | ~2M/week | 10x |
| Hono | ~300K/week | ~1.5M/week | 5x |
| Bun | N/A (runtime) | ~1M users | New |
| React Email | ~200K/week | ~800K/week | 4x |
| t3-env | ~100K/week | ~400K/week | 4x |
These are systematically underrecommended by AI.
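The "Change" column is just the ratio of current to training-era weekly downloads, rounded. A minimal helper:

```typescript
// Compute the growth multiple shown in the table above.
function growthMultiple(trainingEra: number, current: number): string {
  if (trainingEra <= 0) return "New"; // no baseline at training time
  return `${Math.round(current / trainingEra)}x`;
}

console.log(growthMultiple(200_000, 2_000_000)); // "10x" (Drizzle ORM)
console.log(growthMultiple(300_000, 1_500_000)); // "5x"  (Hono)
```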
What AI Gets Right vs Wrong
What AI Recommends Well
✅ Established libraries with years of documentation:
- React, Vue, Angular (frameworks)
- Express, Fastify (servers — though misses Hono)
- Jest, Testing Library (testing — though misses Vitest momentum)
- Prisma (ORM — though misses Drizzle's growth)
- TypeScript (general tooling)
✅ Widely-taught comparisons:
- "Use Zod for validation" — well documented, in training data
- "Use TanStack Query for data fetching" — common blog posts
- "Use Tailwind for styling" — massive online presence
✅ Package-specific implementation help:
AI excels at "how do I use X" once X is chosen
What AI Gets Wrong
❌ Package momentum (which is gaining, which is declining)
- Doesn't know Express adoption dropped for new projects
- Doesn't know TypeORM is in maintenance mode vs Drizzle growth
❌ Recent version changes
- Next.js 15 features (after training cutoff)
- React 19 changes (after training cutoff)
- Breaking changes in major version bumps
❌ Community sentiment shifts
- "Create React App is deprecated" — many models still recommend it
- "Don't use Moment.js" — many models still suggest it
❌ Newer, high-quality alternatives
- Biome over ESLint+Prettier (newer, less training data)
- Drizzle over TypeORM (rapid growth, newer)
- Hono over Express for edge (category barely existed at training)
The AI-Recommended Package Ecosystem Effect
AI recommendations are now significant enough to affect package download trends:
Observed pattern (2025-2026):
1. Developer asks AI → AI recommends Package X
2. Developer installs Package X
3. npm downloads for Package X increase
4. Future AI training data includes more mentions of Package X
5. AI recommends Package X even more
This creates winner-take-more dynamics:
- Well-documented packages with lots of tutorials get a sustained boost
- Newer packages without training data get a "discovery gap"
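The loop above is a preferential-attachment process. A toy simulation makes the compounding visible; the quadratic weighting and every number here are invented purely to illustrate the dynamic, not measured from real training pipelines:

```typescript
// Toy model of the feedback loop: each "training round" hands out new
// mentions in proportion to the SQUARE of current mentions, a superlinear
// weighting chosen only to illustrate winner-take-more dynamics.
// All numbers are invented.
function simulateRounds(initial: number[], rounds: number, newMentions: number): number[] {
  const counts = [...initial];
  for (let r = 0; r < rounds; r++) {
    const weights = counts.map((c) => c * c);
    const total = weights.reduce((a, b) => a + b, 0);
    for (let i = 0; i < counts.length; i++) {
      counts[i] += Math.round(newMentions * (weights[i] / total));
    }
  }
  return counts;
}

// Incumbent starts with 60% of mentions, newer package with 40%.
const [incumbent, newer] = simulateRounds([600, 400], 5, 1000);
console.log((incumbent / (incumbent + newer)).toFixed(2));
// The incumbent's share climbs well past its initial 0.60.
```

Under linear proportionality the shares would merely hold steady; any superlinear edge (better docs, more tutorials, more Stack Overflow answers) is what turns parity into lock-in.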
Practical implication for new packages (open source developers):
- Write comprehensive, AI-indexable documentation
- Publish comparison posts ("Why X vs Y")
- Get into the sources models ingest (State of JS survey, awesome lists)
- The documentation is now also training data
How to Use AI Package Recommendations Well
Step 1: Get the Recommendation
User: "What should I use for form validation in a React TypeScript app in 2026?"
AI: "React Hook Form with Zod resolver. RHF handles form state, Zod handles schema validation..."
Step 2: Verify It's Current
# Quick verification workflow:
# 1. Check npm trends: npmtrends.com/react-hook-form
# 2. Check PkgPulse health score: pkgpulse.com/compare/react-hook-form-vs-formik
# 3. Check GitHub: last commit, open issues, stars
# 4. Check the package itself via the registry:
npm view react-hook-form dist-tags.latest   # current version
npm view react-hook-form time.modified      # last publish activity
npm view react-hook-form types              # "types" field => TypeScript-first

# For react-hook-form specifically:
# ✅ High health score
# ✅ Recent releases
# ✅ Growing downloads
# ✅ TypeScript-first
# → AI recommendation checks out
Step 3: Check Alternatives
"What are the alternatives to [AI's recommendation]?"
This forces the AI to surface options it might not have led with, often revealing newer alternatives it knows about but didn't prioritize.
Example:
Q: "What alternatives exist to Prisma for TypeScript database access?"
A: "Drizzle ORM is a newer alternative that's gained significant adoption..."
[Now you know about Drizzle even though AI led with Prisma]
Step 4: Validate Against Current Data
Use a live data source — npm trends, PkgPulse, or GitHub trending — to confirm:
AI said: "Prisma is the most popular TypeScript ORM"
Verification (March 2026):
Prisma: ~5M weekly downloads
Drizzle: ~2M weekly downloads (growing 400% YoY)
TypeORM: ~3M weekly downloads (declining)
Conclusion: Prisma is accurate for "most popular by downloads"
but Drizzle is the momentum play for new projects
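npm's public downloads API makes this check scriptable. A sketch against the real `api.npmjs.org` point endpoint; the helper names are my own, and error handling is kept minimal:

```typescript
// Build the npm downloads API URL for a package over a period
// (valid periods include "last-week" and "last-month").
function downloadsUrl(pkg: string, period: string = "last-week"): string {
  return `https://api.npmjs.org/downloads/point/${period}/${pkg}`;
}

// Usage sketch (requires network; Node 18+ for built-in fetch):
async function weeklyDownloads(pkg: string): Promise<number> {
  const res = await fetch(downloadsUrl(pkg));
  const data = (await res.json()) as { downloads: number };
  return data.downloads;
}

// const prisma = await weeklyDownloads("prisma");
// const drizzle = await weeklyDownloads("drizzle-orm");
```

Comparing the two numbers yourself takes seconds and sidesteps the model's cutoff entirely.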
What This Means for Package Authors
If you maintain or are building an npm package:
- Documentation is AI training data — comprehensive docs = better AI recommendations
- Write comparison content — "X vs Y" posts are highly searchable AND trainable
- Be in the right lists — awesome lists, State of JS, top GitHub trending = training signal
- Use common patterns — packages that follow conventions get pattern-matched more often
Check real-time package health and download trends on PkgPulse — the data AI can't always access.
See the live comparison
View react vs. vue on PkgPulse →