
Why Bundle Size Matters More Than Your Framework (2026)

PkgPulse Team

TL;DR

The framework debate is mostly irrelevant to performance if you're shipping a 2MB bundle. React adds ~45KB. Svelte adds ~2KB. But the apps people are actually building have 300-500KB of third-party libraries, 200-400KB of application code, and sometimes 500KB+ from a single chart library they used on one route. Switching from React to Svelte would save ~43KB. Lazy-loading that chart component saves 500KB on first load. The math is obvious — yet developers spend weeks debating frameworks and minutes on bundle analysis.

Key Takeaways

  • React is ~45KB — tiny relative to your application code and libraries
  • A typical SaaS app ships 500KB-2MB of JavaScript to first-time users
  • 100KB more JS = ~1 second slower on average mobile (3G, ARM processor)
  • Code splitting + lazy loading has 10x more impact than framework choice
  • The biggest wins: remove duplicate dependencies, lazy-load heavy routes

The Real Bundle Size Breakdown

What's actually in a typical React SaaS app bundle?

Before optimization (typical):
→ React + ReactDOM:          45KB
→ React Router:              24KB
→ TanStack Query:            13KB
→ date-fns (not tree-shaken): 81KB
→ Chart.js:                  62KB  (used on /dashboard route only)
→ framer-motion:             47KB  (used for 3 animations)
→ Your application code:    250KB
→ Duplicate lodash copies:   70KB
→ i18n library + translations: 120KB (all locales)
→ Total:                    712KB

After optimization:
→ React + ReactDOM:          45KB  (can't avoid)
→ React Router:              24KB
→ TanStack Query:            13KB
→ date-fns (tree-shaken):     3KB  (only functions used)
→ Chart.js:                   0KB  (lazy loaded on /dashboard)
→ framer-motion (basic):     20KB  (tree-shaken to what's used)
→ Your application code:    200KB  (code split by route)
→ Duplicate lodash:           0KB  (deduped)
→ i18n:                      30KB  (only current locale, lazy rest)
→ Total first load:          335KB

Savings: ~377KB on first load = ~3 seconds on mobile 3G

Framework switch React → Svelte would have saved: 43KB
Code splitting saved: 377KB

The math is obvious.
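The before/after arithmetic is simple enough to sanity-check in a few lines. The sizes below are the illustrative gzipped figures from the tables above, not measurements:

```javascript
// Illustrative bundle sizes (KB, gzipped) from the tables above.
const before = { react: 45, router: 24, query: 13, dateFns: 81, chart: 62,
                 motion: 47, appCode: 250, lodashDupes: 70, i18n: 120 };
const after  = { react: 45, router: 24, query: 13, dateFns: 3, chart: 0,
                 motion: 20, appCode: 200, lodashDupes: 0, i18n: 30 };

// Sum every entry in a bundle breakdown.
const total = (bundle) => Object.values(bundle).reduce((sum, kb) => sum + kb, 0);

console.log(total(before)); // 712
console.log(total(after));  // 335
console.log(total(before) - total(after)); // 377 KB saved on first load
```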

Why This Matters for Real Users

The average user isn't on a developer laptop with fiber:

Device stats (global internet traffic, 2025):
→ 60% of web traffic: mobile devices
→ Average mobile processor: 4-8x slower than a MacBook
→ Average mobile connection: LTE (fast) to 3G (slow)
→ Median global download speed: 12Mbps (vs US median 50Mbps)

JavaScript cost:
1. Download: obvious (bandwidth)
2. Parse: convert text → AST (CPU intensive)
3. Compile: JIT compile to machine code (CPU intensive)
4. Execute: run the JavaScript (CPU intensive)

The parse + compile + execute cost is often ignored.
On an average Android device:
→ Downloading 1MB of JS: ~1 second (LTE)
→ Parsing + compiling 1MB of JS: ~5 seconds (low-end CPU)

So a 1MB bundle costs ~6 seconds before your app does anything.
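A back-of-envelope cost model makes the point concrete. The per-KB rates below are rough assumptions that restate the figures above (LTE-class download, low-end-CPU parse/compile), not benchmarks:

```javascript
// Rough assumptions from the estimates above: ~1s to download 1MB on LTE,
// ~5s to parse + compile 1MB on a low-end mobile CPU.
const DOWNLOAD_MS_PER_KB = 1000 / 1024; // ≈ 0.98 ms/KB
const PARSE_MS_PER_KB = 5000 / 1024;    // ≈ 4.88 ms/KB

// Total CPU + network cost of shipping sizeKB of JavaScript.
function jsCostMs(sizeKB) {
  return sizeKB * (DOWNLOAD_MS_PER_KB + PARSE_MS_PER_KB);
}

console.log(Math.round(jsCostMs(1024))); // 6000 ms — ~6s before the app does anything
console.log(Math.round(jsCostMs(335)));  // 1963 ms for the optimized 335KB bundle
```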

Real user data (from web performance research):
→ 53% of mobile users abandon a page that takes >3 seconds to load
→ 100ms delay = 1% conversion rate decrease
→ 1 second delay = 7% conversion rate decrease

You're losing real customers to that chart library you imported globally.

The Bundle Analysis Workflow

# 1. Measure your current bundle (do this first, always)

# Next.js:
npm install --save-dev @next/bundle-analyzer
# next.config.ts:
import bundleAnalyzer from '@next/bundle-analyzer';
const withBundleAnalyzer = bundleAnalyzer({ enabled: process.env.ANALYZE === 'true' });
export default withBundleAnalyzer({});
# Run: ANALYZE=true npm run build
# Opens interactive treemap of your bundle

# Vite:
npm install --save-dev rollup-plugin-visualizer
# vite.config.ts:
import { visualizer } from 'rollup-plugin-visualizer';
// Add to plugins: visualizer({ open: true })
# Run: npm run build
# Opens stats.html

# 2. Look for the big blocks in the visualization
# Common offenders:
# → date libraries (moment: 300KB, date-fns: 81KB before tree-shaking)
# → chart libraries (echarts: 900KB!, Chart.js: 62KB, Recharts: 52KB)
# → rich text editors (CKEditor: 500KB+, Slate: 100KB+)
# → i18n library with ALL locales included
# → Duplicate packages (react appears twice in different versions)
# → Development-only code in production bundle

# 3. Find which routes use which code
npm run build  # Next.js prints a per-route "First Load JS" table at the end
# Typical output:
# ┌ ○ /                  54.3 kB
# ├ ○ /dashboard        412.7 kB  ← This is your problem
# ├ ○ /settings          48.1 kB
# └ ○ /profile           52.3 kB

# /dashboard is 8x larger because of Chart.js being loaded eagerly
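That check can be mechanized. A sketch of an outlier scan over the per-route sizes (the route data and the 3x-median threshold are illustrative):

```javascript
// Flag routes whose first-load JS is a large multiple of the median —
// usually a heavy component imported eagerly on that route.
function findOutlierRoutes(routes, factor = 3) {
  const sizes = routes.map((r) => r.kb).sort((a, b) => a - b);
  const mid = sizes.length / 2;
  const median = sizes.length % 2
    ? sizes[Math.floor(mid)]
    : (sizes[mid - 1] + sizes[mid]) / 2;
  return routes.filter((r) => r.kb > median * factor);
}

// The example build output from above:
const routes = [
  { path: '/', kb: 54.3 },
  { path: '/dashboard', kb: 412.7 },
  { path: '/settings', kb: 48.1 },
  { path: '/profile', kb: 52.3 },
];
console.log(findOutlierRoutes(routes)); // [ { path: '/dashboard', kb: 412.7 } ]
```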

The Fixes with the Highest ROI

// Fix 1: Dynamic imports for heavy components
// BEFORE: chart loads on every page
import { BarChart } from './BarChart';

// AFTER: chart only loads when the user navigates to the dashboard
import dynamic from 'next/dynamic';  // Next.js
const BarChart = dynamic(() => import('./BarChart'), {
  loading: () => <Skeleton />,
});
// Or with plain React (Vite):
import { lazy } from 'react';
const BarChart = lazy(() => import('./BarChart'));
// (render inside a <Suspense fallback={...}> boundary)
// Saves: whatever BarChart's dependencies cost (often 50-200KB)

// Fix 2: Tree-shake date libraries
// BEFORE (moment - not tree-shakeable):
import moment from 'moment';
const formatted = moment().format('MMM D, YYYY');
// Costs: 300KB

// AFTER (date-fns - tree-shakeable):
import { format } from 'date-fns';
const formatted = format(new Date(), 'MMM d, yyyy');
// Costs: ~3KB (just the format function)

// Fix 3: Load translations lazily
// BEFORE: all locales bundled upfront
import i18n from 'i18next';
import enTranslations from './locales/en.json';
import esTranslations from './locales/es.json';
import frTranslations from './locales/fr.json';
// Costs: every locale file on first load

// AFTER: only the current locale, fetched over HTTP
import i18n from 'i18next';
import HttpBackend from 'i18next-http-backend';
i18n.use(HttpBackend).init({
  backend: {
    loadPath: '/locales/{{lng}}/{{ns}}.json',
  },
});
// Fetches only the user's locale, lazily

// Fix 4: Split by route automatically
// Next.js does this automatically per page
// Vite with React Router:
import { lazy, Suspense } from 'react';
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Settings = lazy(() => import('./pages/Settings'));
// Render inside <Suspense fallback={<Spinner />}> so the chunk fetch
// has a loading state; each route's code only loads when visited

// Fix 5: Find duplicate packages
npm dedupe  # Or: pnpm dedupe
# Duplicate packages happen when two deps need different versions
# Often: two versions of lodash, two versions of react-is
# Cost: doubled bundle size for affected packages

The Framework Size Question (For Context)

Since we're being quantitative:

Framework gzipped sizes (just the framework):
Svelte:   ~2KB runtime
Solid:    ~7KB
Preact:   ~4KB
Vue 3:    ~33KB
React:    ~45KB
Angular:  ~150KB

Reality check:
→ Your app code: 200-500KB
→ Your third-party libraries: 50-300KB
→ Framework: 2-150KB

Switching from React to Svelte:
→ Saves 43KB
→ That's less than one average third-party library
→ Less than the savings from lazy-loading a single heavy route

This doesn't mean framework size is irrelevant:
→ Angular's 150KB is real and matters
→ Svelte's 2KB is genuinely impressive
→ For small sites and edge cases, these numbers matter

But the marginal difference between React (45KB) and Svelte (2KB)
is 43KB, roughly the cost of one mid-sized third-party library.
For most apps, this is not the bottleneck.

The frameworks debate is worth having.
Just don't confuse it with performance engineering.
They're different conversations.

The Practical Bundle Optimization Workflow

Most projects don't have a bundle size problem — they have a bundle visibility problem. The size is there, but no one has looked at it. The fix starts with making the bundle legible.

Step one is installing a visualizer and actually running it. For Vite projects, rollup-plugin-visualizer generates a stats.html treemap after each build. For Next.js, @next/bundle-analyzer produces an equivalent output when you run ANALYZE=true npm run build. Both open an interactive visualization where each block represents a module, sized proportionally to its contribution to the bundle. This takes under five minutes to set up and is usually the most productive five minutes in a performance sprint.

Step two: identify the top three contributors by size. In practice, they're almost always one of three things. A UI component library imported as a single namespace instead of being tree-shaken (the fix: named imports only, or switch to a library with better tree-shaking). A utility library pulled in for a single function — the entire library is in the bundle, but only one function is called (the fix: import the function directly, or inline the logic). Or a library that appears to be tree-shaken but isn't, because it uses barrel exports that defeat static analysis (the fix: import from the specific subpath, e.g. import debounce from 'lodash/debounce').

Step three: for each heavy package, check bundlephobia.com to see the gzipped size, whether tree-shaking is supported, and what smaller alternatives exist. Bundlephobia also shows the download time at various connection speeds, which makes the user impact concrete rather than abstract.

Step four: verify that route-based code splitting is working. Both Next.js and Vite handle this automatically at page boundaries, but check that large components are not being imported at the layout level instead of the page level — a single misplaced import can pull a heavy component into every route.

Step five: measure before and after with Lighthouse for lab data and your RUM provider for field data. Lab data is fast to get; field data is what actually matters. The typical outcome of one focused sprint is a 20–40% reduction in initial JS.

The Code Splitting Patterns That Actually Work

Route-based code splitting is the highest-leverage optimization in the frontend performance toolkit, and in most modern frameworks it requires no manual work at all. Next.js and Vite both split the bundle at route boundaries automatically — each page only ships the JavaScript it needs. If you haven't verified this is working in your project, that's the first thing to check. A route that's loading 400KB of JavaScript when it should be loading 60KB is almost always importing a heavy component at the wrong level.

Component-based code splitting with React.lazy() and Suspense is the next layer. The heuristic is simple: any component that isn't visible on initial page load is a candidate for lazy loading. Modal dialogs, complex data tables, rich text editors, chart components, anything below the fold — none of these need to be in the initial bundle. A modal that includes a rich text editor might be 30–40KB of JavaScript that loads on every page visit, even though the modal is opened by less than 10% of users.

In Next.js, the pattern is const RichEditor = dynamic(() => import('./RichEditor'), { ssr: false }). In Vite with React, it's const RichEditor = lazy(() => import('./RichEditor')) wrapped in a Suspense boundary with a fallback. The mechanics are slightly different; the principle is identical.

The common mistake with lazy loading is applying it too aggressively to routes that users navigate to immediately. If a user types something in a search box and expects results within 200ms, the search results page should be prefetched, not lazily loaded on click. Next.js handles this automatically via the prefetch prop on <Link> components — adjacent routes are prefetched in the background as the current page loads, so navigation feels instant even though the JS was loaded lazily.

Done correctly, the result is an application where the initial route loads only what it needs, adjacent routes arrive silently in the background, and heavy components are fetched on demand. Each individual optimization is modest. Together they compound into a meaningfully faster application.

How JavaScript Parse and Eval Time Destroys Mobile Performance

Download speed gets most of the attention in bundle size discussions, but it's the CPU cost of JavaScript that creates the worst user experience on mobile devices. When a browser receives JavaScript, it must parse the text into an abstract syntax tree, compile that AST to bytecode, and then execute it. On a modern laptop or desktop, this cost is negligible — processors are fast enough to complete the cycle in milliseconds even for large bundles. On a mid-range Android phone with an ARM processor running at 1.5GHz, the same process can take four to eight times longer. A 500KB bundle that compiles in 80ms on a MacBook Pro may take 400–600ms on the median mobile device globally. Scale that to a 1.5MB bundle and you're looking at over a second of CPU-blocked time before the app can respond to user input.

The distinction matters because download time scales with connection speed, but parse and compile time scales with CPU performance. You cannot assume that a user on a fast LTE connection will have a fast processor. Mobile networks have improved significantly over the last decade; the device upgrade cycle is considerably slower. A meaningful portion of your users are on phones that are two to four years old, bought as mid-range or budget devices at the time. The Lighthouse performance score reflects this: its simulated throttling uses a 4x CPU slowdown to model the median mobile device, not a worst-case scenario.

The practical consequence is that a 300KB reduction in JavaScript achieves two things simultaneously — it reduces download time and it reduces the parse/compile/execute burden. Lazy loading a chart component that's 150KB doesn't just save 150KB of download. It removes that 150KB from the CPU's critical path on initial load entirely. The chart code still parses and compiles when the user navigates to the route that needs it, but by that point the application is already interactive, and the user experience of a slight delay navigating to a dashboard is dramatically less damaging than a multi-second blank screen on initial load.

The V8 JavaScript engine (used in Chrome and Node.js) has improved dramatically in parsing speed over the past five years, and it caches compiled bytecode to disk after the first parse. On repeat visits, the compilation cost is avoided for unchanged scripts. This is a meaningful improvement for returning users but irrelevant for first-time visitors — the population most likely to form or abandon an impression of your product. First-load performance is cold-cache performance, and optimizing for it requires keeping initial JavaScript small enough that even slow CPUs on bad connections can complete the parse/compile/execute cycle in under two seconds.
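The "under two seconds" budget can be inverted into a size ceiling. A sketch using assumed per-KB rates (slow-3G-class download plus low-end-CPU parse; the rates are illustrative assumptions, not benchmarks):

```javascript
// How much initial JS fits in a time budget on a given device/network?
function maxInitialJsKB(budgetMs, downloadMsPerKB, parseMsPerKB) {
  return Math.floor(budgetMs / (downloadMsPerKB + parseMsPerKB));
}

// Assumed rates: ~2 ms/KB download on slow 3G, ~5 ms/KB parse + compile
// on a low-end CPU.
console.log(maxInitialJsKB(2000, 2, 5)); // 285 KB budget for a 2s cold load
```

Under these assumptions, even the optimized 335KB bundle from earlier slightly overshoots a 2-second cold-load budget on the slowest devices, which is why the ceiling is worth computing per target audience rather than assumed.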

Tree-Shaking Reliability: Why "Supports Tree-Shaking" Isn't Binary

Tree-shaking — the process of eliminating dead code from a bundle at build time — is often treated as a feature a library either has or doesn't. The reality is considerably more nuanced, and misunderstanding it leads to developers believing they've eliminated a library's cost when they've actually imported most of it.

The precondition for reliable tree-shaking is ES module syntax with static imports. Bundlers like Rollup, Vite, and webpack 5 can statically analyze which exports are actually consumed and exclude the rest. But this analysis breaks down in several common situations. The most frequent is side effects: if a module declares that it has side effects (or if the bundler can't determine that it doesn't), the bundler will include the entire module to avoid changing runtime behavior. Many libraries include a "sideEffects": false field in their package.json to signal to bundlers that tree-shaking is safe, but libraries that don't include this field may be included in full even when only one function is used.
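For library authors, that signal is a single package.json field. A minimal sketch of the declaration (package name and paths are hypothetical):

```json
{
  "name": "my-utils",
  "type": "module",
  "exports": { ".": "./dist/index.js" },
  "sideEffects": false
}
```

Libraries whose modules genuinely do have side effects, such as injected CSS, list those files instead: `"sideEffects": ["*.css"]` keeps the side-effectful files while letting everything else be shaken.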

Barrel files are a second common tree-shaking failure mode. A barrel file re-exports everything from a set of modules: export * from './utils', export * from './helpers', export * from './formatters'. When a consumer imports one function from a barrel file, the bundler must evaluate the entire barrel to determine what's exported, and if any module in the chain has side effects or CommonJS syntax, the entire barrel may be included. This is why import { format } from 'date-fns' gives dramatically different bundle output than import { format } from 'date-fns/format' — the latter imports directly from the specific module, bypassing the barrel entirely.

CommonJS modules (require() / module.exports) are structurally incompatible with static tree-shaking because they can compute export names dynamically at runtime. Bundlers can perform a limited form of tree-shaking on CJS through heuristics, but it's much less reliable than with ESM. Libraries that ship only a CJS build — still common for older packages that predate widespread ESM tooling — cannot be reliably tree-shaken. Bundlephobia's module format indicator is the fastest way to check: a package listed as "CJS only" will be included in full regardless of how many named imports you use.
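The structural problem is easy to demonstrate. In the sketch below (a simulated CommonJS module, with a plain object standing in for module.exports), the export names only exist after the code runs, so no static analyzer can prove which ones are unused:

```javascript
// CommonJS-style exports: names computed at runtime.
const cjsExports = {};
for (const name of ['format', 'parse', 'addDays']) {
  cjsExports[name] = () => `called ${name}`;
}
console.log(Object.keys(cjsExports)); // [ 'format', 'parse', 'addDays' ]

// A bundler reading the loop above statically cannot enumerate these names
// without executing the code, so it must conservatively keep the whole module.
// ESM's `export function format() {}` is statically enumerable — which is
// exactly what makes tree-shaking reliable.
```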

The practical workflow is to check each large dependency's actual bundled output rather than trusting documentation claims. Running a bundle analyzer before and after adding a new dependency shows the real cost. Checking bundlephobia.com for the package shows its module format and tree-shaking support signal. For libraries you already depend on, comparing import { specificFunction } from 'library' against import { specificFunction } from 'library/specificFunction' in your bundle output will reveal whether the barrel-based import is pulling in more than you expect.

How Framework Choice Creates a Bundle Floor You Can't Escape

When developers argue about framework bundle sizes, the comparison is usually framed as a total cost — React at 45KB versus Svelte at near-zero. This framing obscures something important: every framework establishes a minimum floor of JavaScript that ships to users regardless of what the application does, and the gap between frameworks only matters relative to everything else in the bundle.

React's 45KB is immovable. You cannot partially import React. The reconciler, the fiber architecture, synthetic events, and hooks runtime all ship together. Switching from React to Preact (4KB) saves 41KB — a real saving worth pursuing if you're building a simple widget or marketing page where React's ecosystem isn't needed. But for a full SaaS application that uses React Router, TanStack Query, a UI component library, and a handful of third-party integrations, the 41KB difference between React and Preact is often less than the cost of one misconfigured import.

Angular's floor is higher and more consequential. The Angular framework itself, including its DI container, change detection system, and required decorators, contributes roughly 150KB gzipped to the initial bundle — before a single line of application code. This is genuinely significant and one reason Angular is rarely used for consumer-facing applications where first-load performance is critical. Svelte's near-zero runtime is a meaningful advantage for content-heavy sites where JavaScript is used sparingly, but Svelte applications that use routing, state management, and third-party components accumulate their own bundle overhead through the libraries those features require.

The practical implication is that framework selection matters most at the extremes. Choosing Angular for a performance-critical consumer application has a real cost that no amount of code splitting can fully offset. Choosing Svelte for a simple interactive widget eliminates the framework overhead entirely. But for the broad middle ground — a React application with 40-60 npm dependencies — the framework's baseline cost is fixed and the variable cost (third-party libraries, application code, lazy-loading decisions) is where performance is actually won or lost. This doesn't make the framework choice unimportant; it makes it a one-time decision that sets the floor, after which per-route optimization determines the ceiling.
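The floor-versus-total framing reduces to a ratio. A quick sketch using the illustrative sizes from this article:

```javascript
// What fraction of the initial bundle is the framework floor?
const frameworkShare = (frameworkKB, totalKB) =>
  Math.round((frameworkKB / totalKB) * 100);

// Using the optimized 335KB first load from earlier in the article:
console.log(frameworkShare(45, 335));     // React: 13% of the bundle
console.log(frameworkShare(2, 335 - 43)); // Svelte equivalent: 1%
```

Even in a well-optimized bundle, the framework is a minority share; in the unoptimized 712KB bundle it was closer to 6%.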

Core Web Vitals and the Real Business Impact of Bundle Size

Core Web Vitals — Google's set of user experience metrics that include Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS) — are direct signals in Google's search ranking algorithm. For any application where organic search traffic matters, Core Web Vitals are not an abstract engineering concern. A site that consistently scores poorly on LCP and INP will rank below competitors who score well, all else being equal. The connection between bundle size and these metrics is direct and measurable.

Largest Contentful Paint measures how long it takes for the largest visible element in the viewport to render. For JavaScript-heavy applications, LCP is frequently bottlenecked by the time required to download, parse, and execute the main bundle before the framework can begin rendering. Every 100KB removed from the initial JavaScript payload typically reduces LCP by 200–400ms on median mobile hardware — a rough heuristic, but one that holds consistently across Lighthouse audits on real applications. Moving the LCP from 4 seconds to 2.5 seconds can be the difference between a "Needs Improvement" score and a "Good" score in Google Search Console.
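The heuristic translates directly into a planning estimator. The 2–4 ms-per-KB range below simply restates the 200–400ms-per-100KB rule of thumb from this section; treat the output as a rough estimate, not a measurement:

```javascript
// Estimated LCP improvement from trimming initial JS, on median mobile hardware.
function lcpSavingsMs(kbRemoved, msPerKB = [2, 4]) {
  const [lo, hi] = msPerKB;
  return { lowMs: kbRemoved * lo, highMs: kbRemoved * hi };
}

// The 377KB saved in the earlier breakdown:
console.log(lcpSavingsMs(377)); // { lowMs: 754, highMs: 1508 }
```

A savings range of roughly 0.75–1.5s is exactly the magnitude that moves a 4-second LCP toward the 2.5-second "Good" threshold.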

Interaction to Next Paint, which replaced First Input Delay in 2024, measures the latency between a user interaction and the browser's next paint response. JavaScript-heavy pages that haven't finished executing their bundles respond slowly to user input — the main thread is occupied parsing and compiling scripts, not responding to events. This shows up in INP scores as high-latency interactions during the first few seconds of page load. Code splitting addresses this directly by deferring the parsing of JavaScript that isn't needed until after the page becomes interactive.

The field data versus lab data distinction matters for interpreting these metrics. Lighthouse in CI gives you lab data — synthetic measurements from a simulated device and connection. Google Search Console shows field data — aggregated real user measurements from Chrome users actually visiting your site. Lab data is useful for catching regressions in CI before they reach production. Field data shows what your actual users experience. For sites with significant mobile traffic from regions with slower median connection speeds, the field data often reveals LCP and INP scores that look much worse than lab measurements suggest, because the simulated device and connection are still faster than a meaningful portion of your real audience.

Per-Route Code Splitting as the Primary Optimization Lever

If there is a single optimization that produces the most consistent improvement in real-world application performance, it is per-route code splitting implemented correctly. The principle is straightforward: users visit one route at a time, so the JavaScript loaded for routes they haven't visited yet is pure overhead on the routes they have visited. Delivering only the code a user needs for the current route is the most direct possible optimization of their experience.

In Next.js with the App Router, per-route splitting is automatic and aggressive. Each page segment is a separate chunk, and layouts, templates, and page components are split at every route boundary. The first-load JavaScript reported in the build output — the "shared by all" baseline — includes the React runtime, the Next.js router, and modules imported at the root layout level. Everything below that is route-specific. The discipline this creates is important: any import added to the root layout becomes part of every user's initial load. A single heavy import at the top of the component tree can undo thousands of lines of optimization work elsewhere.

Vite's code splitting with React Router follows a similar model but requires more intentional configuration. Routes defined with React.lazy() are automatically split into separate chunks. The important detail is that Suspense boundaries must wrap lazy routes to handle the loading state during chunk fetch. Without Suspense, lazy loading fails silently or throws runtime errors. The combination of lazy() + Suspense + prefetch hints for likely-next routes produces an experience that feels instant: the current route loads quickly because its chunk is small, and navigating to the next route feels instant because it was prefetched in the background.

The common antipattern that defeats route splitting is importing heavy components at the layout level rather than the page level. A chart component imported in a shared layout is included in the bundle for every route that uses that layout — even routes that display no charts. This single mistake can add 50–150KB to every route's initial load. The audit is simple: run the Next.js build output and look at the "First Load JS" column. Any route that's dramatically larger than the others — 3x or more — is pulling in something it shouldn't, or is missing lazy loading for a heavy component.

The prefetch behavior is equally important to get right. Prefetching too aggressively wastes bandwidth on routes users may never visit. Not prefetching at all creates visible loading delays on navigation. The optimal strategy is to prefetch routes that users are likely to navigate to next based on the current page's structure — the links in the primary navigation, the next step in a wizard, the details page linked from a list. Next.js handles this automatically for <Link> components in the viewport. For Vite applications, React Router's route-level lazy() combined with triggering the dynamic import() ahead of navigation (for example on link hover, or when a link scrolls into the viewport) gives fine-grained control over which chunks are prefetched and when.
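The "prefetch likely-next routes, but only once" policy is small enough to sketch framework-agnostically. The loader map and route name below are hypothetical; in a real app each loader would be a dynamic `() => import('./pages/...')` call:

```javascript
// Prefetch a route's chunk at most once (e.g. triggered on link hover
// or when the link scrolls into the viewport).
function createPrefetcher(loaders) {
  const started = new Map(); // route -> in-flight or resolved Promise
  return function prefetch(route) {
    if (!started.has(route)) {
      started.set(route, loaders[route]()); // kick off the chunk fetch
    }
    return started.get(route); // later navigation reuses the same Promise
  };
}

// Hypothetical loader standing in for () => import('./pages/Dashboard'):
let fetches = 0;
const prefetch = createPrefetcher({
  '/dashboard': async () => { fetches += 1; return 'dashboard chunk'; },
});

prefetch('/dashboard'); // hover
prefetch('/dashboard'); // click — no second network request
console.log(fetches); // 1
```

Caching the Promise rather than the result means a click during an in-flight prefetch simply awaits the same request instead of issuing a duplicate.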

Check bundle sizes for any npm package at PkgPulse.

See also: Bun vs Vite, AVA vs Jest, and How Package Popularity Correlates with Bundle Size.

The 2026 JavaScript Stack Cheatsheet

One PDF: the best package for every category (ORMs, bundlers, auth, testing, state management). Used by 500+ devs. Free, updated monthly.