Why Bundle Size Still Matters in 2026
TL;DR
Every 100KB of JavaScript costs ~1 second of interaction delay on median mobile hardware. 5G didn't solve the problem because the bottleneck shifted from download to parse and execute time. A 500KB bundle downloads in ~250ms on 4G but takes roughly 2-4 seconds to parse, compile, and execute on a 2022 mid-range Android phone. Google's Core Web Vitals (specifically INP, Interaction to Next Paint) make this measurable and SEO-relevant. Bundle size is not a micro-optimization; it's the primary lever on perceived performance.
Key Takeaways
- 100KB gzipped ≈ 300KB uncompressed — gzip saves bandwidth; the browser still parses 300KB
- INP (Interaction to Next Paint) — replaced FID in Core Web Vitals; directly tied to JS size
- Median mobile phone — ~4x slower JS execution than a MacBook Pro
- 1s INP improvement → 7% conversion increase — Google's data from e-commerce analysis
- Tree-shaking + code splitting — the two levers that matter most
The Network Fallacy
"Connections are faster now, so bundle size doesn't matter as much."
This is wrong. Here's why:
2026 network speeds (global median):
Download: ~50 Mbps (10x faster than 2015)
Latency: ~20ms (similar to 2015)
What changed:
- Download time for a 500KB bundle: ~0.08s (fast!)
- Parse + compile + execute time on an Android mid-range device: ~1.5-3s (essentially unchanged since 2015)
The bottleneck shifted from network to CPU.
Fast networks mean we download more JavaScript.
CPU performance has not kept pace with JS bundle growth.
JavaScript shipped per page (httparchive.org data):
2018: ~380KB median (gzipped)
2021: ~450KB median
2023: ~500KB median
2026: ~580KB median
The amount of JavaScript is growing faster than CPU performance.
The Real Cost of JavaScript
Cost analysis: 500KB gzipped JavaScript bundle
On a high-end MacBook (M3 Pro):
├── Download: 0.08s
├── Decompress: 0.02s
├── Parse: 0.15s
├── Compile (JIT): 0.08s
└── Execute: 0.12s
Total: ~0.45s ✅
On a median Android phone (2022 mid-range):
├── Download: 0.25s (slower network)
├── Decompress: 0.08s
├── Parse: 0.65s (4x slower CPU)
├── Compile (JIT): 0.45s
└── Execute: 0.50s
Total: ~1.93s ❌ (INP impact)
This is why "it's fast on my machine" doesn't mean "it's fast for your users."
The CPU gap between developer hardware and user hardware has widened rather than narrowed over the past decade. JavaScript engines have improved, but developer hardware has improved faster. Apple Silicon M-series chips execute JavaScript roughly 4x faster than the mid-range Android devices that represent the global median. A developer benchmarking on an M2 Mac and concluding "the bundle is fine" is testing in conditions that approximately 5% of their users experience. The other 95% are on something between 2x and 8x slower.
This gap has a specific implication for how performance testing should be done. Browser DevTools allow CPU throttling (4x or 6x slowdown simulation) that approximates mid-range device performance without requiring a physical device. Running a Lighthouse audit with 4x CPU throttle on a mobile viewport gives a realistic estimate of what median users experience. This should be the baseline test, not the unthrottled developer machine result.
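Throttled audits can also be scripted rather than clicked through in DevTools. A sketch using the Lighthouse CLI's simulated-throttling flags; example.com and the output path are placeholders, and the flag names reflect recent Lighthouse releases:

```shell
# One-off performance audit with 4x CPU slowdown and a mobile viewport.
# Assumes the lighthouse CLI is available (npm install -g lighthouse, or npx).
npx lighthouse https://example.com \
  --form-factor=mobile \
  --screenEmulation.mobile \
  --throttling-method=simulate \
  --throttling.cpuSlowdownMultiplier=4 \
  --only-categories=performance \
  --output=html --output-path=./report.html
```

Running this in CI on every deploy keeps the throttled number, not the developer-machine number, as the score the team sees.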
Bundle Size by Package (What You're Actually Installing)
Selected package gzipped bundle contributions:
Framework/UI:
React + ReactDOM: ~45KB
Lodash (full): ~25KB ← Don't install full lodash
Lodash (cherry-pick): ~2KB ← Import specific functions
moment.js: ~72KB (locales included) ← Use date-fns (~3KB) or Luxon (~18KB)
date-fns (full): ~75KB ← Use named imports for tree-shaking
date-fns (tree-shook): ~8KB ← import { format } from 'date-fns'
State Management:
Redux Toolkit: ~12KB
Zustand: ~1KB ← 12x smaller than Redux
Jotai: ~3KB
GraphQL Clients:
Apollo Client: ~47KB
urql: ~14KB ← 3x smaller
graphql-request: ~5KB ← 9x smaller
HTTP:
axios: ~14KB
ky: ~4KB
native fetch: 0KB ← Use this
The package size comparison above reveals a pattern that developers consistently underestimate: state management libraries are excellent candidates for size optimization because the alternatives are dramatically smaller. Redux Toolkit at 12KB is not large by absolute standards, but Zustand at 1KB covers most of the same use cases at a 12x reduction. The calculation changes when Redux's specific features are actively used: complex middleware, time-travel debugging, deeply nested reducer composition. But many applications use Redux Toolkit in a way that Zustand's simpler model handles completely.
HTTP clients are the other consistent quick win. The global fetch is available in Node.js 18+, all modern browsers, Cloudflare Workers, Bun, and Deno. For the majority of API call patterns — JSON request/response with headers and error handling — native fetch requires 0KB added to the bundle. Axios at 14KB is justified when its interceptor system, automatic JSON serialization, or request cancellation features are actively used. For basic API calls, the 14KB is overhead that generates no user-facing value.
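For teams making that swap, the axios surface most projects actually use collapses into a few lines over the global fetch. A minimal sketch (the fetchJSON name and its error behavior are illustrative choices, not a standard API):

```javascript
// Minimal JSON helper over the global fetch (Node 18+, all modern browsers).
// fetchJSON is an illustrative name, not a standard API.
async function fetchJSON(url, { method = 'GET', headers = {}, body } = {}) {
  const res = await fetch(url, {
    method,
    headers: { 'content-type': 'application/json', ...headers },
    body: body === undefined ? undefined : JSON.stringify(body),
  });
  // fetch only rejects on network failure; surface HTTP error statuses
  // explicitly, which is the axios behavior people miss most.
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
  return res.json();
}
```

That covers JSON request/response, custom headers, and error handling at 0KB of bundle weight; interceptors and request cancellation are where axios starts earning its 14KB.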
Measuring Your Bundle
# Vite bundle analysis
npm install --save-dev rollup-plugin-visualizer
# vite.config.ts
import { visualizer } from 'rollup-plugin-visualizer';
export default defineConfig({
plugins: [
visualizer({
open: true, // Opens in browser after build
gzipSize: true, // Shows gzipped sizes
brotliSize: true,
}),
],
});
# Run: vite build
# Opens: dist/stats.html — interactive treemap of your bundle
# bundlephobia — check before installing
# https://bundlephobia.com/package/lodash@4.17.21
# Output: ~72 kB minified | ~26 kB minified + gzipped, plus an export analysis
# npmjs.com bundle size badge
# Most package pages show bundle size in the sidebar
# @next/bundle-analyzer — for Next.js
npm install --save-dev @next/bundle-analyzer
# next.config.js
const withBundleAnalyzer = require('@next/bundle-analyzer')({ enabled: process.env.ANALYZE === 'true' });
module.exports = withBundleAnalyzer({});
# Run: ANALYZE=true next build
The Two Levers: Tree-Shaking and Code Splitting
1. Tree-Shaking (Eliminate Dead Code)
// ❌ Imports entire lodash (~25KB gzipped)
import _ from 'lodash';
const result = _.groupBy(data, 'category');
// ✅ Cherry-pick: ~2KB
import groupBy from 'lodash/groupBy';
// ✅ Named import (if package supports tree-shaking):
import { groupBy } from 'lodash-es'; // ESM lodash
// ❌ date-fns namespace import defeats tree-shaking
import * as dateFns from 'date-fns';
// ✅ date-fns tree-shaken (~8KB vs ~75KB)
import { format, parseISO, differenceInDays } from 'date-fns';
// Check if a package is tree-shakeable:
// 1. Does it have "sideEffects": false in package.json?
// 2. Does it have an ESM build?
// 3. Does bundlephobia show a smaller size with named imports?
// Example: Material UI
import Button from '@mui/material/Button'; // ✅ Tree-shaken: ~15KB
import { Button } from '@mui/material'; // ✅ Also fine with MUI's proper exports
import * as MUI from '@mui/material'; // ❌ Entire library: ~300KB
2. Code Splitting (Load Later)
// Next.js — dynamic imports (code splitting)
import dynamic from 'next/dynamic';
// This component is split into a separate chunk
// Only loaded when user visits /admin
const AdminDashboard = dynamic(() => import('./AdminDashboard'), {
loading: () => <Skeleton />,
ssr: false, // Don't SSR admin components
});
// Vite — automatic code splitting with dynamic import
// Any dynamic import() creates a separate chunk
const { default: Chart } = await import('./ChartComponent');
// Route-based splitting (React Router)
const routes = [
{
path: '/dashboard',
lazy: () => import('./pages/Dashboard'), // Loaded on demand
},
];
Tree-shaking is the higher-leverage lever of the two for most applications, because it reduces the cost of packages already in the dependency graph without requiring architectural changes. Code splitting reduces what loads on the critical path but requires application structure that supports deferred loading — route-based splitting works well, but component-level splitting requires thoughtful placement of Suspense boundaries and loading states.
The tree-shaking failure mode that teams encounter most often is the barrel file problem. A barrel file (index.ts that re-exports from multiple modules) appears in import { Button, Input, Select } from './components'. To the bundler, this looks like a single import from a module that might have side effects. Without explicit "sideEffects": false in the component library's package.json, the bundler conservatively includes everything exported from the barrel rather than just the three named components. The fix is to either configure sideEffects: false (if you maintain the library) or to import from specific files (import { Button } from './components/Button') rather than the barrel. Teams that run bundle analyzers for the first time frequently discover that 30-50% of their bundle weight traces back to this barrel import pattern in internal UI libraries.
Code splitting becomes essential for applications with large feature surfaces — admin dashboards, data-heavy reporting pages, editor-heavy content creation tools. A user accessing the login page has no reason to download the PDF generation library used only in the invoice export feature. React's lazy() and Suspense make component-level code splitting declarative; Next.js's dynamic() wrapper handles the SSR considerations automatically. The rule: any feature that is accessed by fewer than 50% of users in a session is a candidate for code splitting.
Core Web Vitals and Bundle Size
INP (Interaction to Next Paint) — the metric that measures JS impact:
Good: < 200ms
Needs improvement: 200-500ms
Poor: > 500ms
How bundle size affects INP:
1. Large initial bundle → long parse time → delayed interactivity
2. Synchronous scripts → blocking main thread → slow first interaction
3. Large event handlers → slow response to clicks/taps
Google's data:
- Sites in "Good" INP (< 200ms) have ~50% higher conversion rates
- 100ms INP improvement → ~2% revenue increase for e-commerce
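Field data for slow interactions comes from the browser's Event Timing API, which is what INP is computed from. A browser-only sketch (production code would more likely use onINP from the web-vitals library); the 200ms threshold mirrors the "Good" boundary above:

```javascript
// Browser-only: log interactions whose duration crosses INP's "Good" boundary.
// Production code would typically use onINP from the web-vitals library instead.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.duration > 200) {
      console.warn(`Slow interaction: ${entry.name} took ${Math.round(entry.duration)}ms`);
    }
  }
}).observe({ type: 'event', buffered: true, durationThreshold: 40 });
```

Paste it into the console on a throttled session and interact with the page; the warnings point at the handlers that would drag INP into "needs improvement."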
Quick Wins Checklist
# Immediate wins (1-2 hours each):
□ Replace moment.js with date-fns or Luxon
Savings: ~65KB (gzipped)
□ Replace lodash with targeted imports or radash
Savings: ~20-25KB depending on usage
□ Remove unused icon packs (use only icons you need)
Savings: ~20-100KB (icon libraries are huge)
□ Enable dynamic imports for modals/dialogs
Savings: those components + their deps, loaded on demand
□ Use native fetch instead of axios
Savings: ~14KB
□ Switch from Apollo Client to urql or graphql-request
Savings: ~30-42KB
□ Run bundle analyzer to find unexpected large deps
Often finds: polyfills you don't need, duplicate packages, forgotten test utils
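Several of the lodash functions that pull the full package in have tiny native equivalents. The groupBy used earlier, for example, is a few lines of reduce; a sketch that covers the common lodash cases, not every edge case:

```javascript
// Zero-dependency groupBy covering the common lodash _.groupBy cases:
// accepts either a property name or a key function.
function groupBy(items, keyOrFn) {
  const getKey = typeof keyOrFn === 'function' ? keyOrFn : (item) => item[keyOrFn];
  return items.reduce((acc, item) => {
    const key = String(getKey(item));
    (acc[key] ??= []).push(item);
    return acc;
  }, {});
}
```

In newer runtimes `Object.groupBy` does this natively; either way, the utility costs bytes only for what it does.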
INP's introduction as a Core Web Vitals metric in March 2024 directly linked JavaScript bundle size to SEO ranking in a way that page speed (FCP/LCP) alone did not fully capture. LCP is primarily determined by server response time and asset delivery — a well-configured CDN can achieve good LCP even with a large JavaScript bundle, because LCP measures the time to the largest content element visible in the viewport, not the time until the page is interactive. INP measures interaction responsiveness after load, which is directly shaped by how much JavaScript is competing for the main thread.
This distinction matters for prioritization. A development team that optimized for LCP and achieved "good" scores may still have a poor INP if their bundle is large enough to create main thread contention during user interactions. The two metrics require different optimization strategies: LCP responds to CDN, caching, and SSR improvements; INP responds to JavaScript reduction, code splitting, and deferring non-critical script execution. In 2026, both metrics appear in Google Search Console performance reports and both affect search ranking.
The "Quick Wins" framing is accurate because these are genuinely low-effort, high-return improvements — the kind that require a few hours of work but produce measurable performance improvements that Lighthouse and Web Vitals data will reflect. The order matters. Replacing moment.js with date-fns named imports typically takes 2-4 hours for a medium-sized codebase and removes 65KB from the initial bundle. That 65KB has a larger performance impact than removing a 5KB utility you've never heard of, so prioritize by size impact rather than by ease.
The icon library problem is underappreciated. Developers frequently install a complete icon library (@heroicons/react, lucide-react, react-icons) and import icons using the recommended API, which should tree-shake correctly. But react-icons in particular ships as a large barrel package where tree-shaking support varies by the sub-collection used. A project that uses 15 icons from react-icons might end up bundling 200+ icons if the specific subpackage doesn't support tree-shaking. Running a bundle analyzer after installing any icon library is a recommended step that catches this category of problem immediately.
Polyfills are another hidden bundle inflation source. Projects bootstrapped with older tooling configurations may still include polyfills for features that have been universally supported for years. IE 11 polyfills, for example, add significant weight and are unnecessary for any project that no longer targets IE. The @babel/preset-env configuration, browserslist queries, and any explicit polyfill packages in package.json are worth auditing to confirm they match the actual browser support targets. Projects that inherited an old webpack configuration from 2019 sometimes discover 50-100KB of polyfills they've been shipping to users for years without realizing it.
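What @babel/preset-env and modern bundlers actually polyfill is driven by the project's browserslist targets, so the audit usually starts there. An illustrative .browserslistrc for a modern baseline; adjust the queries to your real analytics:

```
# .browserslistrc — modern-baseline targets; tune to your actual user data.
# Dropping "ie 11" from an inherited config is often worth 50-100KB alone.
defaults and fully supports es6-module
not dead
```

Run `npx browserslist` in the project root to see exactly which browsers the current queries resolve to; a list that still includes IE or ancient Android WebViews explains a lot of mystery polyfill weight.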
Dynamic imports create another category of quick wins for applications with clear navigation patterns. A settings page, an admin panel, an invoice generation feature — these are all candidates for React.lazy() wrapping because they're accessed infrequently and have self-contained functionality that bundles into a separately loadable chunk. The implementation is typically 5-10 lines of code change per feature, and the result is that users who never visit those features don't pay the JavaScript parse cost for them.
The Mobile Reality: Where Bundle Size Actually Hurts
Over 60% of global web traffic arrives from mobile devices, but developer testing habits have not caught up with that reality. When developers benchmark their application's performance, they typically test on the device in front of them — an M3 MacBook connected to fiber or fast office Wi-Fi. That test environment is representative of roughly the top 5% of your users' hardware. The other 95% are on something slower.
The global median device for web browsing is a mid-range Android phone from 2021 or 2022 — not an iPhone 15 Pro Max. On that device, the JavaScript parsing and compilation cost for a 1MB uncompressed JavaScript payload is between one and four seconds, depending on the chipset. Gzip compression helps with download time but not with parse time: the browser still has to decompress and parse the full uncompressed byte count. A 200KB gzipped bundle is still 600KB of JavaScript that the CPU needs to work through before the page becomes interactive.
Every 100KB increase in JavaScript payload costs approximately 200 milliseconds of parse time on a mid-range Android device. That number compounds: two poorly-chosen dependencies can easily add 500ms of interaction delay on devices that represent the majority of your user base.
The network picture is similarly nuanced. Emerging markets — where a significant fraction of global internet growth is occurring — often have 4G connections that are technically fast but high-latency, or LTE connections that are fast when the signal is strong but degrade significantly in crowded areas. Assuming consistent 4G throughput for your users in Southeast Asia, Latin America, or sub-Saharan Africa leads to the same testing fallacy as assuming M3 MacBook CPU performance.
The business stakes are concrete. Google's Core Web Vitals incorporate Interaction to Next Paint (INP) as a ranking signal. A large initial bundle directly delays INP by keeping the main thread busy with parse and compile work. Studies of e-commerce sites show that 100ms of latency reduction correlates with approximately a 1% improvement in conversion rate. At scale, bundle bloat is not an engineering nicety — it is a revenue and SEO variable.
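Those rules of thumb can be folded into a quick estimate. A sketch that takes the ~3x gzip expansion and the ~200ms-per-100KB figure (read as ~2ms per uncompressed KB) as inputs; both are this article's rough numbers, not measurements of any specific device:

```javascript
// Back-of-envelope main-thread cost of a gzipped bundle on a mid-range
// Android phone, using this article's rules of thumb:
// ~3x gzip expansion, ~2ms of parse/compile per uncompressed KB.
function estimateParseCostMs(gzippedKB, { expansionRatio = 3, msPerKB = 2 } = {}) {
  const uncompressedKB = gzippedKB * expansionRatio;
  return uncompressedKB * msPerKB;
}
```

By this arithmetic a 200KB gzipped bundle costs on the order of 1.2 seconds of parse/compile before a single interaction handler runs, which is why the budget numbers later in this article are set where they are.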
Build Tooling Impact: Vite vs webpack vs esbuild
The choice of bundler shapes how much effort is required to achieve a lean bundle, but modern bundlers converge on similar final sizes when properly configured. The differences are in how much work falls on the developer versus the tool.
Vite uses Rollup under the hood for production builds. Rollup's tree-shaking is aggressive and scope-hoisting aware — it can eliminate dead code across module boundaries, not just within individual files. Code splitting in Vite is automatic: any dynamic import() call creates a separate chunk. The result is that a default Vite project achieves reasonable bundle discipline without manual configuration. The rollup-plugin-visualizer generates an interactive treemap of the bundle after each build, making it straightforward to identify what is contributing weight.
webpack 5 achieves comparable tree-shaking with optimization.usedExports: true and optimization.sideEffects: true in the configuration. The difference is that webpack requires more explicit configuration to get there — out of the box, it is more conservative about eliminating code it cannot statically prove is unused. For Next.js projects (which use webpack 5 by default), @next/bundle-analyzer is the standard diagnostic tool.
esbuild is an outlier in speed: it transforms JavaScript faster than any other mainstream tool. But its tree-shaking is less aggressive than Rollup's, and its code splitting is more limited (ESM output only, driven by dynamic imports rather than configurable chunking). It is best suited for development builds, CLI tools, and small projects where build speed matters more than final bundle optimization.
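When Vite's automatic chunking needs a nudge, Rollup's manualChunks hook (exposed through Vite's build config) is the usual lever. An illustrative vite.config.ts that routes dependency code into a separate vendor chunk; the chunk name is arbitrary:

```typescript
// vite.config.ts — illustrative manual chunking on top of Vite's defaults.
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        // Route all node_modules code into a long-cacheable "vendor" chunk
        // so app-code changes don't invalidate the framework bytes.
        manualChunks(id) {
          if (id.includes('node_modules')) return 'vendor';
        },
      },
    },
  },
});
```

The trade-off: a single vendor chunk caches well across deploys but loads as one unit, so very large dependency graphs may warrant splitting vendor further by package.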
The factor that actually ruins tree-shaking across all bundlers, and the source of most "I thought tree-shaking was supposed to help" complaints, is barrel exports: files that re-export everything from a directory (export * from './components'). Barrel files prevent bundlers from determining at build time which specific exports are used, so entire modules get included even when only one function is needed. This is the root cause of the surprising bundle sizes developers find when they run a bundle analyzer for the first time. A common finding in bundle audits: 30 to 40% of the total bundle is attributable to untreated barrel imports from internal packages or third-party UI libraries.
Setting Your 2026 Bundle Budget
A budget without enforcement is a suggestion. Bundle budgets only work when they are checked automatically in CI, surfacing regressions as PR comments before they merge.
The recommended starting budget for a content or SaaS application in 2026: initial JavaScript (the JS loaded on first page visit) under 100KB gzipped, total JavaScript (including lazily loaded chunks) under 200KB gzipped. These numbers reflect what is achievable with a well-structured Next.js or Vite application using modern dependencies — they are not aspirational, they are practical. Projects that exceed them are typically carrying one or two large dependencies that have smaller alternatives.
The size-limit npm package provides a straightforward way to enforce this in CI. A configuration file specifies per-file or per-route size limits; the package fails the CI run if any limit is breached. The configuration is typically a JSON file in the project root with entries like { "path": "dist/index.js", "limit": "100 kB" }. Bundlesize and Lighthouse CI with budget configuration files are alternatives that integrate differently depending on the existing CI setup.
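A concrete shape for that setup, assuming size-limit's package.json configuration with the @size-limit/file preset as a dev dependency; the paths, limits, and version ranges are illustrative:

```json
{
  "devDependencies": {
    "size-limit": "^11.0.0",
    "@size-limit/file": "^11.0.0"
  },
  "scripts": {
    "size": "size-limit"
  },
  "size-limit": [
    { "path": "dist/assets/index-*.js", "limit": "100 kB" },
    { "path": "dist/assets/*.js", "limit": "200 kB" }
  ]
}
```

`npm run size` after the build then fails the CI job when any limit is breached; the file preset measures gzipped size by default, matching the budgets above.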
The practical improvement process: run a bundle analyzer on the current build to see the treemap, identify the three largest contributors, check each on bundlephobia.com for smaller alternatives, make the replacements, and measure again. Most projects can achieve a 20 to 30% reduction in initial JS in a single focused sprint. The most common wins are replacing moment.js with date-fns named imports, eliminating full lodash in favor of cherry-picked imports, and swapping Apollo Client for a smaller GraphQL client when the Apollo-specific features are not in use.
Enforcing budgets makes the next sprint unnecessary — regressions get caught before they accumulate into the state that required the audit in the first place.
The cultural dimension of bundle budgets is often overlooked. When bundle size is invisible in the development workflow — not measured in CI, not surfaced in code review, not visible as a metric — engineers make package choices without considering their cost. Adding a 50KB dependency to solve a two-hour implementation problem feels like a reasonable trade-off when considered in isolation. When the PR shows that the change increases the initial bundle by 15%, the conversation shifts: is this 50KB worth the INP penalty for every user on every page load, indefinitely? Sometimes it is. Often there's a smaller alternative that wasn't considered because the search stopped at "does it solve the problem?" rather than "does it solve the problem efficiently?" Automated bundle size visibility in the development workflow changes the question that gets asked — and therefore the decisions that get made.
Compare package sizes on PkgPulse — bundle size data included for all comparisons.
See also: Bun vs Vite, AVA vs Jest, and Why Bundle Size Matters More Than Your Framework Choice.