ofetch vs undici in 2026: Server-Side HTTP Compared
TL;DR
ofetch for universal Nuxt/Nitro apps; undici for maximum Node.js HTTP performance. ofetch (~5M weekly downloads) is the unified HTTP client from the Nuxt team — works in browsers, Node.js, and edge runtimes with consistent behavior. undici (~18M weekly downloads) is Node.js's official HTTP/1.1 client built for performance — faster than Node's built-in http module and the basis for native fetch in Node.js 18+.
Key Takeaways
- undici: ~18M weekly downloads — ofetch: ~5M (npm, March 2026)
- undici is Node.js official — the basis for Node's built-in fetch
- ofetch works universally — browser, Node.js, and edge runtimes
- undici is faster — designed for high-throughput Node.js servers
- ofetch has better DX — automatic JSON, retries, error handling
What Each Is For
undici:
- Node.js official HTTP client (from Node.js core team)
- Maximum HTTP performance for Node.js servers
- Low-level control over connection pools, pipelining
- Powers Node.js 18+ built-in fetch
- Best for: high-throughput server-to-server requests
ofetch:
- Universal fetch wrapper from Nuxt team
- Consistent behavior across environments
- Used internally by Nuxt, Nitro, and H3
- Smart defaults (JSON, error handling, retries)
- Best for: universal apps, Nuxt projects, readable code
The key distinction is the environment each library targets. undici is a Node.js-specific library — it uses Node.js stream APIs and connection pooling features that don't exist in browsers or edge runtimes. ofetch is environment-agnostic — it uses the Fetch API standard and adds developer-friendly defaults on top, running identically whether you're in a browser, Cloudflare Workers, or Node.js.
API Comparison
// undici — low-level but fast
import { fetch, request, pipeline, stream } from 'undici';
// Using undici's high-performance request
const { statusCode, body } = await request('https://api.example.com/users', {
method: 'GET',
headers: { Authorization: `Bearer ${token}` },
});
const data = await body.json();
// Connection pool for high-throughput (key undici advantage)
import { Pool } from 'undici';
const pool = new Pool('https://api.example.com', {
connections: 10, // 10 concurrent connections
pipelining: 3, // HTTP pipelining
});
// 100 requests reusing the pool:
const results = await Promise.all(
ids.map(id => pool.request({ path: `/users/${id}`, method: 'GET' })
.then(({ body }) => body.json())
)
);
// ofetch — developer-friendly universal client
import { ofetch } from 'ofetch';
// GET — automatic JSON, type-safe
const users = await ofetch<User[]>('/api/users', {
baseURL: 'https://api.example.com',
headers: { Authorization: `Bearer ${token}` },
});
// users is User[] — typed
// POST — automatic JSON serialization
const user = await ofetch<User>('/api/users', {
method: 'POST',
body: { name: 'Alice', email: 'alice@example.com' },
// body is automatically stringified to JSON
});
// Error handling — throws FetchError on 4xx/5xx
try {
await ofetch('/api/missing');
} catch (err) {
if (err.response) {
console.log(err.status); // 404
console.log(err.data); // Parsed error response body
}
}
The developer experience difference is significant for day-to-day code. ofetch's automatic JSON parsing means you never forget to call .json() on a response body. Its automatic error throwing on 4xx/5xx responses (with the response body attached to the error) eliminates the need for manual status code checking. These defaults are opinionated but correct for the vast majority of API calls.
undici's API is lower-level by design. The request() function returns a stream you must explicitly consume — body.json() or body.text(). This gives you more control (you can stream large responses without loading them into memory) but requires more code for simple use cases.
Nuxt / Nitro Integration
// ofetch is used internally by Nuxt and $fetch is the global
// No import needed in Nuxt components:
const data = await $fetch('/api/users');
// In Nuxt server routes (auto-imported):
export default defineEventHandler(async (event) => {
// $fetch works on both server and client
const user = await $fetch<User>(`/api/users/${getRouterParam(event, 'id')}`);
return user;
});
// useFetch composable (wraps ofetch + reactivity):
const { data, pending, error } = await useFetch('/api/users');
If you're using Nuxt 3, ofetch is already your HTTP client — it's the implementation behind $fetch and useFetch. The consistent behavior is the key benefit: when a Nuxt page uses $fetch('/api/users'), it runs on the server during SSR (making a local function call) and on the client during navigation (making an HTTP request to the same URL). The behavior is identical in both environments.
Performance
Benchmark: 10,000 POST requests to localhost (Node.js)
Client | Req/sec | Notes
----------------|----------|--------
undici Pool | 45,000 | Connection pooling
undici fetch | 38,000 | Standard fetch API
node-fetch | 22,000 | Legacy wrapper
ofetch | 20,000 | Universal, some overhead
Axios | 15,000 | Adapter abstraction overhead
undici is 2-3x faster than alternatives for server-to-server.
For typical web app use, the difference is negligible.
The performance advantage of undici's Pool is most significant for services that make many repeated requests to the same host. A Node.js service that calls a backend API on every incoming request benefits enormously from connection pooling — rather than establishing a new TCP connection per call, it reuses persistent connections from the pool.
For applications making fewer than 1,000 requests per second to external APIs, the difference between undici and ofetch is negligible. Both are fast enough that HTTP client overhead won't be your bottleneck.
Error Handling Patterns
// ofetch — throws on 4xx/5xx with response attached
async function getUser(id) {
try {
return await ofetch(`/api/users/${id}`);
} catch (err) {
if (err.status === 404) return null;
if (err.status === 401) throw new AuthenticationError();
throw err; // Re-throw unexpected errors
}
}
// ofetch — retry configuration
const api = ofetch.create({
baseURL: 'https://api.example.com',
retry: 3, // Retry up to 3 times
retryDelay: 500, // 500ms between retries
retryStatusCodes: [429, 500, 502, 503, 504], // Status codes to retry
});
// undici — manual error handling
async function getUser(id) {
const { statusCode, body } = await request(`/api/users/${id}`);
if (statusCode >= 400) {
await body.dump(); // consume the body so the pooled connection is released
if (statusCode === 404) return null;
if (statusCode === 401) throw new AuthenticationError();
throw new ServerError(statusCode);
}
return body.json();
}
ofetch's built-in retry logic is a practical advantage for production applications that call external APIs. Network hiccups, rate limit errors, and transient server errors are common, and implementing retry logic correctly (with exponential backoff, jitter, and status code filtering) is non-trivial. ofetch handles this with a few configuration options.
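For contrast, here is a hedged sketch of what that retry configuration replaces if you write it by hand around undici or native fetch. `withRetry` and `doRequest` are hypothetical names, and the example assumes errors carry a numeric `status` field:

```typescript
// Retry with exponential backoff, jitter, and status-code filtering —
// roughly what ofetch's retry/retryDelay/retryStatusCodes options automate.
async function withRetry<T>(
  doRequest: () => Promise<T>,
  { retries = 3, baseDelay = 500, retryOn = [429, 500, 502, 503, 504] } = {},
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await doRequest();
    } catch (err: any) {
      const retryable = typeof err?.status === 'number' && retryOn.includes(err.status);
      if (!retryable || attempt >= retries) throw err;
      // Exponential backoff with jitter: 500ms, 1000ms, 2000ms, each +0-100ms.
      const delay = baseDelay * 2 ** attempt + Math.random() * 100;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

// Usage: fails twice with a retryable 503, succeeds on the third attempt.
let calls = 0;
const result = await withRetry(async () => {
  calls += 1;
  if (calls < 3) throw Object.assign(new Error('upstream 503'), { status: 503 });
  return 'ok';
}, { baseDelay: 1 });
console.log(result, calls); // ok 3
```

Even this sketch omits concerns the real thing needs (respecting Retry-After headers, aborting on timeout), which is the argument for letting the client library handle it.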
Creating API Clients
// ofetch — create a configured instance
import { $fetch } from 'ofetch';
export const apiClient = $fetch.create({
baseURL: process.env.API_BASE_URL,
headers: { 'X-API-Key': process.env.API_KEY },
retry: 2,
onResponseError({ response }) {
if (response.status === 401) {
// Handle token refresh
}
},
});
// Usage:
const users = await apiClient<User[]>('/users');
const post = await apiClient<Post>('/posts', { method: 'POST', body: data });
// undici — Pool-based client
import { Pool } from 'undici';
const api = new Pool(process.env.API_BASE_URL, {
connections: 5,
});
// Pool options don't take default headers — pass them per request:
const defaultHeaders = { 'X-API-Key': process.env.API_KEY };
async function get<T>(path: string): Promise<T> {
const { statusCode, body } = await api.request({
path,
method: 'GET',
headers: defaultHeaders,
});
if (statusCode >= 400) throw new Error(`API error: ${statusCode}`);
return (await body.json()) as T;
}
When to Choose
Choose undici when:
- High-throughput Node.js microservice (10K+ requests/second)
- You need connection pooling for repeated requests to same host
- Maximum raw HTTP performance is the priority
- You need HTTP pipelining or opt-in HTTP/2 support
Choose ofetch when:
- Building with Nuxt or Nitro framework
- Universal app that runs on browser and server
- Developer ergonomics over raw performance
- You want consistent behavior across environments
- You need automatic retry, JSON parsing, and error handling
Choose native fetch when:
- Node.js 18+ and you need basic HTTP with no additional features
- Edge runtime compatibility is required
- Zero dependencies is a hard constraint
Community Adoption in 2026
ofetch reaches approximately 5 million weekly downloads, driven primarily by Nuxt's ecosystem. Every Nuxt 3+ application uses ofetch through the $fetch global, making it one of the most deployed HTTP libraries in production even though many developers interact with it through the Nuxt abstraction. Its universal behavior — consistent JSON handling, retry logic, and error normalization across Node.js, browser, and edge runtimes — makes it the smoothest option for applications that run code in multiple environments.
undici reaches approximately 18 million weekly downloads and has been bundled with Node.js since v18 (as the implementation behind fetch). Direct undici imports are primarily used by library authors and infrastructure teams who need connection pooling (Pool), opt-in HTTP/2 support, or raw performance for high-throughput server-to-server communication. Its Pool class can achieve 2-3x the throughput of standard fetch for sustained request streams to a single host.
Native fetch (Node.js 18+) is not tracked separately on npm but represents the baseline available without any install. For straightforward use cases — single requests, simple JSON APIs, edge functions — native fetch is often sufficient. The primary reasons to reach for ofetch over native fetch are retry logic, consistent error throwing on non-2xx responses, and the $fetch.create() instance API.
Migration Guide
From Axios to ofetch for universal apps
// Axios (old) — doesn't work in edge runtimes
import axios from "axios"
const { data } = await axios.get<Package>("https://api.pkgpulse.com/packages/react")
// ofetch (new) — works in Node.js, browser, and edge runtimes
import { ofetch } from "ofetch"
const data = await ofetch<Package>("https://api.pkgpulse.com/packages/react")
// Auto-parses JSON; throws FetchError with response attached on 4xx/5xx
The key behavioral difference: ofetch throws on non-2xx status codes by default (like Axios), while native fetch does not. ofetch also serializes request bodies to JSON automatically when passed as objects, eliminating the JSON.stringify + Content-Type boilerplate.
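As a hedged illustration of that boilerplate, here is the same JSON POST written against native fetch, with a local node:http echo server standing in for the real API (the route and payload are made up for the example):

```typescript
import { createServer } from 'node:http';
import type { AddressInfo } from 'node:net';

// Echo server standing in for the real API: returns the JSON body it receives.
const server = createServer((req, res) => {
  let raw = '';
  req.on('data', chunk => { raw += chunk; });
  req.on('end', () => {
    res.writeHead(201, { 'content-type': 'application/json' });
    res.end(raw);
  });
});
await new Promise<void>(resolve => server.listen(0, () => resolve()));
const { port } = server.address() as AddressInfo;

// With native fetch, every step ofetch automates is manual:
const res = await fetch(`http://localhost:${port}/packages`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' }, // set the header yourself
  body: JSON.stringify({ name: 'react' }),          // stringify yourself
});
if (!res.ok) throw new Error(`HTTP ${res.status}`); // fetch never throws on 4xx/5xx
const pkg = (await res.json()) as { name: string }; // parse yourself
console.log(pkg.name); // react

server.close();
```

With ofetch the request collapses to a single call with an object body, and a 4xx/5xx response would surface as a thrown FetchError instead of a silently "ok: false" response.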
Edge Runtime Compatibility and Bundle Considerations
The choice between ofetch and undici has significant implications for where your code can run and how it affects bundle size.
Edge runtime compatibility is a key differentiator. ofetch is designed for universal execution and works in browser environments, Node.js, Deno, Bun, and edge runtimes (Cloudflare Workers, Vercel Edge Runtime, Netlify Edge Functions). It uses the platform's native fetch when available and falls back to the node-fetch-native polyfill when it isn't. undici, by contrast, is a Node.js-specific HTTP client that uses Node.js's TCP socket APIs — it cannot run in browser environments or Cloudflare Workers, which use a V8 isolate without Node.js APIs.
Bundle size impact is significant for browser-included code. undici is a large package (~500KB unpacked) because it includes a full HTTP/1.1 and HTTP/2 client implementation. It should never be bundled for browser delivery — it is for server-side Node.js only. ofetch, built on native fetch with minimal additions, adds roughly 15-20KB to a client bundle. For Next.js API routes and server components, undici's size does not matter (it is never sent to the browser), but for universal isomorphic code, ofetch is the appropriate choice.
Connection pooling is undici's primary advantage for high-throughput server-to-server communication. undici maintains persistent keep-alive HTTP/1.1 connection pools, and recent versions can opt into HTTP/2 per client via the allowH2 option. Multiple requests to the same origin reuse established connections, reducing connection setup overhead and improving latency under load. ofetch uses standard fetch, which in Node.js 18+ is backed by undici internally — so the performance gap between ofetch (using native fetch) and undici directly has narrowed, though using undici's Pool directly still provides more control over connection management.
Error context and debugging differ meaningfully. undici throws UndiciError subclasses (like ConnectTimeoutError, BodyTimeoutError) that contain the request URL, method, and timing information. ofetch throws FetchError objects with the parsed response body included. For diagnosing production issues, ofetch's inclusion of the response body in the error is more actionable for API error debugging, while undici's low-level error types are more useful for diagnosing network-level failures like connection timeouts in service-to-service calls.
Compare ofetch and undici package health on PkgPulse. Also see our Axios vs ky vs undici comparison and best Node.js API frameworks.
See the live comparison
View ofetch vs. undici on PkgPulse →