
Guide

SSE vs WebSocket vs Long Polling 2026

Compare Server-Sent Events, WebSockets, and long polling for real-time web communication: when to use each, Next.js route handlers, AI streaming, and TypeScript patterns.

·PkgPulse Team·

TL;DR

Server-Sent Events (SSE) are the best choice for one-directional server-to-client streaming — they're built on HTTP, reconnect automatically, work through proxies, and are perfect for notifications, live feeds, and AI streaming responses. WebSockets are the best choice for true bi-directional communication — chat, collaborative editing, multiplayer games, and live cursors. Long polling is the legacy fallback — it works everywhere but is inefficient. In 2026, SSE covers roughly 80% of the real-time use cases developers instinctively reach for WebSockets to solve, with far less infrastructure complexity.

Key Takeaways

  • SSE: Built into browsers, HTTP/1.1+, auto-reconnect, one-way (server → client), proxies ✅
  • WebSocket: Full-duplex, separate protocol, no auto-reconnect, firewall/proxy issues
  • Long polling: Universal compatibility, but inefficient — an HTTP request per update
  • AI streaming (ChatGPT, Vercel AI SDK) uses SSE — not WebSockets
  • SSE is free on Vercel Edge Functions and Cloudflare Workers — WebSockets require upgrades
  • Use WebSockets only when you need the client to send frequent messages back to the server

Protocol Comparison

Server-Sent Events (SSE):
  Client ──── GET /stream ──→ Server
  Client ←── data: {...}\n\n── Server  (repeated, single long response)
  HTTP/1.1 — no protocol upgrade, just a long-lived response
  Auto-reconnect: built into EventSource API

WebSocket:
  Client ──── GET /ws ──────→ Server  (HTTP Upgrade request)
  Client ←─ 101 Switching ──→ Server  (protocol upgrade)
  Client ←──────────────────→ Server  (full-duplex binary frames)
  No auto-reconnect — must implement yourself

Long Polling:
  Client ──── GET /poll ────→ Server
  Server holds connection open until data available (or timeout)
  Client ←── response ────── Server
  Client ──── GET /poll ────→ Server  (immediately repeat)
  Very inefficient — one request per event
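The SSE wire format in the first diagram is plain text: optional `id:` and `event:` fields, one or more `data:` lines, and a blank line terminating each event. A minimal formatter (a hypothetical helper, not from any library) makes the framing concrete:

```typescript
// Format one SSE event as it appears on the wire.
// `event` and `id` are optional fields defined by the SSE spec.
function formatSSE(data: unknown, event?: string, id?: string): string {
  let frame = ""
  if (id) frame += `id: ${id}\n`
  if (event) frame += `event: ${event}\n`
  frame += `data: ${JSON.stringify(data)}\n\n` // blank line terminates the event
  return frame
}

// Example: a named event with an id.
const wire = formatSSE({ score: 98 }, "health_update", "42")
// 'id: 42\nevent: health_update\ndata: {"score":98}\n\n'
```

The `id:` field is what the browser echoes back as the Last-Event-ID header when it reconnects, which lets a server resume where the stream left off.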

Server-Sent Events

Browser client (EventSource API)

// Browser: built-in EventSource API — no library needed

// Connect to SSE endpoint:
const eventSource = new EventSource("https://api.pkgpulse.com/packages/stream")

// Default event (text/event-stream with no `event:` type):
eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data)
  console.log("Package update:", data)
}

// Named events (server sends `event: health_update`):
eventSource.addEventListener("health_update", (event) => {
  const update = JSON.parse(event.data)
  updatePackageCard(update.name, update.score)
})

eventSource.addEventListener("alert", (event) => {
  const alert = JSON.parse(event.data)
  showNotification(alert.message)
})

// Error handling:
eventSource.onerror = () => {
  if (eventSource.readyState === EventSource.CLOSED) {
    // CLOSED means EventSource gave up (e.g. a fatal HTTP error) and
    // will NOT retry — reconnect manually if you need to
    console.log("Connection closed")
  }
  // For transient network errors, EventSource reconnects automatically
}

// Close:
eventSource.close()

SSE with authentication (EventSource doesn't support headers)

// EventSource doesn't support custom headers — use query params or cookies:

// Option 1: token in URL (less secure, but works):
const eventSource = new EventSource(`/api/stream?token=${accessToken}`)

// Option 2: use fetch() with ReadableStream instead of EventSource:
async function connectSSE(onMessage: (data: unknown) => void) {
  const response = await fetch("/api/packages/stream", {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      Accept: "text/event-stream",
    },
  })

  const reader = response.body!.getReader()
  const decoder = new TextDecoder()
  let buffer = ""  // network chunks can split an event mid-line — buffer partials

  while (true) {
    const { done, value } = await reader.read()
    if (done) break

    buffer += decoder.decode(value, { stream: true })
    const lines = buffer.split("\n")
    buffer = lines.pop() ?? ""  // keep the trailing partial line for the next chunk

    for (const line of lines) {
      if (line.startsWith("data: ")) {
        onMessage(JSON.parse(line.slice(6)))
      }
    }
  }
  // Note: unlike EventSource, this fetch-based approach does NOT auto-reconnect —
  // wrap connectSSE in your own retry loop if you need that.
}

Next.js App Router SSE route handler

// app/api/packages/stream/route.ts

export const runtime = "edge"  // Works on edge, not just Node.js

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url)
  const packageName = searchParams.get("package") ?? "react"

  const stream = new ReadableStream({
    async start(controller) {
      const encoder = new TextEncoder()
      let closed = false
      let interval: ReturnType<typeof setInterval>

      // Helper to send SSE events:
      function send(data: unknown, event?: string) {
        if (closed) return
        let message = ""
        if (event) message += `event: ${event}\n`
        message += `data: ${JSON.stringify(data)}\n\n`
        controller.enqueue(encoder.encode(message))
      }

      // Close exactly once (calling controller.close() twice throws):
      function close() {
        if (closed) return
        closed = true
        clearInterval(interval)
        controller.close()
      }

      // Send initial data (fetchPackageHealth is your own data source):
      const initial = await fetchPackageHealth(packageName)
      send(initial, "health_update")

      // Poll and send updates every 5 seconds:
      interval = setInterval(async () => {
        try {
          const update = await fetchPackageHealth(packageName)
          send(update, "health_update")
        } catch (err) {
          send({ error: "Failed to fetch update" }, "error")
          close()
        }
      }, 5000)

      // Clean up when client disconnects:
      request.signal.addEventListener("abort", close)
    },
  })

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  })
}

AI streaming with SSE (the main use case in 2026)

// AI streaming uses SSE under the hood — this is how Vercel AI SDK works:

// app/api/chat/route.ts
// Note: OpenAIStream and StreamingTextResponse come from AI SDK 2.x —
// newer Vercel AI SDK versions replace them with streamText(), but the
// wire protocol underneath is still SSE either way.
import { StreamingTextResponse, OpenAIStream } from "ai"
import OpenAI from "openai"

const openai = new OpenAI()

export async function POST(request: Request) {
  const { messages } = await request.json()

  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
    stream: true,
  })

  // OpenAIStream converts the OpenAI stream to a ReadableStream
  // StreamingTextResponse sends it as SSE to the client
  const stream = OpenAIStream(response)
  return new StreamingTextResponse(stream)
}

// Client receives it as SSE and displays it incrementally:
// const { messages } = useChat({ api: "/api/chat" })

WebSocket

Node.js server (ws library)

import { WebSocketServer, WebSocket } from "ws"
import http from "http"

const server = http.createServer()
const wss = new WebSocketServer({ server })

// Track connected clients:
const clients = new Map<string, WebSocket>()

wss.on("connection", (ws, request) => {
  const userId = getUserIdFromRequest(request)
  clients.set(userId, ws)

  console.log(`Client connected: ${userId}`)

  // Receive messages from client:
  ws.on("message", (data) => {
    const message = JSON.parse(data.toString())

    switch (message.type) {
      case "subscribe_package":
        subscribeUserToPackage(userId, message.packageName)
        break

      case "chat_message":
        broadcastToRoom(message.roomId, {
          type: "chat_message",
          userId,
          text: message.text,
          timestamp: Date.now(),
        })
        break
    }
  })

  ws.on("close", () => {
    clients.delete(userId)
    unsubscribeUser(userId)
    console.log(`Client disconnected: ${userId}`)
  })

  ws.on("error", (err) => {
    console.error(`WebSocket error for ${userId}:`, err)
  })

  // Send welcome message:
  ws.send(JSON.stringify({ type: "connected", userId }))
})

function broadcast(data: unknown) {
  const message = JSON.stringify(data)
  for (const client of clients.values()) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(message)
    }
  }
}

server.listen(8080)
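The server above calls broadcastToRoom, subscribeUserToPackage, and friends without defining them. Room membership is just an in-memory index; a minimal sketch (helper names are illustrative, and this single-process approach does not survive horizontal scaling) could look like:

```typescript
// Hypothetical room registry: roomId -> set of userIds.
const rooms = new Map<string, Set<string>>()

function joinRoom(roomId: string, userId: string) {
  let members = rooms.get(roomId)
  if (!members) {
    members = new Set()
    rooms.set(roomId, members)
  }
  members.add(userId)
}

function leaveRoom(roomId: string, userId: string) {
  const members = rooms.get(roomId)
  members?.delete(userId)
  if (members && members.size === 0) rooms.delete(roomId) // drop empty rooms
}

function roomMembers(roomId: string): string[] {
  return [...(rooms.get(roomId) ?? [])]
}

// broadcastToRoom then resolves members to sockets via the `clients` map:
// for (const userId of roomMembers(roomId)) {
//   clients.get(userId)?.send(JSON.stringify(data))
// }
```

On disconnect, iterate the user's rooms and call leaveRoom so membership does not leak.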

Browser client with reconnection

// Browser WebSocket with auto-reconnect (EventSource reconnects automatically,
// WebSocket does NOT — you must implement it):

class ReconnectingWebSocket {
  private ws: WebSocket | null = null
  private reconnectDelay = 1000
  private maxDelay = 30000

  constructor(
    private url: string,
    private onMessage: (data: unknown) => void
  ) {
    this.connect()
  }

  private connect() {
    this.ws = new WebSocket(this.url)

    this.ws.onopen = () => {
      console.log("WebSocket connected")
      this.reconnectDelay = 1000  // Reset delay on successful connection
    }

    this.ws.onmessage = (event) => {
      const data = JSON.parse(event.data)
      this.onMessage(data)
    }

    this.ws.onclose = () => {
      console.log(`Disconnected. Reconnecting in ${this.reconnectDelay}ms...`)
      setTimeout(() => {
        this.reconnectDelay = Math.min(this.reconnectDelay * 2, this.maxDelay)
        this.connect()
      }, this.reconnectDelay)
    }

    this.ws.onerror = (err) => {
      console.error("WebSocket error:", err)
    }
  }

  send(data: unknown) {
    if (this.ws?.readyState === WebSocket.OPEN) {
      this.ws.send(JSON.stringify(data))
    }
  }

  close() {
    this.ws?.close()
  }
}
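One refinement worth noting: the class above doubles the delay deterministically, so after a server restart every client reconnects at the same instants (a "thundering herd"). Production reconnect logic usually adds jitter. A small pure helper (hypothetical, with an injectable random source so it can be tested) computing a "full jitter" delay:

```typescript
// Exponential backoff with full jitter: pick a uniformly random delay in
// [0, min(maxDelay, base * 2^attempt)). `random` defaults to Math.random.
function backoffDelay(
  attempt: number,
  base = 1000,
  maxDelay = 30_000,
  random: () => number = Math.random
): number {
  const cap = Math.min(maxDelay, base * 2 ** attempt)
  return Math.floor(random() * cap)
}
```

In onclose, replace the fixed doubling with `setTimeout(() => this.connect(), backoffDelay(this.attempts++))` and reset `attempts` in onopen.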

Long Polling

// Long polling — the legacy pattern, use SSE instead

// Server:
app.get("/api/updates", async (req, res) => {
  const lastEventId = req.query.lastId as string

  // Wait for new data (up to 30 seconds):
  const update = await waitForUpdate(lastEventId, 30_000)

  if (update) {
    res.json({ data: update, id: update.id })
  } else {
    // Timeout — client should reconnect:
    res.status(204).send()
  }
})

// Client:
async function longPoll(lastId: string) {
  while (true) {
    try {
      const res = await fetch(`/api/updates?lastId=${lastId}`)

      if (res.status === 200) {
        const { data, id } = await res.json()
        processUpdate(data)
        lastId = id
      }
      // 204: timeout, immediately reconnect
    } catch {
      // Error: wait before retry
      await new Promise((r) => setTimeout(r, 5000))
    }
  }
}
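The server handler above assumes a waitForUpdate helper that resolves when new data arrives or the window times out. A sketch of one way to build it (assuming an in-process EventEmitter as the update source — a real deployment would back this with a queue or a database poll):

```typescript
import { EventEmitter } from "node:events"

// Hypothetical in-process update bus.
const updates = new EventEmitter()

// Resolve with the next update, or null if none arrives within timeoutMs.
function waitForUpdate(
  lastId: string,
  timeoutMs: number
): Promise<{ id: string; payload: unknown } | null> {
  return new Promise((resolve) => {
    const timer = setTimeout(() => {
      updates.off("update", onUpdate)
      resolve(null) // timeout: handler responds 204, client re-polls
    }, timeoutMs)

    function onUpdate(update: { id: string; payload: unknown }) {
      if (update.id === lastId) return // skip the event the client already has
      clearTimeout(timer)
      updates.off("update", onUpdate)
      resolve(update)
    }

    updates.on("update", onUpdate)
  })
}
```

Producers then publish with `updates.emit("update", { id, payload })`, and every held poll request for other clients resolves at once.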

// Why SSE is better than long polling:
// - SSE maintains one connection; long polling creates a new request for every event
// - SSE events arrive immediately; long polling has ~100ms overhead per poll
// - SSE is built into browsers with auto-reconnect; long polling requires custom code
// - Long polling creates unnecessary server load at scale

Feature Comparison

Feature            | SSE              | WebSocket            | Long Polling
Direction          | Server → Client  | Bi-directional       | Server → Client
Protocol           | HTTP             | ws:// / wss://       | HTTP
Auto-reconnect     | ✅ Built-in      | ❌ Manual            | ❌ Manual
Custom headers     | ❌ (EventSource) | ❌ (browser API)     | ✅ (fetch)
Binary data        | ❌ (text only)   | ✅                   | ✅
Proxy/firewall     | ✅               | ⚠️ Sometimes blocked | ✅
Edge runtime       | ✅               | ❌ (mostly)          | ✅
Server complexity  | Low              | High                 | Medium
Browser support    | ✅ All           | ✅ All               | ✅ All
AI streaming       | ✅ Standard      | Rare                 | Legacy

When to Use Each

Choose SSE if:

  • Server pushing updates to clients (notifications, live feeds, analytics dashboards)
  • AI/LLM streaming responses — this is the industry standard
  • You need edge runtime compatibility (Vercel Edge, Cloudflare Workers)
  • Clients don't need to send frequent messages back
  • You want automatic reconnection for free

Choose WebSocket if:

  • True bi-directional communication where the client sends lots of messages
  • Real-time collaboration (Google Docs-style concurrent editing)
  • Multiplayer games with frequent client→server updates (player position, inputs)
  • Chat apps where typing indicators and message sending are frequent
  • Binary data streaming (audio, video frames)

Choose long polling if:

  • Supporting environments where SSE is unreliable (very old proxies, IE11 — rare in 2026)
  • You're maintaining legacy code and can't change the protocol
  • Third-party constraints prevent using SSE or WebSocket

The 2026 recommendation:

Need real-time data? Start with SSE.
Does the CLIENT need to send frequent data too? Use WebSocket.
Are you building an AI chat interface? SSE (via Vercel AI SDK or similar).
Legacy browser support concerns? Still probably SSE (IE11 is gone in 2026).

Production Infrastructure Considerations for SSE

Server-Sent Events look deceptively simple but require careful infrastructure configuration in production. Load balancers and reverse proxies (nginx, AWS ALB, Cloudflare) typically have request timeout settings that terminate long-lived connections — you must configure these timeouts to be much longer than your keepalive interval, or disabled entirely for streaming endpoints. Nginx requires proxy_buffering off and proxy_read_timeout 3600s for SSE routes. AWS ALB has a maximum idle timeout of 4000 seconds, so send keepalive comments (:\n\n) more frequently than that interval. Another production concern: SSE connections consume one of the browser's six per-origin HTTP/1.1 connection slots. On HTTP/2, this limit disappears since HTTP/2 multiplexes streams over a single connection — making SSE dramatically more scalable on modern infrastructure.
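Sending those keepalive comments from a streaming handler takes only a few lines. A hedged sketch (the 15-second default is an assumption — it just needs to be shorter than the smallest proxy timeout on the path):

```typescript
// SSE comment lines start with ':' and are ignored by EventSource,
// so they keep intermediaries from timing out an otherwise idle stream.
function startKeepalive(
  enqueue: (chunk: string) => void,
  intervalMs = 15_000
): () => void {
  const timer = setInterval(() => enqueue(": keepalive\n\n"), intervalMs)
  return () => clearInterval(timer) // call this on abort/close
}
```

In the Next.js route above you would call `startKeepalive((s) => controller.enqueue(encoder.encode(s)))` and invoke the returned stop function in the abort listener.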

WebSocket Scaling Architecture

WebSockets in horizontally scaled deployments require sticky sessions or a pub/sub backbone. Sticky sessions (also called session affinity) pin each WebSocket connection to a specific server instance — simple to configure at the load balancer but problematic when instances restart or scale down. The more resilient approach uses a message broker (Redis pub/sub, NATS, or Socket.io's Redis adapter) to relay messages across instances: any server can receive a message from a client and publish it to the broker, where all other servers subscribe and forward it to their connected clients. For managed WebSocket infrastructure, services like Ably and Pusher handle the pub/sub complexity so you don't have to manage it yourself. The ws library itself is single-server; Socket.io wraps ws and offers a Redis adapter, which is why many teams reach for Socket.io when they need horizontal scaling.

AI Streaming: Why SSE Won

The 2024-2026 explosion of AI chat interfaces has made SSE the most discussed real-time protocol in frontend development. OpenAI's API, Anthropic's API, and virtually every LLM provider streams responses using the text/event-stream content type. The Vercel AI SDK, LangChain.js, and similar frameworks abstract SSE behind helper functions, but the protocol underneath is always SSE. The reason AI streaming uses SSE rather than WebSockets is operational simplicity: streaming a text response from server to client is inherently one-directional. The user sends one HTTP POST with their message, and the server streams back tokens as they're generated. This request/response model fits HTTP perfectly — no upgrade handshake, no persistent connection management, no reconnection logic on the server. The client-side EventSource API reconnects automatically if the connection drops mid-stream, and servers that support resumption can use the Last-Event-ID header to continue a partial response (POST-based chat streams use fetch rather than EventSource, so the SDKs handle retries themselves).

Long Polling Performance Characteristics

Long polling's resource consumption profile is more nuanced than "just use SSE." Each long-poll request occupies a server thread or event loop slot while waiting for data — at 1000 concurrent users with 30-second poll windows, that's 1000 simultaneous held connections per server. Node.js's event-driven architecture handles this reasonably well since most of the wait is async, but it's still more resource-intensive than SSE's single-connection approach. The per-event overhead adds up: each event delivered via long polling incurs a complete HTTP request/response cycle (TCP handshake if not keep-alive, HTTP headers, TLS if applicable, parsing overhead). SSE amortizes these costs across the lifetime of a single connection. In 2026, long polling is essentially legacy except for one specific case: some enterprise environments with aggressive proxy timeouts that terminate connections before SSE keepalives can maintain them.

TypeScript Client Patterns

The browser's built-in EventSource API lacks TypeScript generics, which makes typed SSE client code verbose. A practical pattern is a typed SSE client factory that wraps EventSource and provides strongly-typed event callbacks. For fetch-based SSE with custom headers, the pattern involves reading the ReadableStream body and parsing the data: lines manually — this gives you full TypeScript control over the event payload types. Libraries like eventsource-parser on npm handle the protocol parsing for you, letting you focus on transforming parsed events into typed objects. The ws WebSocket library ships with first-class TypeScript types, and React data libraries like @tanstack/react-query can treat WebSocket messages as invalidation triggers — when a message arrives, invalidate the relevant query cache to trigger a refetch.
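The typed-callback pattern described above can be split into a pure, testable dispatcher plus a thin EventSource wrapper. A sketch (the StreamEvents map and helper names are illustrative, not from any library):

```typescript
// Map event names to payload types for this hypothetical stream.
type StreamEvents = {
  health_update: { name: string; score: number }
  alert: { message: string }
}

// Pure dispatcher: registers typed handlers and fans out parsed payloads.
function makeDispatcher<E extends Record<string, unknown>>() {
  const handlers: { [K in keyof E]?: Array<(data: E[K]) => void> } = {}
  return {
    on<K extends keyof E>(event: K, handler: (data: E[K]) => void) {
      const list = handlers[event] ?? []
      list.push(handler)
      handlers[event] = list
    },
    // Parse a raw `data:` payload and invoke every handler for the event.
    dispatch<K extends keyof E>(event: K, raw: string) {
      const data = JSON.parse(raw) as E[K]
      for (const handler of handlers[event] ?? []) handler(data)
    },
  }
}

// Wiring to EventSource in the browser:
// const d = makeDispatcher<StreamEvents>()
// es.addEventListener("alert", (e) => d.dispatch("alert", e.data))
// d.on("alert", (a) => showNotification(a.message))  // `a` is fully typed
```

Keeping the dispatcher separate from EventSource means the parsing and fan-out logic can be unit-tested in Node without a browser.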

Choosing the Right Protocol for Your Architecture

The practical decision framework for 2026 is simpler than protocol specifications suggest. Start by asking whether your use case is fundamentally push-based (server sends updates to clients without client requests) or conversational (client and server exchange messages rapidly). Push-based use cases — notifications, live data feeds, analytics dashboards, activity streams, AI response streaming — are all well served by SSE with lower infrastructure cost and complexity than WebSockets. Conversational use cases — collaborative editing, multiplayer games, chat applications with typing indicators — genuinely need WebSockets because the client sends many messages back to the server and the overhead of HTTP requests per message is prohibitive. When in doubt, start with SSE: it is easier to add WebSocket functionality later than to explain to your infrastructure team why you're running a stateful WebSocket cluster when SSE would have served your needs.

Connection Limits and Browser Behavior

An important practical constraint for SSE: browsers enforce a limit of six concurrent connections per origin under HTTP/1.1, and each SSE EventSource counts as one connection. If a user opens your app in multiple browser tabs, each tab establishes its own SSE connection. With six tabs open, the seventh tab will queue its SSE connection until another closes. HTTP/2 solves this completely since multiple SSE streams multiplex over a single TCP connection — configure your server (nginx, Caddy, Node.js HTTP/2) to serve over HTTP/2 in production to avoid this tab count limitation. WebSockets have no equivalent browser-imposed limit since each WebSocket connection is a separate TCP connection that browsers do not restrict in the same way. For applications where users commonly have many tabs open (dashboards, developer tools), this HTTP/1.1 SSE connection limit is a concrete reason to prefer WebSockets or ensure HTTP/2 is enabled end-to-end including through any reverse proxies.

Compare real-time and API packages on PkgPulse →

See also: simple-peer vs PeerJS vs mediasoup, pm2 vs node:cluster vs tsx watch, and better-sqlite3 vs libsql vs sql.js.
