SSE vs WebSocket vs Long Polling: Real-time Communication in 2026
TL;DR
Server-Sent Events (SSE) are the best choice for one-directional server-to-client streaming — they're built on HTTP, reconnect automatically, work through proxies, and are perfect for notifications, live feeds, and AI streaming responses. WebSockets are the best choice for true bi-directional communication — chat, collaborative editing, multiplayer games, and live cursors. Long polling is the legacy fallback — it works everywhere but is inefficient. In 2026, SSE covers the large majority of real-time use cases for which developers reach for WebSockets, with far less infrastructure complexity.
Key Takeaways
- SSE: Built into browsers, HTTP/1.1+, auto-reconnect, one-way (server → client), proxies ✅
- WebSocket: Full-duplex, separate protocol, no auto-reconnect, firewall/proxy issues
- Long polling: Universal compatibility, but inefficient — an HTTP request per update
- AI streaming (ChatGPT, Vercel AI SDK) uses SSE — not WebSockets
- SSE works out of the box on Vercel Edge Functions and Cloudflare Workers — WebSockets typically need extra infrastructure (Vercel functions don't hold WebSocket connections; stateful connections on Cloudflare need Durable Objects)
- Use WebSockets only when you need the client to send frequent messages back to the server
Protocol Comparison
Server-Sent Events (SSE):
Client ──── GET /stream ──→ Server
Client ←── data: {...}\n\n── Server (repeated, single long response)
HTTP/1.1 — no protocol upgrade, just a long-lived response
Auto-reconnect: built into EventSource API
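The wire format behind this diagram is plain text: optional `event:`, `id:`, and `retry:` fields, then one or more `data:` lines, with a blank line terminating each event. If the server sends `id:`, the browser echoes it back in a `Last-Event-ID` header when it auto-reconnects, letting the server resume the stream where it left off. A minimal frame serializer sketch (the `SSEFrame` type and function name are illustrative, not part of any API):

```typescript
// Serialize one SSE event per the text/event-stream format:
// optional event/id/retry fields, then data, then a blank line.
type SSEFrame = { data: unknown; event?: string; id?: string; retry?: number }

function serializeSSE(frame: SSEFrame): string {
  let out = ""
  if (frame.event) out += `event: ${frame.event}\n`
  if (frame.id) out += `id: ${frame.id}\n`
  if (frame.retry !== undefined) out += `retry: ${frame.retry}\n` // reconnect delay hint (ms)
  out += `data: ${JSON.stringify(frame.data)}\n\n` // blank line ends the event
  return out
}
```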
WebSocket:
Client ──── GET /ws ──────→ Server (HTTP Upgrade request)
Client ←─ 101 Switching ──→ Server (protocol upgrade)
Client ←──────────────────→ Server (full-duplex binary frames)
No auto-reconnect — must implement yourself
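The `101 Switching Protocols` step is a small challenge/response: the client sends a random `Sec-WebSocket-Key` header, and the server must reply with the base64-encoded SHA-1 of that key concatenated with a fixed GUID (RFC 6455). A sketch using Node's built-in crypto:

```typescript
import { createHash } from "node:crypto"

// Fixed GUID from RFC 6455 — the same for every WebSocket handshake:
const WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

// Compute the Sec-WebSocket-Accept value the server must return to
// prove it actually understood the WebSocket upgrade request:
function secWebSocketAccept(clientKey: string): string {
  return createHash("sha1").update(clientKey + WS_GUID).digest("base64")
}
```

For the RFC's sample key `dGhlIHNhbXBsZSBub25jZQ==` this yields `s3pPLMBiTxaQ9kYGzzhZRbK+xOo=`. Libraries like `ws` do this for you; it's shown here only to make the upgrade step concrete.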
Long Polling:
Client ──── GET /poll ────→ Server
Server holds connection open until data available (or timeout)
Client ←── response ────── Server
Client ──── GET /poll ────→ Server (immediately repeat)
Very inefficient — one request per event
Server-Sent Events
Browser client (EventSource API)
// Browser: built-in EventSource API — no library needed
// Connect to SSE endpoint:
const eventSource = new EventSource("https://api.pkgpulse.com/packages/stream")
// Default event (text/event-stream with no `event:` type):
eventSource.onmessage = (event) => {
const data = JSON.parse(event.data)
console.log("Package update:", data)
}
// Named events (server sends `event: health_update`):
eventSource.addEventListener("health_update", (event) => {
const update = JSON.parse(event.data)
updatePackageCard(update.name, update.score)
})
eventSource.addEventListener("alert", (event) => {
const alert = JSON.parse(event.data)
showNotification(alert.message)
})
// Error handling:
eventSource.onerror = () => {
  // EventSource auto-reconnects after transient network errors, so most
  // errors need no handling. CLOSED means it has given up (e.g. the server
  // returned a non-200 response) and will NOT reconnect:
  if (eventSource.readyState === EventSource.CLOSED) {
    console.log("Connection closed — will not reconnect")
  }
}
// Close:
eventSource.close()
SSE with authentication (EventSource doesn't support headers)
// EventSource doesn't support custom headers — use query params or cookies:
// Option 1: token in the URL (works, but query strings end up in server logs
// and browser history — prefer short-lived tokens):
const eventSource = new EventSource(`/api/stream?token=${accessToken}`)
// Option 2: use fetch() with a ReadableStream instead of EventSource:
async function connectSSE(onMessage: (data: unknown) => void) {
  const response = await fetch("/api/packages/stream", {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      Accept: "text/event-stream",
    },
  })
  const reader = response.body!.getReader()
  const decoder = new TextDecoder()
  let buffer = ""
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    // Chunks can split an event anywhere — buffer until the blank-line
    // terminator arrives, and keep any partial event for the next chunk:
    buffer += decoder.decode(value, { stream: true })
    const events = buffer.split("\n\n")
    buffer = events.pop() ?? ""
    for (const event of events) {
      for (const line of event.split("\n")) {
        if (line.startsWith("data: ")) {
          onMessage(JSON.parse(line.slice(6)))
        }
      }
    }
  }
}
Next.js App Router SSE route handler
// app/api/packages/stream/route.ts
export const runtime = "edge" // Works on edge, not just Node.js
export async function GET(request: Request) {
const { searchParams } = new URL(request.url)
const packageName = searchParams.get("package") ?? "react"
const stream = new ReadableStream({
async start(controller) {
const encoder = new TextEncoder()
// Helper to send SSE events:
function send(data: unknown, event?: string) {
let message = ""
if (event) message += `event: ${event}\n`
message += `data: ${JSON.stringify(data)}\n\n`
controller.enqueue(encoder.encode(message))
}
// Send initial data:
const initial = await fetchPackageHealth(packageName)
send(initial, "health_update")
      // Poll and send updates every 5 seconds:
      const interval = setInterval(async () => {
        try {
          const update = await fetchPackageHealth(packageName)
          send(update, "health_update")
        } catch (err) {
          send({ error: "Failed to fetch update" }, "error")
          close()
        }
      }, 5000)
      // Close exactly once — clearing the interval and closing the controller
      // twice (error path + client abort) would throw:
      let closed = false
      function close() {
        if (closed) return
        closed = true
        clearInterval(interval)
        controller.close()
      }
      // Clean up when client disconnects:
      request.signal.addEventListener("abort", () => close())
},
})
return new Response(stream, {
headers: {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache",
Connection: "keep-alive",
},
})
}
AI streaming with SSE (the main use case in 2026)
// OpenAI's API streams completions as SSE; the Vercel AI SDK relays them to
// the browser as a streamed HTTP response. (API names change across SDK
// versions — `streamText`/`toDataStreamResponse` below are the AI SDK v4
// shape; the older `OpenAIStream` + `StreamingTextResponse` were removed.)
// app/api/chat/route.ts
import { streamText } from "ai"
import { openai } from "@ai-sdk/openai"

export async function POST(request: Request) {
  const { messages } = await request.json()
  const result = streamText({
    model: openai("gpt-4o"),
    messages,
  })
  // Streams tokens to the client incrementally as they arrive from the model:
  return result.toDataStreamResponse()
}
// The client consumes the stream and renders tokens incrementally:
// const { messages } = useChat({ api: "/api/chat" })
WebSocket
Node.js server (ws library)
import { WebSocketServer, WebSocket } from "ws"
import http from "http"
const server = http.createServer()
const wss = new WebSocketServer({ server })
// Track connected clients:
const clients = new Map<string, WebSocket>()
wss.on("connection", (ws, request) => {
const userId = getUserIdFromRequest(request)
clients.set(userId, ws)
console.log(`Client connected: ${userId}`)
// Receive messages from client:
  ws.on("message", (data) => {
    let message
    try {
      message = JSON.parse(data.toString())
    } catch {
      return // Ignore malformed JSON instead of throwing in the handler
    }
    switch (message.type) {
case "subscribe_package":
subscribeUserToPackage(userId, message.packageName)
break
case "chat_message":
broadcastToRoom(message.roomId, {
type: "chat_message",
userId,
text: message.text,
timestamp: Date.now(),
})
break
}
})
ws.on("close", () => {
clients.delete(userId)
unsubscribeUser(userId)
console.log(`Client disconnected: ${userId}`)
})
ws.on("error", (err) => {
console.error(`WebSocket error for ${userId}:`, err)
})
// Send welcome message:
ws.send(JSON.stringify({ type: "connected", userId }))
})
function broadcast(data: unknown) {
const message = JSON.stringify(data)
for (const client of clients.values()) {
if (client.readyState === WebSocket.OPEN) {
client.send(message)
}
}
}
server.listen(8080)
Browser client with reconnection
// Browser WebSocket with auto-reconnect (EventSource reconnects automatically,
// WebSocket does NOT — you must implement it):
class ReconnectingWebSocket {
  private ws: WebSocket | null = null
  private reconnectDelay = 1000
  private maxDelay = 30000
  private closed = false
  constructor(
    private url: string,
    private onMessage: (data: unknown) => void
  ) {
    this.connect()
  }
  private connect() {
    this.ws = new WebSocket(this.url)
    this.ws.onopen = () => {
      console.log("WebSocket connected")
      this.reconnectDelay = 1000 // Reset delay on successful connection
    }
    this.ws.onmessage = (event) => {
      const data = JSON.parse(event.data)
      this.onMessage(data)
    }
    this.ws.onclose = () => {
      if (this.closed) return // Intentional close — don't reconnect
      console.log(`Disconnected. Reconnecting in ${this.reconnectDelay}ms...`)
      setTimeout(() => {
        this.reconnectDelay = Math.min(this.reconnectDelay * 2, this.maxDelay)
        this.connect()
      }, this.reconnectDelay)
    }
    this.ws.onerror = (err) => {
      console.error("WebSocket error:", err)
    }
  }
  send(data: unknown) {
    if (this.ws?.readyState === WebSocket.OPEN) {
      this.ws.send(JSON.stringify(data))
    }
  }
  close() {
    this.closed = true // onclose checks this flag so close() doesn't trigger a reconnect
    this.ws?.close()
  }
}
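One refinement worth noting: when a server restarts, every client reconnects on the same doubling schedule, producing synchronized retry bursts. Adding random jitter spreads the herd out. A sketch (the function and parameter names are illustrative):

```typescript
// "Full jitter" backoff: cap the exponential delay, then pick a uniform
// random point below the cap so clients don't reconnect in lockstep.
function backoffWithJitter(attempt: number, baseDelay = 1000, maxDelay = 30000): number {
  const capped = Math.min(baseDelay * 2 ** attempt, maxDelay)
  return Math.random() * capped
}
```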
Long Polling
// Long polling — the legacy pattern; prefer SSE for new code
// Server (Express):
import express from "express"
const app = express()

app.get("/api/updates", async (req, res) => {
const lastEventId = req.query.lastId as string
// Wait for new data (up to 30 seconds):
const update = await waitForUpdate(lastEventId, 30_000)
if (update) {
res.json({ data: update, id: update.id })
} else {
// Timeout — client should reconnect:
res.status(204).send()
}
})
// Client:
async function longPoll(lastId: string) {
while (true) {
try {
const res = await fetch(`/api/updates?lastId=${lastId}`)
if (res.status === 200) {
const { data, id } = await res.json()
processUpdate(data)
lastId = id
}
// 204: timeout, immediately reconnect
} catch {
// Error: wait before retry
await new Promise((r) => setTimeout(r, 5000))
}
}
}
// Why SSE is better than long polling:
// - SSE maintains one connection; long polling creates a new request for every event
// - SSE events arrive the moment they happen; each long-poll update first pays a full HTTP round trip
// - SSE is built into browsers with auto-reconnect; long polling requires custom code
// - Long polling creates unnecessary server load at scale
Feature Comparison
| Feature | SSE | WebSocket | Long Polling |
|---|---|---|---|
| Direction | Server → Client | Bi-directional | Server → Client |
| Protocol | HTTP | ws:// / wss:// | HTTP |
| Auto-reconnect | ✅ Built-in | ❌ Manual | ❌ Manual |
| Custom headers | ❌ (EventSource) | ✅ | ✅ |
| Binary data | ❌ (text only) | ✅ | ✅ |
| Proxy/firewall | ✅ | ⚠️ Sometimes blocked | ✅ |
| Edge runtime | ✅ | ❌ (mostly) | ✅ |
| Server complexity | Low | High | Medium |
| Browser support | ✅ All | ✅ All | ✅ All |
| AI streaming | ✅ Standard | ❌ | ❌ |
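The ❌ for binary data over SSE is a soft limit rather than a hard one: the stream must be UTF-8 text, but binary payloads can ride inside the `data:` field as base64 at roughly 33% size overhead. A sketch — the function names and the `{ encoding, data }` envelope are illustrative, not any standard:

```typescript
// Encode raw bytes as a base64 string inside an SSE data field:
function encodeBinaryEvent(bytes: Uint8Array): string {
  const payload = { encoding: "base64", data: Buffer.from(bytes).toString("base64") }
  return `data: ${JSON.stringify(payload)}\n\n`
}

// Reverse: strip the "data: " prefix, parse the JSON, decode the base64:
function decodeBinaryEvent(frame: string): Uint8Array {
  const payload = JSON.parse(frame.replace(/^data: /, "").trim())
  return new Uint8Array(Buffer.from(payload.data, "base64"))
}
```

If payloads are large or frequent, that overhead is one of the better reasons to pick WebSockets instead.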
When to Use Each
Choose SSE if:
- Server pushing updates to clients (notifications, live feeds, analytics dashboards)
- AI/LLM streaming responses — this is the industry standard
- You need edge runtime compatibility (Vercel Edge, Cloudflare Workers)
- Clients don't need to send frequent messages back
- You want automatic reconnection for free
Choose WebSocket if:
- True bi-directional communication where the client sends lots of messages
- Real-time collaboration (Google Docs-style concurrent editing)
- Multiplayer games with frequent client→server updates (player position, inputs)
- Chat apps where typing indicators and message sending are frequent
- Binary data streaming (audio, video frames)
Choose long polling if:
- Supporting environments where SSE is unreliable (very old proxies, IE11 — rare in 2026)
- You're maintaining legacy code and can't change the protocol
- Third-party constraints prevent using SSE or WebSocket
The 2026 recommendation:
- Need real-time data? Start with SSE.
- Does the client need to send frequent data too? Use WebSocket.
- Building an AI chat interface? SSE (via the Vercel AI SDK or similar).
- Worried about legacy browser support? Still probably SSE — IE11 is gone in 2026.
Methodology
Feature comparison based on browser EventSource API, ws v8.x WebSocket library, and HTTP/1.1 specification. Edge runtime support based on Vercel and Cloudflare Workers documentation as of February 2026.