
Guide

LiveKit vs Agora vs 100ms (2026)

LiveKit vs Agora vs 100ms compared for real-time video and audio. WebRTC infrastructure, React/Next.js SDK, recording, multi-party calls, and pricing in 2026.

PkgPulse Team

TL;DR

Building WebRTC from scratch is a six-month project — ICE candidates, STUN/TURN servers, SFU media routing, and codec negotiation before you write a single line of product code. Real-time video platforms abstract this away. LiveKit is the open-source WebRTC infrastructure platform — a Selective Forwarding Unit (SFU) that you can self-host or use as a cloud service, with SDKs for React, React Native, Python, and Go, and a focus on AI voice agents and real-time media pipelines. Agora is the enterprise-grade platform — 200+ PoPs globally, sub-second latency in challenging network conditions, and the most comprehensive SDK ecosystem (Web, iOS, Android, Windows, Unity, Flutter, React Native). 100ms is the developer-friendly option — clean React SDK, pre-built Room UI components, virtual backgrounds, and recording built-in; designed for quick integration rather than maximum configurability. For open-source / self-hostable WebRTC infrastructure: LiveKit. For global scale and reliability in poor network conditions: Agora. For fastest time to a working video call UI: 100ms.

Key Takeaways

  • LiveKit is open-source — self-host the SFU, or use LiveKit Cloud
  • Agora has 200+ PoPs worldwide — lowest latency in Asia, LatAm, and Africa
  • 100ms pre-built UI components: <HMSPrebuilt /> ships a full call UI in one line
  • LiveKit Agents framework — purpose-built for AI voice agents (STT → LLM → TTS pipeline)
  • Agora supports 1 million concurrent channel users — enterprise broadcast at scale
  • 100ms recording — automatic cloud recording with S3 output out of the box
  • LiveKit pricing: $0.002/participant-minute video, $0.00025/participant-minute audio

Use Cases by Provider

Voice AI agents (LLM + STT + TTS)          → LiveKit (Agents framework)
Consumer app: video calls, live streaming    → Agora (global network)
Developer prototype: call UI fast           → 100ms (pre-built components)
Self-hosted WebRTC (no vendor lock-in)      → LiveKit
Enterprise scale (1M concurrent users)      → Agora
Virtual events and webinars                 → Agora or 100ms
React-first application                     → LiveKit or 100ms

LiveKit: Open-Source WebRTC Infrastructure

LiveKit is an open-source SFU server with cloud and self-hosted options. Its React SDK and Agents framework make it a top choice for voice AI applications.

Installation

npm install livekit-client @livekit/components-react

Basic Room Connection

import { Room, RoomEvent, Track } from "livekit-client";

async function joinRoom(url: string, token: string) {
  const room = new Room({
    adaptiveStream: true,
    dynacast: true,        // Automatically reduce quality for invisible participants
    videoCaptureDefaults: {
      resolution: { width: 1280, height: 720, frameRate: 30 },
    },
  });

  // Event listeners
  room.on(RoomEvent.TrackSubscribed, (track, publication, participant) => {
    if (track.kind === Track.Kind.Video) {
      const element = track.attach();  // Returns <video> element
      document.getElementById("remote-video")?.appendChild(element);
    } else if (track.kind === Track.Kind.Audio) {
      track.attach();  // Auto-plays audio
    }
  });

  room.on(RoomEvent.ParticipantConnected, (participant) => {
    console.log("Participant joined:", participant.identity);
  });

  room.on(RoomEvent.Disconnected, () => {
    console.log("Disconnected from room");
  });

  await room.connect(url, token);

  // Enable camera + microphone
  await room.localParticipant.enableCameraAndMicrophone();

  return room;
}

React Components (@livekit/components-react)

import {
  LiveKitRoom,
  VideoConference,
  GridLayout,
  ParticipantTile,
  useTracks,
  ControlBar,
} from "@livekit/components-react";
import { Track } from "livekit-client";       // Track.Source used in useTracks below
import { useRouter } from "next/navigation";  // App Router navigation
import "@livekit/components-styles";

// Full video conference with one component
function VideoCallPage({ token }: { token: string }) {
  const router = useRouter();

  return (
    <LiveKitRoom
      serverUrl={process.env.NEXT_PUBLIC_LIVEKIT_URL!}
      token={token}
      connect={true}
      video={true}
      audio={true}
      onDisconnected={() => router.push("/")}
    >
      <VideoConference />  {/* Includes grid, controls, chat */}
    </LiveKitRoom>
  );
}

// Custom layout with granular control
function CustomVideoLayout({ token }: { token: string }) {
  return (
    <LiveKitRoom serverUrl={process.env.NEXT_PUBLIC_LIVEKIT_URL!} token={token}>
      <RoomContent />
    </LiveKitRoom>
  );
}

function RoomContent() {
  const tracks = useTracks([Track.Source.Camera, Track.Source.ScreenShare]);

  return (
    <div style={{ display: "flex", flexDirection: "column", height: "100vh" }}>
      <GridLayout tracks={tracks}>
        <ParticipantTile />
      </GridLayout>
      <ControlBar />
    </div>
  );
}

Generate Token (Server-Side)

// app/api/livekit/token/route.ts
import { AccessToken } from "livekit-server-sdk";

export async function GET(req: Request) {
  const { searchParams } = new URL(req.url);
  const room = searchParams.get("room") ?? "default-room";
  const username = searchParams.get("username") ?? "anonymous";

  if (!process.env.LIVEKIT_API_KEY || !process.env.LIVEKIT_API_SECRET) {
    return Response.json({ error: "Server misconfigured" }, { status: 500 });
  }

  const at = new AccessToken(
    process.env.LIVEKIT_API_KEY,
    process.env.LIVEKIT_API_SECRET,
    {
      identity: username,
      ttl: "10m",
    }
  );

  at.addGrant({
    roomJoin: true,
    room,
    canPublish: true,
    canSubscribe: true,
    canPublishData: true,
  });

  const token = await at.toJwt();
  return Response.json({ token, url: process.env.NEXT_PUBLIC_LIVEKIT_URL });
}
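On the client, the route above is consumed with a plain fetch. A minimal sketch (the route path and query parameters match the handler shown here; `tokenUrl` and `fetchLiveKitToken` are illustrative helper names, and the returned `token`/`url` pair is what you pass to `room.connect()` from the Basic Room Connection example):

```typescript
// Build the query URL for the token route above
function tokenUrl(roomName: string, username: string): string {
  const params = new URLSearchParams({ room: roomName, username });
  return `/api/livekit/token?${params.toString()}`;
}

// Fetch a short-lived token; hand the result to room.connect(url, token)
async function fetchLiveKitToken(roomName: string, username: string) {
  const res = await fetch(tokenUrl(roomName, username));
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  return (await res.json()) as { token: string; url: string };
}
```

Because the token carries a 10-minute TTL, fetch it immediately before connecting rather than caching it client-side.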

LiveKit Agents (Voice AI Pipeline)

# LiveKit Agents — Python SDK for AI voice pipelines
# pip install livekit-agents[openai,deepgram]
from livekit import agents
from livekit.agents import AgentSession, Agent, RoomInputOptions
from livekit.plugins import openai, deepgram, silero

async def entrypoint(ctx: agents.JobContext):
    await ctx.connect()

    session = AgentSession(
        stt=deepgram.STT(model="nova-2"),         # Speech-to-text
        llm=openai.LLM(model="gpt-4o-mini"),       # Language model
        tts=openai.TTS(voice="nova"),               # Text-to-speech
        vad=silero.VAD.load(),                      # Voice activity detection
    )

    await session.start(
        room=ctx.room,
        agent=Agent(instructions="You are a helpful customer service agent."),
        room_input_options=RoomInputOptions(),  # Noise cancellation is available via the livekit-plugins-noise-cancellation plugin
    )

if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))

Agora: Enterprise-Grade Global Network

Agora provides real-time audio and video with 200+ global PoPs and the most extensive platform support (iOS, Android, Web, Windows, Unity, Flutter, React Native).

Installation

npm install agora-rtc-react
# Or plain SDK:
npm install agora-rtc-sdk-ng

React Hook Approach (agora-rtc-react)

import { useEffect } from "react";
import AgoraRTC, {
  AgoraRTCProvider,
  LocalVideoTrack,
  RemoteUser,
  useRTCClient,
  useLocalMicrophoneTrack,
  useLocalCameraTrack,
  usePublish,
  useRemoteUsers,
} from "agora-rtc-react";

const client = AgoraRTC.createClient({ mode: "rtc", codec: "vp8" });

export function VideoCall({
  appId,
  channel,
  token,
}: {
  appId: string;
  channel: string;
  token: string;
}) {
  return (
    <AgoraRTCProvider client={client}>
      <VideoCallContent appId={appId} channel={channel} token={token} />
    </AgoraRTCProvider>
  );
}

function VideoCallContent({ appId, channel, token }: { appId: string; channel: string; token: string }) {
  const client = useRTCClient();

  const { localMicrophoneTrack } = useLocalMicrophoneTrack(true);
  const { localCameraTrack } = useLocalCameraTrack(true);
  const remoteUsers = useRemoteUsers();

  usePublish([localMicrophoneTrack, localCameraTrack]);

  useEffect(() => {
    client.join(appId, channel, token, null);
    return () => { client.leave(); };
  }, []);

  return (
    <div style={{ display: "flex", flexWrap: "wrap" }}>
      {/* Local video */}
      <LocalVideoTrack track={localCameraTrack} play={true} style={{ width: 300, height: 200 }} />

      {/* Remote videos */}
      {remoteUsers.map((user) => (
        <RemoteUser key={user.uid} user={user} style={{ width: 300, height: 200 }} />
      ))}
    </div>
  );
}

Advanced: Screen Share

import AgoraRTC, { IAgoraRTCClient, ICameraVideoTrack, ILocalVideoTrack } from "agora-rtc-sdk-ng";

async function startScreenShare(client: IAgoraRTCClient, localCameraTrack: ICameraVideoTrack) {
  // Create screen share track. With "auto", the SDK may return a
  // [video, audio] tuple when system audio capture is available.
  const result = await AgoraRTC.createScreenVideoTrack(
    {
      encoderConfig: "1080p_1",
      optimizationMode: "detail",  // "motion" | "detail"
    },
    "auto"
  );
  const screenTrack: ILocalVideoTrack = Array.isArray(result) ? result[0] : result;

  // Publish screen track (stop camera first)
  await client.unpublish(localCameraTrack);
  await client.publish(screenTrack);

  // Restore the camera when the user stops sharing
  screenTrack.on("track-ended", async () => {
    await client.unpublish(screenTrack);
    screenTrack.close();
    await client.publish(localCameraTrack);
  });

  return screenTrack;
}

Server-Side Token Generation

// Agora token generation (server-side)
import { RtcTokenBuilder, RtcRole } from "agora-token";

export function generateAgoraToken(channelName: string, uid: number): string {
  const appId = process.env.AGORA_APP_ID!;
  const appCertificate = process.env.AGORA_APP_CERTIFICATE!;
  const role = RtcRole.PUBLISHER;

  // agora-token expects lifetimes in seconds from now, not absolute timestamps
  const tokenExpirationInSeconds = 3600;      // 1 hour
  const privilegeExpirationInSeconds = 3600;

  return RtcTokenBuilder.buildTokenWithUid(
    appId,
    appCertificate,
    channelName,
    uid,
    role,
    tokenExpirationInSeconds,
    privilegeExpirationInSeconds
  );
}

100ms: Fastest Time-to-Working-Video-UI

100ms provides pre-built UI components and React hooks for quickly building video call apps, with automatic recording and virtual backgrounds.

Installation

npm install @100mslive/roomkit-react @100mslive/react-sdk

Pre-Built Conference UI (1 Line)

import { HMSPrebuilt } from "@100mslive/roomkit-react";

// Fully featured video call UI — no configuration needed
function ConferencePage({ roomCode }: { roomCode: string }) {
  return (
    <div style={{ height: "100vh" }}>
      <HMSPrebuilt
        roomCode={roomCode}
        options={{
          userName: "Guest",
          endpoints: {
            tokenByRoomCode: "https://auth.100ms.live/v2/token",
          },
        }}
      />
    </div>
  );
}

Custom Room with @100mslive/react-sdk

import {
  HMSRoomProvider,
  useHMSStore,
  useHMSActions,
  useVideo,
  selectCameraStreamByPeerID,
  selectLocalPeer,
  selectRemotePeers,
  selectIsLocalAudioEnabled,
  selectIsLocalVideoEnabled,
} from "@100mslive/react-sdk";

function App() {
  return (
    <HMSRoomProvider>
      <VideoRoom />
    </HMSRoomProvider>
  );
}

function VideoRoom() {
  const hmsActions = useHMSActions();
  const localPeer = useHMSStore(selectLocalPeer);
  const remotePeers = useHMSStore(selectRemotePeers);
  const isAudioOn = useHMSStore(selectIsLocalAudioEnabled);
  const isVideoOn = useHMSStore(selectIsLocalVideoEnabled);

  async function joinRoom() {
    const authToken = await fetchToken();  // From 100ms management API
    await hmsActions.join({
      userName: "Alice",
      authToken,
      settings: {
        isAudioMuted: false,
        isVideoMuted: false,
      },
    });
  }

  async function leaveRoom() {
    await hmsActions.leave();
  }

  return (
    <div>
      {localPeer ? (
        <>
          <VideoTile peerId={localPeer.id} isLocal />
          {remotePeers.map((peer) => (
            <VideoTile key={peer.id} peerId={peer.id} />
          ))}
          <div>
            <button onClick={() => hmsActions.setLocalAudioEnabled(!isAudioOn)}>
              {isAudioOn ? "Mute" : "Unmute"}
            </button>
            <button onClick={() => hmsActions.setLocalVideoEnabled(!isVideoOn)}>
              {isVideoOn ? "Stop Video" : "Start Video"}
            </button>
            <button onClick={leaveRoom}>Leave</button>
          </div>
        </>
      ) : (
        <button onClick={joinRoom}>Join Meeting</button>
      )}
    </div>
  );
}

// useVideo and selectCameraStreamByPeerID come from "@100mslive/react-sdk"
function VideoTile({ peerId, isLocal = false }: { peerId: string; isLocal?: boolean }) {
  // useVideo expects a track id, so resolve the peer's camera track first
  const videoTrack = useHMSStore(selectCameraStreamByPeerID(peerId));
  const { videoRef } = useVideo({ trackId: videoTrack?.id });

  return (
    <video
      ref={videoRef}
      autoPlay
      muted={isLocal}
      playsInline
      style={{ width: 300, height: 200, objectFit: "cover" }}
    />
  );
}

Start Recording

// Server-side: start cloud recording
const response = await fetch(
  `https://api.100ms.live/v2/recordings/room/${roomId}/start`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${managementToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      meeting_url: `https://your-app.100ms.live/meeting/${roomCode}`,
      resolution: { width: 1280, height: 720 },
      storage: {
        type: "s3",
        options: {
          region: "us-east-1",
          bucket: "my-recordings",
          access_key: process.env.AWS_ACCESS_KEY_ID!,
          secret_key: process.env.AWS_SECRET_ACCESS_KEY!,
        },
      },
    }),
  }
);
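Stopping the recording uses the matching stop endpoint. A hedged sketch (endpoint path per the 100ms server API docs; `recordingActionUrl` and `stopRecording` are illustrative helper names):

```typescript
// Build the 100ms recordings endpoint for a room
function recordingActionUrl(roomId: string, action: "start" | "stop"): string {
  return `https://api.100ms.live/v2/recordings/room/${roomId}/${action}`;
}

// Server-side: stop the cloud recording started above
async function stopRecording(roomId: string, managementToken: string) {
  const res = await fetch(recordingActionUrl(roomId, "stop"), {
    method: "POST",
    headers: { Authorization: `Bearer ${managementToken}` },
  });
  if (!res.ok) throw new Error(`Stop recording failed: ${res.status}`);
  return res.json();
}
```

The recording asset lands in the S3 bucket configured at start time once processing finishes.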

Feature Comparison

Feature            | LiveKit            | Agora              | 100ms
Open-source        | ✅                 | ❌                 | ❌
Self-hostable      | ✅                 | ❌                 | ❌
Pre-built UI       | ✅ Basic           | ❌                 | ✅ Full
AI Agents SDK      | ✅ Native          | ❌                 | ❌
Global PoPs        | 12+ (cloud)        | 200+               | 30+
Max participants   | 500+               | 1,000,000+         | 10,000
React Native SDK   | ✅                 | ✅                 | ✅
Recording          | ✅                 | ✅                 | ✅
Virtual background | ✅ (plugin)        | ✅                 | ✅
Screen sharing     | ✅                 | ✅                 | ✅
Live streaming     | ✅                 | ✅                 | ✅
Free tier          | $0 (10k min/month) | $0 (10k min/month) | $0 (10k min/month)
GitHub stars       | 12k                | 200+               | 500+

When to Use Each

Choose LiveKit if:

  • Building AI voice agents (STT → LLM → TTS pipelines via LiveKit Agents)
  • Open-source is a requirement — self-host the SFU, own your infrastructure
  • React-first application with a modern TypeScript stack
  • Real-time media pipelines (transcription, recording, translation in-session)

Choose Agora if:

  • Global reach with lowest latency in Asia, LatAm, Middle East, and Africa
  • Enterprise scale — 1 million+ concurrent users in a single channel (broadcast)
  • Maximum platform support — Unity, Flutter, Windows, macOS, iOS, Android, Web
  • Regulatory requirements in specific regions (Agora has local data centers)

Choose 100ms if:

  • Fastest time to a working video UI — <HMSPrebuilt /> is one line
  • Automatic cloud recording without additional setup
  • Virtual backgrounds and noise cancellation out of the box
  • Clean developer experience over maximum configurability

Network Reliability and Adaptive Quality

WebRTC quality under poor network conditions is where the real differences between these platforms emerge. All three use SFU (Selective Forwarding Unit) architecture — participants send one media stream to the SFU, which selects the appropriate quality rendition for each subscriber based on their available bandwidth.
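The per-subscriber rendition selection described above requires the publisher to send multiple encodings (simulcast). As a minimal sketch in livekit-client (option names per docs.livekit.io; defaults and available presets may differ by SDK version):

```typescript
import { Room, VideoPresets } from "livekit-client";

// Publish multiple simulcast renditions so the SFU can forward an
// appropriate layer per subscriber; adaptiveStream requests quality
// matched to the size of the element each track is rendered in.
const room = new Room({
  adaptiveStream: true,
  publishDefaults: {
    simulcast: true,
    videoSimulcastLayers: [VideoPresets.h180, VideoPresets.h360],
  },
});
```

With this configuration a viewer on a constrained connection receives the 180p layer while others receive full resolution, without any application-level switching logic.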

Agora's global network is the most resilient to challenging conditions. Their SD-RTN (Software Defined Real-time Network) uses proprietary routing across 200+ PoPs to maintain call quality that standard WebRTC cannot achieve on the public internet alone. In regions with high packet loss or unstable last-mile connections — Southeast Asia, parts of Latin America, rural Africa — Agora consistently outperforms pure WebRTC implementations because packets route through their owned network infrastructure rather than the unpredictable public internet. This is Agora's single most important technical advantage. If your user base is concentrated in regions with unreliable connectivity, Agora's private network provides reliability improvements that are measurable and visible in session quality data.

LiveKit uses standard WebRTC over TURN servers for connectivity. Its Dynacast feature reduces bandwidth usage for participants whose video isn't currently visible — participants off-screen receive a lower-quality or paused stream, which helps in large meetings where only the active speaker is displayed. For users in reliable network conditions (North America, Western Europe, urban Asia), LiveKit's quality matches Agora. For users in challenging conditions, Agora's private routing provides a meaningful edge.

100ms is WebRTC-based and similar to LiveKit in network behavior. Its 30 global PoPs are sufficient for North America and Europe; coverage in other regions is less comprehensive than Agora's 200+. For consumer applications targeting global markets, 100ms's network footprint is a real limitation compared to Agora.


AI Voice Pipelines: The New Battleground

The integration of AI into real-time communications has become a major product pattern in 2025-2026, and LiveKit's positioning here is distinctive compared to the other platforms.

LiveKit Agents is a Python-based framework for building voice AI applications on top of WebRTC rooms. The architecture: a user joins a LiveKit room, an AI agent also joins as a participant, the agent receives the user's audio through STT (Deepgram, AssemblyAI, or Whisper), passes it to an LLM (OpenAI, Anthropic, or local), and speaks the response back through TTS (OpenAI, ElevenLabs, or Cartesia). The pipeline runs as a Python process that participates in the LiveKit room like any other user. Interruption handling — where the user speaks over the AI response — is built into the Voice Activity Detection layer and works correctly without application-level logic. End-to-end latency from user speech to agent response starts at 600ms with hosted providers and can reach below 300ms with optimized local models.

Neither Agora nor 100ms offers an equivalent developer-accessible AI agent framework. Agora's Conversational AI Engine exists but is an enterprise product requiring sales engagement, not a self-serve framework. 100ms does not have a voice AI pipeline product. For teams building voice AI assistants, customer service automation, language tutors, or real-time translation — all of which have seen significant investment through 2025-2026 — LiveKit is the only platform with an open, documented framework for this pattern accessible to individual developers.


Cost Modeling at Scale

Pricing for real-time video scales with usage in non-obvious ways, and the cheapest option at small scale may not be cheapest at 100,000 monthly active users.

All three platforms offer a free tier of 10,000 participant-minutes per month. Beyond that, LiveKit charges $0.002 per participant-minute for video and $0.00025 per participant-minute for audio. The participant-minute calculation is multiplicative: in a 10-person video call lasting 60 minutes, each participant subscribes to 9 other video streams, generating 10 × 9 × 60 = 5,400 participant-video-minutes. A 30-minute daily standup for a 10-person team generates roughly 2,700 participant-video-minutes per day, or about 59,400 across 22 working days, putting monthly costs near $120 for video plus $15 for audio at LiveKit's rates.

Agora charges $3.99 per 1,000 minutes for video and $0.99 per 1,000 minutes for audio, with significant volume discounts negotiated at $50K+ monthly spend. At low to medium volume, Agora is more expensive than LiveKit. At enterprise volume with negotiated pricing, the gap narrows — often to parity for customers with committed spend.

100ms charges approximately $4 per 1,000 participant-minutes for video. Recording adds $2 per recorded hour. For applications where recording every call is a product requirement, the recording cost is a meaningful additional line item — at 10,000 one-hour sessions per month, recording alone costs $20,000 per month before any volume discount.
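The multiplicative arithmetic above can be sketched as a back-of-envelope model. Rates are the list prices quoted in this article; each vendor meters somewhat differently, so treat this as an estimate rather than a billing calculator:

```typescript
// LiveKit list prices from this article ($/participant-minute)
const RATES = {
  livekitVideoPerMinute: 0.002,
  livekitAudioPerMinute: 0.00025,
};

// In an SFU call, each of n participants subscribes to n-1 remote streams
function participantStreamMinutes(participants: number, minutes: number): number {
  return participants * (participants - 1) * minutes;
}

function livekitCallCost(participants: number, minutes: number) {
  const streamMinutes = participantStreamMinutes(participants, minutes);
  return {
    video: streamMinutes * RATES.livekitVideoPerMinute,
    audio: streamMinutes * RATES.livekitAudioPerMinute,
  };
}

// 10-person, 60-minute call: 10 * 9 * 60 = 5,400 stream-minutes,
// about $10.80 of video at LiveKit's listed rate
```

Plugging in the daily standup from above (10 people, 30 minutes, 22 working days) reproduces the roughly $120/month video figure.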


Methodology

Data sourced from official LiveKit documentation (docs.livekit.io), Agora documentation (docs.agora.io), 100ms documentation (www.100ms.live/docs), GitHub star counts as of February 2026, pricing pages as of February 2026, and community discussions from the LiveKit Slack, Agora community forums, and r/webdev.


Related: Deepgram vs OpenAI Whisper vs AssemblyAI for the speech-to-text layer used in LiveKit Agents, or ElevenLabs vs OpenAI TTS vs Cartesia for the text-to-speech side of voice AI pipelines.

See also: Mux vs Cloudflare Stream vs Bunny Stream and How to Migrate from Express to Fastify
