
LiveKit vs Agora vs 100ms: Real-Time Video/Audio SDKs 2026

PkgPulse Team


TL;DR

Building WebRTC from scratch is a six-month project — ICE candidates, STUN/TURN servers, SFU media routing, and codec negotiation before you write a single line of product code. Real-time video platforms abstract this away. LiveKit is the open-source WebRTC infrastructure platform — a Selective Forwarding Unit (SFU) you can self-host or use as a cloud service, with SDKs for React, React Native, Python, and Go, and a focus on AI voice agents and real-time media pipelines. Agora is the enterprise-grade platform — 200+ PoPs globally, sub-second latency in challenging network conditions, and the most comprehensive SDK ecosystem (Web, iOS, Android, Windows, Unity, Flutter, React Native). 100ms is the developer-friendly option — a clean React SDK, pre-built room UI components, virtual backgrounds, and built-in recording, designed for quick integration rather than maximum configurability.

  • For open-source / self-hostable WebRTC infrastructure: LiveKit
  • For global scale and reliability in poor network conditions: Agora
  • For the fastest time to a working video call UI: 100ms

Key Takeaways

  • LiveKit is open-source — self-host the SFU, or use LiveKit Cloud
  • Agora has 200+ PoPs worldwide — lowest latency in Asia, LatAm, and Africa
  • 100ms pre-built UI components: <HMSPrebuilt /> ships a full call UI in one line
  • LiveKit Agents framework — purpose-built for AI voice agents (STT → LLM → TTS pipeline)
  • Agora supports 1 million concurrent channel users — enterprise broadcast at scale
  • 100ms recording — automatic cloud recording with S3 output out of the box
  • LiveKit pricing: $0.002/participant-minute video, $0.00025/participant-minute audio

Use Cases by Provider

Voice AI agents (LLM + STT + TTS)          → LiveKit (Agents framework)
Consumer app: video calls, live streaming    → Agora (global network)
Developer prototype: call UI fast           → 100ms (pre-built components)
Self-hosted WebRTC (no vendor lock-in)      → LiveKit
Enterprise scale (1M concurrent users)      → Agora
Virtual events and webinars                 → Agora or 100ms
React-first application                     → LiveKit or 100ms
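The routing table above can be encoded as a small helper for readers who think in code. Everything here is this article's own shorthand (the flag names and `pickProvider` are not part of any SDK), and the tie-breaking order mirrors the table: self-hosting and voice AI point to LiveKit first, then scale to Agora, then UI speed to 100ms.

```typescript
// Hypothetical helper encoding the use-case routing above.
// Flag names are this article's shorthand, not an SDK API.
type Provider = "LiveKit" | "Agora" | "100ms";

interface Requirements {
  voiceAI?: boolean;      // LLM + STT + TTS agent pipeline
  selfHost?: boolean;     // no vendor lock-in, own the SFU
  globalScale?: boolean;  // 1M+ concurrent users, tough networks
  fastestUI?: boolean;    // pre-built call UI, ship today
}

function pickProvider(req: Requirements): Provider {
  if (req.voiceAI || req.selfHost) return "LiveKit";
  if (req.globalScale) return "Agora";
  if (req.fastestUI) return "100ms";
  return "LiveKit"; // per the table, a reasonable default for React-first apps
}
```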

LiveKit: Open-Source WebRTC Infrastructure

LiveKit is an open-source SFU server with cloud and self-hosted options. Its React SDK and Agents framework make it a top choice for voice AI applications.

Installation

npm install livekit-client @livekit/components-react

Basic Room Connection

import { Room, RoomEvent, Track } from "livekit-client";

async function joinRoom(url: string, token: string) {
  const room = new Room({
    adaptiveStream: true,
    dynacast: true,        // Pause publishing video layers that no subscriber is watching
    videoCaptureDefaults: {
      resolution: { width: 1280, height: 720, frameRate: 30 },
    },
  });

  // Event listeners
  room.on(RoomEvent.TrackSubscribed, (track, publication, participant) => {
    if (track.kind === Track.Kind.Video) {
      const element = track.attach();  // Returns <video> element
      document.getElementById("remote-video")?.appendChild(element);
    } else if (track.kind === Track.Kind.Audio) {
      track.attach();  // Auto-plays audio
    }
  });

  room.on(RoomEvent.ParticipantConnected, (participant) => {
    console.log("Participant joined:", participant.identity);
  });

  room.on(RoomEvent.Disconnected, () => {
    console.log("Disconnected from room");
  });

  await room.connect(url, token);

  // Enable camera + microphone
  await room.localParticipant.enableCameraAndMicrophone();

  return room;
}
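Beyond audio and video, a LiveKit room also carries arbitrary data messages: `localParticipant.publishData()` takes a `Uint8Array`, and `RoomEvent.DataReceived` delivers one. A minimal sketch of chat helpers on top of that; the `ChatMessage` shape is our own invention, only the `publishData` / `DataReceived` calls in the comments are LiveKit's API:

```typescript
// Chat payload helpers for LiveKit's data channel.
// ChatMessage is this article's own shape; LiveKit just moves raw bytes.
interface ChatMessage {
  from: string;
  text: string;
  sentAt: number; // epoch ms
}

function encodeChat(msg: ChatMessage): Uint8Array {
  return new TextEncoder().encode(JSON.stringify(msg));
}

function decodeChat(payload: Uint8Array): ChatMessage {
  return JSON.parse(new TextDecoder().decode(payload)) as ChatMessage;
}

// Usage inside a connected room (sketch):
// await room.localParticipant.publishData(
//   encodeChat({ from: "alice", text: "hi", sentAt: Date.now() }),
//   { reliable: true }
// );
// room.on(RoomEvent.DataReceived, (payload) => console.log(decodeChat(payload).text));
```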

React Components (@livekit/components-react)

import {
  LiveKitRoom,
  VideoConference,
  GridLayout,
  ParticipantTile,
  useTracks,
  ControlBar,
} from "@livekit/components-react";
import "@livekit/components-styles";
import { Track } from "livekit-client";
import { useRouter } from "next/navigation";

// Full video conference with one component
function VideoCallPage({ token }: { token: string }) {
  const router = useRouter();

  return (
    <LiveKitRoom
      serverUrl={process.env.NEXT_PUBLIC_LIVEKIT_URL!}
      token={token}
      connect={true}
      video={true}
      audio={true}
      onDisconnected={() => router.push("/")}
    >
      <VideoConference />  {/* Includes grid, controls, chat */}
    </LiveKitRoom>
  );
}

// Custom layout with granular control
function CustomVideoLayout({ token }: { token: string }) {
  return (
    <LiveKitRoom serverUrl={process.env.NEXT_PUBLIC_LIVEKIT_URL!} token={token}>
      <RoomContent />
    </LiveKitRoom>
  );
}

function RoomContent() {
  const tracks = useTracks([Track.Source.Camera, Track.Source.ScreenShare]);

  return (
    <div style={{ display: "flex", flexDirection: "column", height: "100vh" }}>
      <GridLayout tracks={tracks}>
        <ParticipantTile />
      </GridLayout>
      <ControlBar />
    </div>
  );
}

Generate Token (Server-Side)

// app/api/livekit/token/route.ts
import { AccessToken } from "livekit-server-sdk";

export async function GET(req: Request) {
  const { searchParams } = new URL(req.url);
  const room = searchParams.get("room") ?? "default-room";
  const username = searchParams.get("username") ?? "anonymous";

  if (!process.env.LIVEKIT_API_KEY || !process.env.LIVEKIT_API_SECRET) {
    return Response.json({ error: "Server misconfigured" }, { status: 500 });
  }

  const at = new AccessToken(
    process.env.LIVEKIT_API_KEY,
    process.env.LIVEKIT_API_SECRET,
    {
      identity: username,
      ttl: "10m",
    }
  );

  at.addGrant({
    roomJoin: true,
    room,
    canPublish: true,
    canSubscribe: true,
    canPublishData: true,
  });

  const token = await at.toJwt();
  return Response.json({ token, url: process.env.NEXT_PUBLIC_LIVEKIT_URL });
}
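During development it helps to check what a minted token actually contains. A LiveKit access token is a standard JWT, so its payload can be base64url-decoded without verifying the signature. This is a debugging aid only, never trust unverified claims server-side; the helper below is our own, not part of `livekit-server-sdk`:

```typescript
// Decode a JWT payload for inspection (no signature verification!).
function decodeJwtPayload(token: string): Record<string, any> {
  const payload = token.split(".")[1];
  if (!payload) throw new Error("not a JWT");
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}

// Example: inspect identity, expiry, and the LiveKit room grant
// const claims = decodeJwtPayload(token);
// console.log(claims.sub, claims.exp, claims.video);
```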

LiveKit Agents (Voice AI Pipeline)

# LiveKit Agents — Python SDK for AI voice pipelines
# pip install livekit-agents[openai,deepgram]
from livekit import agents
from livekit.agents import AgentSession, Agent, RoomInputOptions
from livekit.plugins import openai, deepgram, silero

async def entrypoint(ctx: agents.JobContext):
    await ctx.connect()

    session = AgentSession(
        stt=deepgram.STT(model="nova-2"),         # Speech-to-text
        llm=openai.LLM(model="gpt-4o-mini"),       # Language model
        tts=openai.TTS(voice="nova"),               # Text-to-speech
        vad=silero.VAD.load(),                      # Voice activity detection
    )

    await session.start(
        room=ctx.room,
        agent=Agent(instructions="You are a helpful customer service agent."),
        # Noise cancellation is available via the optional
        # livekit-plugins-noise-cancellation package (pass a plugin instance here)
        room_input_options=RoomInputOptions(),
    )

if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))

Agora: Enterprise-Grade Global Network

Agora provides real-time audio and video with 200+ global PoPs and the most extensive platform support (iOS, Android, Web, Windows, Unity, Flutter, React Native).

Installation

npm install agora-rtc-react
# Or plain SDK:
npm install agora-rtc-sdk-ng

React Hook Approach (agora-rtc-react)

import AgoraRTC, {
  AgoraRTCProvider,
  LocalVideoTrack,
  RemoteUser,
  useRTCClient,
  useLocalMicrophoneTrack,
  useLocalCameraTrack,
  usePublish,
  useRemoteUsers,
} from "agora-rtc-react";
import { useEffect } from "react";

const client = AgoraRTC.createClient({ mode: "rtc", codec: "vp8" });

export function VideoCall({
  appId,
  channel,
  token,
}: {
  appId: string;
  channel: string;
  token: string;
}) {
  return (
    <AgoraRTCProvider client={client}>
      <VideoCallContent appId={appId} channel={channel} token={token} />
    </AgoraRTCProvider>
  );
}

function VideoCallContent({ appId, channel, token }: { appId: string; channel: string; token: string }) {
  const client = useRTCClient();

  const { localMicrophoneTrack } = useLocalMicrophoneTrack(true);
  const { localCameraTrack } = useLocalCameraTrack(true);
  const remoteUsers = useRemoteUsers();

  usePublish([localMicrophoneTrack, localCameraTrack]);

  useEffect(() => {
    client.join(appId, channel, token, null);
    return () => { client.leave(); };
  }, []);

  return (
    <div style={{ display: "flex", flexWrap: "wrap" }}>
      {/* Local video */}
      <LocalVideoTrack track={localCameraTrack} play={true} style={{ width: 300, height: 200 }} />

      {/* Remote videos */}
      {remoteUsers.map((user) => (
        <RemoteUser key={user.uid} user={user} style={{ width: 300, height: 200 }} />
      ))}
    </div>
  );
}
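Agora's selling point is behavior on bad networks, and the SDK surfaces this: the client emits a `network-quality` event with uplink and downlink scores from 0 (unknown) through 1 (excellent) to 6 (network down). A sketch for turning those scores into UI labels; the label strings are our own choice:

```typescript
// Map Agora's 0-6 network quality score to a UI label.
// Score semantics follow agora-rtc-sdk-ng's network-quality event
// (0 = unknown, 1 = excellent ... 6 = network down); labels are ours.
const QUALITY_LABELS = ["unknown", "excellent", "good", "fair", "poor", "bad", "down"] as const;

function qualityLabel(score: number): string {
  return QUALITY_LABELS[score] ?? "unknown";
}

// Usage (sketch, inside a component with access to the client):
// client.on("network-quality", (stats) => {
//   setUplink(qualityLabel(stats.uplinkNetworkQuality));
//   setDownlink(qualityLabel(stats.downlinkNetworkQuality));
// });
```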

Advanced: Screen Share

import AgoraRTC, { IAgoraRTCClient, ICameraVideoTrack } from "agora-rtc-sdk-ng";

async function startScreenShare(client: IAgoraRTCClient, cameraTrack: ICameraVideoTrack) {
  // Create the screen share track. With "auto", system audio is captured when
  // available, in which case an audio track is returned alongside the video track
  const result = await AgoraRTC.createScreenVideoTrack(
    {
      encoderConfig: "1080p_1",
      optimizationMode: "detail",  // "motion" | "detail"
    },
    "auto"
  );
  const [screenTrack, screenAudioTrack] = Array.isArray(result) ? result : [result, undefined];

  // Swap the camera track for the screen track
  await client.unpublish(cameraTrack);
  await client.publish(screenAudioTrack ? [screenTrack, screenAudioTrack] : screenTrack);

  // When the user stops sharing (e.g. via the browser UI), restore the camera
  screenTrack.on("track-ended", async () => {
    await client.unpublish(screenAudioTrack ? [screenTrack, screenAudioTrack] : screenTrack);
    screenTrack.close();
    screenAudioTrack?.close();
    await client.publish(cameraTrack);
  });

  return screenTrack;
}

Server-Side Token Generation

// Agora token generation (server-side)
import { RtcTokenBuilder, RtcRole } from "agora-token";

export function generateAgoraToken(channelName: string, uid: number): string {
  const appId = process.env.AGORA_APP_ID!;
  const appCertificate = process.env.AGORA_APP_CERTIFICATE!;
  const role = RtcRole.PUBLISHER;

  // The agora-token package takes lifetimes in seconds
  // (durations from now, not Unix timestamps)
  const tokenExpirationInSeconds = 3600;      // 1 hour
  const privilegeExpirationInSeconds = 3600;

  return RtcTokenBuilder.buildTokenWithUid(
    appId,
    appCertificate,
    channelName,
    uid,
    role,
    tokenExpirationInSeconds,
    privilegeExpirationInSeconds
  );
}
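Tokens expire, so long-lived calls must renew them. The Agora client fires a `token-privilege-will-expire` event shortly before expiry, and `client.renewToken()` accepts the replacement. As a belt-and-braces measure you can also schedule a refresh yourself; the helper below is ours, and `fetchFreshToken` is a hypothetical call to your own token endpoint:

```typescript
// Compute when to proactively refresh a token: a safety margin before
// its lifetime ends, clamped at zero for very short lifetimes.
function refreshDelayMs(lifetimeSeconds: number, marginSeconds = 60): number {
  return Math.max(0, (lifetimeSeconds - marginSeconds) * 1000);
}

// Usage (sketch):
// client.on("token-privilege-will-expire", async () => {
//   await client.renewToken(await fetchFreshToken()); // fetchFreshToken: your endpoint
// });
// setTimeout(async () => client.renewToken(await fetchFreshToken()), refreshDelayMs(3600));
```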

100ms: Fastest Time-to-Working-Video-UI

100ms provides pre-built UI components and React hooks for quickly building video call apps, with automatic recording and virtual backgrounds.

Installation

npm install @100mslive/roomkit-react @100mslive/react-sdk

Pre-Built Conference UI (1 Line)

import { HMSPrebuilt } from "@100mslive/roomkit-react";

// Fully featured video call UI — no configuration needed
function ConferencePage({ roomCode }: { roomCode: string }) {
  return (
    <div style={{ height: "100vh" }}>
      <HMSPrebuilt
        roomCode={roomCode}
        options={{
          userName: "Guest",
          endpoints: {
            tokenByRoomCode: "https://auth.100ms.live/v2/token",
          },
        }}
      />
    </div>
  );
}

Custom Room with @100mslive/react-sdk

import {
  HMSRoomProvider,
  useHMSStore,
  useHMSActions,
  useVideo,
  selectLocalPeer,
  selectRemotePeers,
  selectVideoTrackByPeerID,
  selectIsLocalAudioEnabled,
  selectIsLocalVideoEnabled,
} from "@100mslive/react-sdk";

function App() {
  return (
    <HMSRoomProvider>
      <VideoRoom />
    </HMSRoomProvider>
  );
}

function VideoRoom() {
  const hmsActions = useHMSActions();
  const localPeer = useHMSStore(selectLocalPeer);
  const remotePeers = useHMSStore(selectRemotePeers);
  const isAudioOn = useHMSStore(selectIsLocalAudioEnabled);
  const isVideoOn = useHMSStore(selectIsLocalVideoEnabled);

  async function joinRoom() {
    const authToken = await fetchToken();  // From 100ms management API
    await hmsActions.join({
      userName: "Alice",
      authToken,
      settings: {
        isAudioMuted: false,
        isVideoMuted: false,
      },
    });
  }

  async function leaveRoom() {
    await hmsActions.leave();
  }

  return (
    <div>
      {localPeer ? (
        <>
          <VideoTile peerId={localPeer.id} isLocal />
          {remotePeers.map((peer) => (
            <VideoTile key={peer.id} peerId={peer.id} />
          ))}
          <div>
            <button onClick={() => hmsActions.setLocalAudioEnabled(!isAudioOn)}>
              {isAudioOn ? "Mute" : "Unmute"}
            </button>
            <button onClick={() => hmsActions.setLocalVideoEnabled(!isVideoOn)}>
              {isVideoOn ? "Stop Video" : "Start Video"}
            </button>
            <button onClick={leaveRoom}>Leave</button>
          </div>
        </>
      ) : (
        <button onClick={joinRoom}>Join Meeting</button>
      )}
    </div>
  );
}

function VideoTile({ peerId, isLocal = false }: { peerId: string; isLocal?: boolean }) {
  // useVideo expects a track ID, not a peer ID — look up the peer's video track first
  const videoTrack = useHMSStore(selectVideoTrackByPeerID(peerId));
  const { videoRef } = useVideo({ trackId: videoTrack?.id });

  return (
    <video
      ref={videoRef}
      autoPlay
      muted={isLocal}
      playsInline
      style={{ width: 300, height: 200, objectFit: "cover" }}
    />
  );
}

Start Recording

// Server-side: start cloud recording
const response = await fetch(
  `https://api.100ms.live/v2/recordings/room/${roomId}/start`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${managementToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      meeting_url: `https://your-app.100ms.live/meeting/${roomCode}`,
      resolution: { width: 1280, height: 720 },
      storage: {
        type: "s3",
        options: {
          region: "us-east-1",
          bucket: "my-recordings",
          access_key: process.env.AWS_ACCESS_KEY_ID!,
          secret_key: process.env.AWS_SECRET_ACCESS_KEY!,
        },
      },
    }),
  }
);
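Stopping follows the same pattern: per the 100ms REST API, recordings for a room are stopped with a POST to the matching `stop` route. A sketch, assuming the same `managementToken` and `roomId` as above:

```typescript
// Hypothetical helpers around the 100ms recordings REST API.
// recordingActionUrl builds the documented start/stop routes;
// stopRecording assumes a valid management token.
function recordingActionUrl(roomId: string, action: "start" | "stop"): string {
  return `https://api.100ms.live/v2/recordings/room/${roomId}/${action}`;
}

async function stopRecording(roomId: string, managementToken: string): Promise<boolean> {
  const res = await fetch(recordingActionUrl(roomId, "stop"), {
    method: "POST",
    headers: { Authorization: `Bearer ${managementToken}` },
  });
  return res.ok; // completed recording assets land in the configured S3 bucket
}
```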

Feature Comparison

| Feature            | LiveKit            | Agora              | 100ms              |
|--------------------|--------------------|--------------------|--------------------|
| Open-source        | ✅                 | ❌                 | ❌                 |
| Self-hostable      | ✅                 | ❌                 | ❌                 |
| Pre-built UI       | ✅ Basic           | ❌                 | ✅ Full            |
| AI Agents SDK      | ✅ Native          | ❌                 | ❌                 |
| Global PoPs        | 12+ (cloud)        | 200+               | 30+                |
| Max participants   | 500+               | 1,000,000+         | 10,000             |
| React Native SDK   | ✅                 | ✅                 | ✅                 |
| Recording          | ✅                 | ✅                 | ✅                 |
| Virtual background | ✅                 | ✅                 | ✅                 |
| Screen sharing     | ✅                 | ✅                 | ✅                 |
| Live streaming     | ✅                 | ✅                 | ✅                 |
| Free tier          | $0 (10k min/month) | $0 (10k min/month) | $0 (10k min/month) |
| GitHub stars       | 12k                | 200+               | 500+               |

When to Use Each

Choose LiveKit if:

  • Building AI voice agents (STT → LLM → TTS pipelines via LiveKit Agents)
  • Open-source is a requirement — self-host the SFU, own your infrastructure
  • React-first application with a modern TypeScript stack
  • Real-time media pipelines (transcription, recording, translation in-session)

Choose Agora if:

  • Global reach with lowest latency in Asia, LatAm, Middle East, and Africa
  • Enterprise scale — 1 million+ concurrent users in a single channel (broadcast)
  • Maximum platform support — Unity, Flutter, Windows, macOS, iOS, Android, Web
  • Regulatory requirements in specific regions (Agora has local data centers)

Choose 100ms if:

  • Fastest time to a working video UI — <HMSPrebuilt /> is one line
  • Automatic cloud recording without additional setup
  • Virtual backgrounds and noise cancellation out of the box
  • Clean developer experience over maximum configurability

Methodology

Data sourced from official LiveKit documentation (docs.livekit.io), Agora documentation (docs.agora.io), 100ms documentation (www.100ms.live/docs), GitHub star counts as of February 2026, pricing pages as of February 2026, and community discussions from the LiveKit Slack, Agora community forums, and r/webdev.


Related: Deepgram vs OpenAI Whisper vs AssemblyAI for the speech-to-text layer used in LiveKit Agents, or ElevenLabs vs OpenAI TTS vs Cartesia for the text-to-speech side of voice AI pipelines.
