LangChain.js vs Vercel AI SDK: Building AI Apps in JavaScript 2026
Vercel AI SDK reduced the code required to build a streaming chat UI in Next.js from over 100 lines to roughly 20. That reduction explains why it now pulls more than double the weekly npm downloads of LangChain.js — yet LangChain still powers most production RAG pipelines and complex agent workflows. These two frameworks solve different problems, and choosing the wrong one can cost you weeks of refactoring.
TL;DR
Vercel AI SDK is the fastest path to a production-grade streaming AI UI in React/Next.js with 25+ provider integrations and native edge runtime support. LangChain.js is the right choice when you need complex agent orchestration, document processing pipelines, or a deep ecosystem of integrations. For simple chat or completions, reach for the AI SDK first.
Key Takeaways
- Vercel AI SDK: ~2.8M weekly npm downloads; LangChain.js: ~1.3M weekly downloads (2026)
- AI SDK bundle: ~34-60 kB gzipped per provider; LangChain core: ~101 kB gzipped and not edge-runtime compatible
- AI SDK supports 25+ LLM providers natively including OpenAI, Anthropic, Google, AWS Bedrock, xAI Grok
- LangChain.js has 200+ integrations, LangGraph for stateful agent orchestration, and mature RAG tooling
- AI SDK 4.x introduced unified generateText, streamText, and generateObject APIs with full TypeScript inference
- p99 latency under load: Vercel AI SDK ~30ms, LangChain ~50ms for equivalent streaming tasks
- LangChain.js ecosystem includes LangSmith for observability and LangGraph for multi-agent workflows
The State of JavaScript AI Frameworks in 2026
The JavaScript AI framework landscape has consolidated around two clear leaders. A year ago developers had to choose between a dozen competing SDKs; today the decision for most teams is Vercel AI SDK vs LangChain.js, with everything else occupying a distant third place.
What caused this consolidation? Two factors: the complexity of properly handling streaming LLM responses in React, and the explosion of agentic use cases requiring structured orchestration. Both problems required framework-level solutions rather than thin wrappers.
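To appreciate what these frameworks abstract away, consider the parsing a hand-rolled streaming client has to do. The sketch below is illustrative (the `parseSSEChunk` helper and its event shape are not any framework's API); real clients additionally handle chunks split across network reads, aborts, and provider-specific payload shapes:

```typescript
// Minimal sketch of parsing Server-Sent Events from an OpenAI-style
// streaming endpoint. Each frame is a "data: ..." line; "[DONE]" ends the stream.
interface SSEEvent {
  done: boolean;
  text: string;
}

function parseSSEChunk(chunk: string): SSEEvent[] {
  const events: SSEEvent[] = [];
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data: ')) continue; // skip blanks and comments
    const payload = line.slice('data: '.length).trim();
    if (payload === '[DONE]') {
      events.push({ done: true, text: '' }); // end-of-stream marker
      continue;
    }
    const json = JSON.parse(payload); // throws on malformed frames
    events.push({ done: false, text: json.choices?.[0]?.delta?.content ?? '' });
  }
  return events;
}
```

Multiply this by loading states, abort handling, and React re-renders, and the 100-line figure from the introduction becomes plausible.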
Vercel AI SDK: Deep Dive
What It Is
The Vercel AI SDK (package: ai, plus provider packages like @ai-sdk/openai) is a TypeScript library designed to make streaming AI responses in React and Next.js applications trivially easy. Version 4.x (late 2025) introduced a fully unified provider API that makes switching between LLM providers a one-line change.
Installation
npm install ai @ai-sdk/openai
# or for Anthropic
npm install ai @ai-sdk/anthropic
Core API
import { generateText, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
// Simple text generation
const { text } = await generateText({
model: openai('gpt-4o'),
prompt: 'What is the meaning of life?',
});
// Streaming in React (useChat hook; in AI SDK 4.x the React hooks live in @ai-sdk/react)
import { useChat } from '@ai-sdk/react';
function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
return (
<form onSubmit={handleSubmit}>
{messages.map(m => <div key={m.id}>{m.content}</div>)}
<input value={input} onChange={handleInputChange} />
<button type="submit">Send</button>
</form>
);
}
That's the complete client side of a streaming chat UI; pair it with a route handler (e.g. /api/chat) that calls streamText and returns the stream. No manual SSE handling, no useState for loading flags, no custom fetch logic.
Provider Support (2026)
| Provider | Package |
|---|---|
| OpenAI | @ai-sdk/openai |
| Anthropic | @ai-sdk/anthropic |
| Google Generative AI | @ai-sdk/google |
| AWS Bedrock | @ai-sdk/amazon-bedrock |
| Azure OpenAI | @ai-sdk/azure |
| xAI Grok | @ai-sdk/xai |
| Mistral | @ai-sdk/mistral |
| Cohere | @ai-sdk/cohere |
| ElevenLabs (audio) | @ai-sdk/elevenlabs |
| Deepgram (transcription) | @ai-sdk/deepgram |
Structured Output
AI SDK's generateObject is one of its killer features — it uses Zod schemas to get type-safe structured responses:
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
const { object } = await generateObject({
model: openai('gpt-4o'),
schema: z.object({
name: z.string(),
sentiment: z.enum(['positive', 'neutral', 'negative']),
score: z.number().min(0).max(10),
}),
prompt: 'Analyze this review: "Great product, fast shipping!"',
});
// object is fully typed: { name: string, sentiment: 'positive'|..., score: number }
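What generateObject automates amounts to constraining the model's output and validating the parsed JSON against the schema (plus retries and JSON-mode prompting). A dependency-free sketch of just the validation step, with a hand-written type guard standing in for the Zod schema:

```typescript
// Hand-rolled equivalent of the Zod schema above: parse, then narrow.
interface ReviewAnalysis {
  name: string;
  sentiment: 'positive' | 'neutral' | 'negative';
  score: number;
}

function parseReviewAnalysis(raw: string): ReviewAnalysis {
  const value = JSON.parse(raw);
  const sentiments = ['positive', 'neutral', 'negative'];
  if (
    typeof value?.name !== 'string' ||
    !sentiments.includes(value?.sentiment) ||
    typeof value?.score !== 'number' ||
    value.score < 0 ||
    value.score > 10
  ) {
    // generateObject would retry or re-prompt here instead of just throwing
    throw new Error('Model output did not match the expected schema');
  }
  return value as ReviewAnalysis;
}
```

The value of the SDK version is that the schema is written once and drives prompting, validation, and the inferred TypeScript type together.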
Edge Runtime
AI SDK was built edge-first. Every provider package works in Vercel Edge Functions, Cloudflare Workers, and Deno Deploy out of the box. No Node.js-specific APIs are used.
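The practical meaning of "no Node.js-specific APIs" is that everything is built on Web-standard primitives like ReadableStream, TextEncoder, and Response, which all edge runtimes implement. A dependency-free sketch (the helper names are illustrative, not SDK APIs):

```typescript
// Produce a Web-standard ReadableStream of encoded text chunks —
// the same primitive an edge handler would return in a Response body.
function textStream(chunks: string[]): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    start(controller) {
      for (const c of chunks) controller.enqueue(encoder.encode(c));
      controller.close();
    },
  });
}

// Consume the stream the way a browser client would.
async function readAll(stream: ReadableStream<Uint8Array>): Promise<string> {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  let out = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    out += decoder.decode(value, { stream: true });
  }
  return out;
}
```

Because nothing here touches fs, net, or other Node built-ins, the same code runs unchanged on Vercel Edge, Cloudflare Workers, and Deno Deploy.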
LangChain.js: Deep Dive
What It Is
LangChain.js is a comprehensive framework for building LLM-powered applications with a focus on chaining operations, managing memory, processing documents, and orchestrating agents. It mirrors the Python LangChain API closely, which means Python LangChain knowledge transfers directly.
Installation
npm install langchain @langchain/core @langchain/openai
Core Concepts
LangChain.js is built around composable primitives:
- LLMs and Chat Models: Wrappers around provider APIs
- Prompts: Template management and formatting
- Chains: Sequences of operations (LCEL — LangChain Expression Language)
- Retrievers: Document retrieval for RAG
- Agents: LLMs that can use tools and take multi-step actions
- Memory: Conversation and context persistence
import { ChatOpenAI } from '@langchain/openai';
import { HumanMessage } from '@langchain/core/messages';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { ChatPromptTemplate } from '@langchain/core/prompts';
const model = new ChatOpenAI({ model: 'gpt-4o' });
const prompt = ChatPromptTemplate.fromMessages([
['system', 'You are a helpful assistant that speaks like a pirate.'],
['human', '{input}'],
]);
const chain = prompt.pipe(model).pipe(new StringOutputParser());
const result = await chain.invoke({ input: 'Tell me about TypeScript' });
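The .pipe composition is easy to reason about: each stage transforms its input and hands the result to the next. A dependency-free toy of the idea (synchronous for brevity, with made-up stage names; LangChain's actual runnables are async and add streaming, batching, and tracing):

```typescript
// Toy LCEL-style piping: stages composed left to right.
interface Runnable<In, Out> {
  invoke(input: In): Out;
}

function pipe<A, B, C>(first: Runnable<A, B>, second: Runnable<B, C>): Runnable<A, C> {
  return { invoke: (input) => second.invoke(first.invoke(input)) };
}

// Stand-ins for prompt template -> model -> output parser
const promptStage: Runnable<{ input: string }, string> = {
  invoke: ({ input }) => `System: speak like a pirate.\nHuman: ${input}`,
};
const fakeModel: Runnable<string, string> = {
  invoke: (s) => s + '\nAI: Arr!',
};
const parserStage: Runnable<string, string> = {
  invoke: (s) => s.split('AI: ')[1] ?? '',
};

const toyChain = pipe(pipe(promptStage, fakeModel), parserStage);
```

The payoff of the pattern is that any stage with a matching input/output type can be swapped in without touching the rest of the chain.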
RAG with LangChain.js
Where LangChain shines brightest:
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import { createRetrievalChain } from 'langchain/chains/retrieval';
import { createStuffDocumentsChain } from 'langchain/chains/combine_documents';
// Model and a prompt with a {context} slot for the retrieved documents
const model = new ChatOpenAI({ model: 'gpt-4o' });
const prompt = ChatPromptTemplate.fromTemplate(
  'Answer using only this context:\n\n{context}\n\nQuestion: {input}'
);
// Split documents
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await splitter.createDocuments([longText]);
// Create vector store
const vectorStore = await MemoryVectorStore.fromDocuments(
docs,
new OpenAIEmbeddings()
);
// Build retrieval chain
const retriever = vectorStore.asRetriever();
const questionAnswerChain = await createStuffDocumentsChain({ llm: model, prompt });
const ragChain = await createRetrievalChain({ retriever, combineDocsChain: questionAnswerChain });
const response = await ragChain.invoke({ input: 'What does the document say about X?' });
This would take 150+ lines to implement from scratch. LangChain reduces it to ~20.
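The splitting step alone shows why teams reach for the framework. Here is a naive fixed-window chunker with overlap as a dependency-free sketch; RecursiveCharacterTextSplitter is smarter, preferring paragraph, sentence, and word boundaries before falling back to hard cuts:

```typescript
// Naive fixed-window chunking with overlap. Overlap keeps context that
// straddles a chunk boundary retrievable from both neighboring chunks.
function chunkText(text: string, chunkSize: number, overlap: number): string[] {
  if (overlap >= chunkSize) throw new Error('overlap must be smaller than chunkSize');
  const chunks: string[] = [];
  const step = chunkSize - overlap; // how far the window advances each iteration
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // final window reached
  }
  return chunks;
}
```

Even this toy has edge cases (final-window handling, degenerate overlap values); boundary-aware splitting, separator hierarchies, and token-based length functions are where the framework version earns its keep.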
LangGraph: Stateful Agent Orchestration
LangGraph (part of the LangChain ecosystem) is where LangChain really separates from Vercel AI SDK for complex use cases:
import { StateGraph, END } from '@langchain/langgraph';
// Define a multi-step agent with branching logic, human-in-the-loop,
// parallel execution, and persistent state across sessions.
// AgentState, callModel, toolNode, and shouldContinue are defined elsewhere;
// shouldContinue routes to 'tools' or returns END to stop.
const workflow = new StateGraph(AgentState)
.addNode('agent', callModel)
.addNode('tools', toolNode)
.addConditionalEdges('agent', shouldContinue)
.addEdge('tools', 'agent')
.setEntryPoint('agent');
const app = workflow.compile(); // compile into a runnable graph
LangGraph handles cycles, parallel branches, checkpointing, and resumable workflows — none of which are available in AI SDK.
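The core loop LangGraph runs for you can be sketched in a few lines of dependency-free TypeScript (toy state and node names; the real library adds typed state channels, checkpointing, parallel branches, and streaming):

```typescript
// Toy agent loop: each node maps state -> state, a router picks the next
// node, and 'END' terminates. Note the cycle: agent -> tools -> agent.
type State = { steps: number; toolCalls: number };
type NodeFn = (s: State) => State;

const nodes: Record<string, NodeFn> = {
  agent: (s) => ({ ...s, steps: s.steps + 1 }),
  tools: (s) => ({ ...s, toolCalls: s.toolCalls + 1 }),
};

// Conditional edge: keep calling tools until two tool rounds have run.
function route(current: string, s: State): string {
  if (current === 'agent') return s.toolCalls < 2 ? 'tools' : 'END';
  return 'agent'; // tools always hands control back to the agent
}

function runGraph(entry: string, initial: State): State {
  let state = initial;
  let node = entry;
  while (node !== 'END') {
    state = nodes[node](state);
    node = route(node, state);
  }
  return state;
}
```

Cycles like agent -> tools -> agent are exactly what a linear chain abstraction cannot express, which is why this lives in LangGraph rather than LCEL.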
Head-to-Head Comparison
| Factor | Vercel AI SDK | LangChain.js |
|---|---|---|
| Weekly npm downloads | ~2.8M | ~1.3M |
| Bundle size (gzipped) | 34-60 kB per provider | ~101 kB core |
| Edge runtime support | Full | Not supported |
| TypeScript quality | Excellent | Good |
| React/Next.js integration | Native (useChat, useCompletion) | Manual |
| RAG tooling | Basic | Comprehensive |
| Agent framework | Tool calling only | LangGraph (full) |
| Observability | None built-in | LangSmith |
| Provider support | 25+ | 200+ |
| Learning curve | Low | High |
| p99 latency | ~30ms | ~50ms |
Performance
Both frameworks ultimately call the same LLM APIs, so latency differences come down to framework overhead. Under a load of 100 concurrent requests:
- Vercel AI SDK: ~30ms p99 — minimal overhead, edge-native
- LangChain.js: ~50ms p99 — chain abstraction and middleware add overhead
For edge functions, LangChain.js is not viable. For Node.js servers, the difference is noticeable but rarely the bottleneck.
When to Use Vercel AI SDK
Choose AI SDK if:
- You're building a React or Next.js application
- You need streaming chat/completions with minimal boilerplate
- Edge runtime is important (Vercel, Cloudflare Workers)
- You want to switch providers without rewriting code
- You need generateObject for structured LLM outputs
- Bundle size matters (e.g., client-side AI features)
- Your team is new to AI development
Example use cases: Chatbots, AI-powered forms, content generation tools, streaming responses, document Q&A with simple retrieval.
When to Use LangChain.js
Choose LangChain.js if:
- You need complex RAG pipelines with document processing
- You're building multi-step agents with tools and memory
- You need LangGraph for stateful, multi-agent workflows
- You want built-in observability via LangSmith
- You're porting a Python LangChain project to JavaScript
- You need 200+ integrations (vector stores, document loaders, etc.)
- Human-in-the-loop workflows are required
Example use cases: Enterprise RAG, code analysis agents, research assistants, customer support automation with escalation, multi-modal pipelines.
The New @ai-sdk/langchain Package
An important 2025-2026 development: Vercel released @ai-sdk/langchain, which bridges both ecosystems. You can now use AI SDK's provider abstraction within LangChain chains, or feed LangGraph output streams into AI SDK's React hooks. This means choosing between the two is less binary than it used to be.
import { toAIStream } from '@ai-sdk/langchain';
// Use LangGraph but stream to AI SDK React hooks
const stream = await chain.stream({ input });
return toAIStream(stream); // Compatible with useChat()
Bundle Size Reality Check
If bundle size matters (server-rendered apps with edge functions, or any client-side AI work):
| Package | Minified + gzipped |
|---|---|
| ai (core) | 15.2 kB |
| @ai-sdk/openai | 18.4 kB |
| @ai-sdk/anthropic | 16.1 kB |
| langchain | ~380 kB |
| @langchain/core | ~101 kB |
| @langchain/openai | ~45 kB |
LangChain is a framework — you're paying for the abstractions you get. AI SDK is a library — you pay only for what you use.
GitHub Activity (2026)
| Metric | Vercel AI SDK | LangChain.js |
|---|---|---|
| GitHub stars | ~38K | ~13K (JS repo) |
| Contributors | 250+ | 400+ |
| Release cadence | Monthly | Bi-weekly |
| Issues resolved | Fast | Moderate |
Real-World Recommendation
For 90% of production AI applications, Vercel AI SDK is the right starting point. It handles the hardest parts — streaming, edge compatibility, provider switching, structured output — with minimal configuration. You can always add LangChain.js integrations later via the bridge package.
For agent-heavy or RAG-heavy applications from the start, especially if you anticipate needing LangSmith observability or LangGraph's multi-step orchestration, start with LangChain.js.
The frameworks are increasingly complementary rather than competing. Many production systems use AI SDK for the UI layer and LangChain/LangGraph for the backend orchestration layer.
Try It on PkgPulse
See live npm download trends, bundle size comparisons, and release history for LangChain.js vs Vercel AI SDK on PkgPulse.
View langchainjs vs. vercel ai sdk on PkgPulse →