Vercel AI SDK vs LangChain.js: Which AI SDK in 2026?
PkgPulse Team
TL;DR
The Vercel AI SDK (ai) has surpassed LangChain in download velocity and is the default choice for Next.js streaming AI. LangChain still wins for complex agent workflows, RAG pipelines, and teams that need its vast ecosystem of integrations. In 2026, these aren't competitors so much as tools for different jobs — ai for UI-facing streaming and generation, LangChain for multi-step pipelines and agentic workflows that run server-side.
Key Takeaways
- `ai` (Vercel AI SDK): 4.5M+ weekly downloads, streaming-first, `useChat`/`useCompletion` hooks, ~20KB
- `langchain`: ~3M weekly downloads, rich ecosystem, agents/chains/memory, ~2MB
- Streaming: `ai` is purpose-built for it; LangChain added streaming later (less ergonomic)
- Tool use / function calling: both support it, `ai` via `tool()` and LangChain via `DynamicTool`
- Agent loops: LangChain Agents/LangGraph are purpose-built; `ai`'s `maxSteps` works for simpler cases
- Bundle size: `ai` is roughly 100x smaller, which matters for edge functions and client-side code
Download Trends
| Package | Weekly Downloads | 6-Month Trend |
|---|---|---|
| `ai` | ~4.5M | ↑ +85% |
| `langchain` | ~3.1M | → Stable |
| `@langchain/core` | ~2.8M | ↑ +20% |
| `@langchain/openai` | ~1.9M | ↑ +15% |
| `@langchain/community` | ~1.2M | → Stable |
Growth in `ai` comes primarily from Next.js developers adding streaming chat; LangChain's growth is concentrated in its scoped `@langchain/*` packages and driven by enterprise ML/AI teams.
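The trend arrows above are just percent change across a six-month window. As an illustrative sketch (not necessarily PkgPulse's exact methodology; the 5% threshold for "stable" is an assumption), the calculation looks like:

```typescript
// Percent change between two weekly-download observations taken ~6 months apart.
// The npm registry exposes raw counts at
// https://api.npmjs.org/downloads/point/<period>/<package> if you want live data.
function sixMonthTrend(downloadsThen: number, downloadsNow: number): string {
  const pct = ((downloadsNow - downloadsThen) / downloadsThen) * 100;
  if (Math.abs(pct) < 5) return '→ Stable';
  const arrow = pct > 0 ? '↑' : '↓';
  return `${arrow} ${pct > 0 ? '+' : ''}${Math.round(pct)}%`;
}

console.log(sixMonthTrend(2_430_000, 4_500_000)); // → '↑ +85%'
```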
Vercel AI SDK (ai): Streaming-First
Core: Text Generation and Streaming
```typescript
// npm install ai @ai-sdk/openai
import { streamText, generateText, generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Generate text (non-streaming):
const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Explain React Server Components in one paragraph.',
});

// Stream text (progressive rendering):
const result = await streamText({
  model: openai('gpt-4o-mini'),
  system: 'You are a helpful coding assistant.',
  messages: [{ role: 'user', content: 'Write a Zod schema for a user profile.' }],
});

// In a Next.js route handler, return the stream directly:
return result.toDataStreamResponse();
```
Structured Output (generateObject)
```typescript
// Type-safe structured generation with Zod:
const { object } = await generateObject({
  model: openai('gpt-4o-mini'),
  schema: z.object({
    packages: z.array(z.object({
      name: z.string(),
      purpose: z.string(),
      weeklyDownloads: z.number(),
      recommendation: z.enum(['use', 'avoid', 'consider']),
    })),
    summary: z.string(),
  }),
  prompt: 'Recommend 3 npm packages for date handling in 2026.',
});

// object is fully typed, no JSON.parse, no casting:
console.log(object.packages[0].recommendation); // 'use' | 'avoid' | 'consider'
```
Tool Use (Function Calling)
```typescript
import { streamText, tool } from 'ai';

const result = await streamText({
  model: openai('gpt-4o'),
  tools: {
    getPackageInfo: tool({
      description: 'Get npm package download stats and health score',
      parameters: z.object({
        packageName: z.string().describe('The npm package name'),
      }),
      execute: async ({ packageName }) => {
        // Actually call your database/API:
        const stats = await fetchPackageStats(packageName);
        return {
          weeklyDownloads: stats.downloads,
          healthScore: stats.health,
          latestVersion: stats.version,
        };
      },
    }),
    comparePackages: tool({
      description: 'Compare two npm packages',
      parameters: z.object({
        packageA: z.string(),
        packageB: z.string(),
      }),
      execute: async ({ packageA, packageB }) => {
        const [a, b] = await Promise.all([
          fetchPackageStats(packageA),
          fetchPackageStats(packageB),
        ]);
        return { packageA: a, packageB: b };
      },
    }),
  },
  maxSteps: 5, // Allow up to 5 tool calls in an agentic loop
  prompt: 'Compare React vs Vue download trends and health scores.',
});
```
React Hooks
```tsx
// app/components/ai-chat.tsx
'use client';
import { useChat } from 'ai/react';

export function AiChat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading, stop } = useChat({
    api: '/api/ai/chat',
  });

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id} className={m.role === 'user' ? 'text-blue-600' : 'text-gray-900'}>
          {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit" disabled={isLoading}>Send</button>
        {isLoading && <button type="button" onClick={stop}>Stop</button>}
      </form>
    </div>
  );
}
```
LangChain.js: Ecosystem and Agent Workflows
Chains and Pipelines
```typescript
// npm install langchain @langchain/openai @langchain/core
import { ChatOpenAI } from '@langchain/openai';
import { PromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { RunnableSequence } from '@langchain/core/runnables';

const model = new ChatOpenAI({ model: 'gpt-4o-mini', temperature: 0 });

// Build a chain:
const chain = RunnableSequence.from([
  PromptTemplate.fromTemplate(
    'Summarize this npm package in 2 sentences: {packageName}\n\nDescription: {description}'
  ),
  model,
  new StringOutputParser(),
]);

const summary = await chain.invoke({
  packageName: 'react',
  description: 'A JavaScript library for building user interfaces',
});
```
RAG Pipeline (Retrieval-Augmented Generation)
```typescript
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { PromptTemplate } from '@langchain/core/prompts';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
import { createRetrievalChain } from 'langchain/chains/retrieval';
import { createStuffDocumentsChain } from 'langchain/chains/combine_documents';
import { Document } from '@langchain/core/documents';

// Build a RAG pipeline:
const docs = [
  new Document({ pageContent: 'React is a UI library with 25M weekly downloads...' }),
  new Document({ pageContent: 'Vue is a progressive framework with 5M weekly downloads...' }),
];

const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 200, chunkOverlap: 20 });
const chunks = await splitter.splitDocuments(docs);

const vectorStore = await MemoryVectorStore.fromDocuments(chunks, new OpenAIEmbeddings());

const documentChain = await createStuffDocumentsChain({
  llm: new ChatOpenAI({ model: 'gpt-4o-mini' }),
  prompt: PromptTemplate.fromTemplate(
    `Answer based on context:\n{context}\n\nQuestion: {input}`
  ),
});

const retriever = vectorStore.asRetriever({ k: 3 });
const chain = await createRetrievalChain({ retriever, combineDocsChain: documentChain });

const result = await chain.invoke({ input: 'Which framework should a beginner learn?' });
```
LangGraph: Complex Agent Workflows
```typescript
// @langchain/langgraph — for stateful multi-step agents
import { StateGraph, END } from '@langchain/langgraph';
import { ToolNode } from '@langchain/langgraph/prebuilt';
import { ChatOpenAI } from '@langchain/openai';
import { AIMessage, HumanMessage } from '@langchain/core/messages';

// Assumes `tools` is an array of LangChain tool instances defined elsewhere.

// Define state:
const graphState = {
  messages: { reducer: (x: unknown[], y: unknown[]) => x.concat(y) },
};

const model = new ChatOpenAI({ model: 'gpt-4o' }).bindTools(tools);
const toolNode = new ToolNode(tools);

function shouldContinue({ messages }: { messages: AIMessage[] }) {
  const last = messages[messages.length - 1];
  return last.tool_calls?.length ? 'tools' : END;
}

const graph = new StateGraph({ channels: graphState })
  .addNode('agent', async (state) => ({
    messages: [await model.invoke(state.messages)],
  }))
  .addNode('tools', toolNode)
  .addEdge('__start__', 'agent')
  .addConditionalEdges('agent', shouldContinue)
  .addEdge('tools', 'agent');

const app = graph.compile();
const result = await app.invoke({
  messages: [new HumanMessage('Find the top 3 React alternatives')],
});
```
Side-by-Side Comparison
| | `ai` (Vercel AI SDK) | LangChain |
|---|---|---|
| Bundle size | ~20KB | ~2MB+ |
| Streaming | First-class, easy | Supported, less ergonomic |
| React hooks | Built-in (`useChat`, `useCompletion`) | Third-party or manual |
| Structured output | `generateObject` + Zod | Output parsers + schemas |
| RAG pipeline | Manual (use any vector DB) | Built-in chains |
| Agent workflows | `maxSteps` for simple cases | LangGraph for complex |
| Provider support | 25+ via `@ai-sdk/*` | 50+ integrations |
| TypeScript | Excellent | Good |
| Next.js integration | Purpose-built | Works but not optimized |
| Learning curve | Low | Medium-High |
When to Choose Each
Choose `ai` (Vercel AI SDK) if:
→ Building streaming chat UI in Next.js
→ Need structured JSON output with type safety
→ Building on edge/serverless (bundle size matters)
→ Simple to medium complexity AI features
→ Want first-class React hooks
Choose LangChain if:
→ Building RAG pipelines
→ Complex multi-step agent workflows
→ Need LangGraph for stateful agents
→ Require many specialized integrations (document loaders, vector stores)
→ Team has existing LangChain expertise
Compare `ai` vs `langchain` download trends and health scores on PkgPulse.
See the live comparison
View ai vs. langchain on PkgPulse →