ai vs langchain: Which AI SDK Should You Use in 2026
The npm package named `ai` has 2.8 million weekly downloads. The `langchain` package has 1.3 million. These two numbers tell an incomplete story, because the packages occupy fundamentally different positions on the complexity spectrum of AI development. One is a polished library for common patterns; the other is a framework for the problems that libraries can't solve.
TL;DR
The ai package (Vercel AI SDK) is the right default for most JavaScript AI applications in 2026 — especially anything with a React or Next.js frontend. The langchain package (and its ecosystem of @langchain/core, @langchain/openai, etc.) is the right choice when you're building complex RAG systems, stateful agents, or need LangChain's 200+ integrations. For many teams, the answer is both.
Key Takeaways
- `ai` package: 2.8M weekly downloads, 38K GitHub stars, Vercel-backed, React-first
- `langchain` package: 1.3M weekly downloads, but `@langchain/core` has 28M monthly users across the ecosystem
- `ai` provides `generateText`, `streamText`, `generateObject`, `useChat`, `useCompletion` — covers 80% of use cases
- LangChain provides chains, agents, RAG primitives, memory, 200+ integrations — covers the remaining 20%
- AI SDK bundle: ~15-60 kB gzipped; LangChain bundle: ~380 kB (full), ~101 kB (@langchain/core)
- AI SDK natively supports edge runtimes; LangChain does not
- The `@ai-sdk/langchain` bridge package makes them interoperable
Package Names and Ecosystem
First, let's clarify the package landscape since it's confusing:
Vercel AI SDK:
- `ai` — Core SDK with React hooks and provider-agnostic API
- `@ai-sdk/openai` — OpenAI provider
- `@ai-sdk/anthropic` — Anthropic provider
- `@ai-sdk/google` — Google Generative AI provider
- `@ai-sdk/langchain` — LangChain bridge
LangChain JavaScript:
- `langchain` — High-level chains, document loaders, text splitters
- `@langchain/core` — Base types, interfaces, LCEL
- `@langchain/openai` — OpenAI integration
- `@langchain/community` — Community integrations (vector stores, loaders, etc.)
- `@langchain/langgraph` — Agent orchestration framework
The Core Difference: Level of Abstraction
This is the key insight that determines which package to use:
The `ai` package sits at the API abstraction level: it standardizes how you call LLM APIs, handles streaming, and provides React primitives. It doesn't care about document processing, memory management, or agent orchestration.
The `langchain` package sits at the application pattern level: it provides pre-built implementations of common AI application patterns — RAG pipelines, conversational memory, tool-using agents, document processing chains. It assumes you'll need these things and builds opinionated scaffolding.
Side-by-Side: Common Tasks
Task 1: Simple chat with streaming
ai package:
```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await streamText({
  model: openai('gpt-4o'),
  messages: [{ role: 'user', content: 'Hello!' }],
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```
langchain:
```typescript
import { ChatOpenAI } from '@langchain/openai';
import { HumanMessage } from '@langchain/core/messages';

const model = new ChatOpenAI({ streaming: true });
const stream = await model.stream([new HumanMessage('Hello!')]);

for await (const chunk of stream) {
  process.stdout.write(chunk.content as string);
}
```
Winner for this task: ai — cleaner API, less boilerplate.
Task 2: Structured output extraction
ai package:
```typescript
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openai('gpt-4o'),
  schema: z.object({
    name: z.string(),
    age: z.number(),
    email: z.string().email(),
  }),
  prompt: 'Extract contact info from: John Smith, 34, john@example.com',
});
// Fully typed: { name: string, age: number, email: string }
```
langchain:
```typescript
import { ChatOpenAI } from '@langchain/openai';
import { z } from 'zod';

const model = new ChatOpenAI({ model: 'gpt-4o' });
const structured = model.withStructuredOutput(
  z.object({
    name: z.string(),
    age: z.number(),
    email: z.string().email(),
  })
);
const result = await structured.invoke('Extract contact info from: John Smith, 34, john@example.com');
```
Winner for this task: Roughly equal — both have excellent Zod-based structured output.
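Under the hood, both APIs do the same two things: pass the schema to the model as an output constraint, then validate the raw JSON the model returns before handing you a typed value. Stripped of both libraries, that validation step is roughly the following sketch — `parseContact` is a hand-rolled stand-in for what Zod does, not an API from either package:

```typescript
// Hand-rolled sketch of the runtime validation that generateObject and
// withStructuredOutput delegate to Zod: parse the model's raw JSON output,
// then verify it matches the expected shape before treating it as typed.

interface Contact {
  name: string;
  age: number;
  email: string;
}

function parseContact(raw: string): Contact {
  const data = JSON.parse(raw);
  if (typeof data.name !== 'string') throw new Error('name must be a string');
  if (typeof data.age !== 'number') throw new Error('age must be a number');
  if (typeof data.email !== 'string' || !data.email.includes('@')) {
    throw new Error('email must be a valid address');
  }
  return data as Contact;
}
```

The value of the schema is exactly this runtime check: a model that returns `"age": "34"` (a string) fails loudly at the boundary instead of corrupting downstream state.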
Task 3: RAG (Retrieval-Augmented Generation)
ai package:
```typescript
// You'd need to implement retrieval yourself, or use a vector store client directly.
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { Pinecone } from '@pinecone-database/pinecone';

// Assumes `queryEmbedding` and `question` were computed earlier.
const pinecone = new Pinecone();
const index = pinecone.index('my-docs');
const results = await index.query({ vector: queryEmbedding, topK: 5 });
const context = results.matches.map(m => m.metadata.text).join('\n');

const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: `Answer based on context:\n${context}\n\nQuestion: ${question}`,
});
```
langchain:
```typescript
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { PineconeStore } from '@langchain/pinecone';
import { createRetrievalChain } from 'langchain/chains/retrieval';
import { createStuffDocumentsChain } from 'langchain/chains/combine_documents';
import { ChatPromptTemplate } from '@langchain/core/prompts';

// Assumes `pineconeIndex` and `question` are defined elsewhere.
const vectorStore = await PineconeStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { pineconeIndex }
);

const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'Answer based on context: {context}'],
  ['human', '{input}'],
]);

const chain = await createRetrievalChain({
  retriever: vectorStore.asRetriever(),
  combineDocsChain: await createStuffDocumentsChain({ llm: new ChatOpenAI(), prompt }),
});

const result = await chain.invoke({ input: question });
```
Winner for this task: langchain — dramatically less setup code, handles chunking, embedding, and retrieval as first-class concerns.
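To make concrete what "handles chunking, embedding, and retrieval" saves you, here is a self-contained sketch of the machinery a retriever runs internally — fixed-size chunking with overlap, embedding, and cosine-similarity top-k. The `embed` function is a toy stand-in for a real embedding API call, so the example runs without any provider:

```typescript
// Hand-rolled sketch of what a retriever does internally. embed() is a
// placeholder for a real embedding API; here it maps text to a toy
// character-frequency vector so the example is self-contained.

function embed(text: string): number[] {
  const v = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i] += 1;
  }
  return v;
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Split a document into overlapping fixed-size chunks, embed each,
// and return the k chunks most similar to the query.
function retrieve(query: string, doc: string, k: number, size = 200, overlap = 40): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < doc.length; i += size - overlap) {
    chunks.push(doc.slice(i, i + size));
  }
  const q = embed(query);
  return chunks
    .map(c => ({ c, score: cosine(embed(c), q) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(x => x.c);
}
```

With the `ai` package this logic (plus persistence, batching, and real embeddings) is yours to own; LangChain's text splitters, embedding wrappers, and vector store adapters exist to make each of these steps a configurable component instead.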
Task 4: Multi-step agent with tools
ai package:
```typescript
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// `searchWeb` and `evaluate` are your own tool implementations.
const result = await generateText({
  model: openai('gpt-4o'),
  maxSteps: 10,
  tools: {
    search: tool({
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => searchWeb(query),
    }),
    calculate: tool({
      parameters: z.object({ expression: z.string() }),
      execute: async ({ expression }) => evaluate(expression),
    }),
  },
  prompt: 'What is the market cap of Apple divided by Microsoft?',
});
```
langchain (with LangGraph):
```typescript
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import { ChatOpenAI } from '@langchain/openai';
import { DynamicTool } from '@langchain/core/tools';

// `searchWeb` and `evaluate` are your own tool implementations.
const agent = createReactAgent({
  llm: new ChatOpenAI({ model: 'gpt-4o' }),
  tools: [
    new DynamicTool({ name: 'search', description: 'Search the web', func: searchWeb }),
    new DynamicTool({ name: 'calculate', description: 'Evaluate a math expression', func: evaluate }),
  ],
});

const result = await agent.invoke({
  messages: [{ role: 'user', content: 'What is the market cap of Apple divided by Microsoft?' }],
});
```
Winner for this task: ai for simple agents; langchain/LangGraph for stateful or multi-agent scenarios.
Feature Matrix
| Feature | ai package | langchain |
|---|---|---|
| Provider-agnostic API | Yes (25+ providers) | Yes (200+ integrations) |
| React hooks | Yes (useChat, useCompletion) | No (manual) |
| Edge runtime | Yes | No |
| Streaming | Yes (first-class) | Yes (via callbacks) |
| Structured output | Yes (generateObject) | Yes (withStructuredOutput) |
| RAG primitives | No | Yes (comprehensive) |
| Document loaders | No | Yes (50+) |
| Text splitters | No | Yes |
| Vector store integration | No | Yes (20+ stores) |
| Conversational memory | Basic | Comprehensive |
| Agent orchestration | Basic (maxSteps) | Full (LangGraph) |
| Observability | No | LangSmith |
| Bundle size | Small | Large |
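The "Basic" conversational memory row deserves a gloss: with the `ai` package, memory is simply the `messages` array you maintain and resend on every call. A minimal sliding-window version looks like this — `ChatMemory` is a hypothetical helper for illustration, not an API from either package:

```typescript
// Minimal sliding-window conversation memory, the "Basic" approach:
// keep the message history yourself and resend it with each model call.
// ChatMemory is a hypothetical helper, not a library API.

interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

class ChatMemory {
  private messages: Message[] = [];

  constructor(private maxMessages = 20, private system?: string) {}

  add(role: Message['role'], content: string): void {
    this.messages.push({ role, content });
    // Drop the oldest turns once the window is exceeded.
    if (this.messages.length > this.maxMessages) {
      this.messages = this.messages.slice(-this.maxMessages);
    }
  }

  // History to pass as the `messages` argument of the next call.
  history(): Message[] {
    return this.system
      ? [{ role: 'system', content: this.system }, ...this.messages]
      : [...this.messages];
  }
}
```

This is fine for stateless chat UIs; LangChain's "Comprehensive" rating refers to what this sketch lacks — persistent backends, summarization of evicted turns, and per-thread state via LangGraph checkpointers.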
When the Answer Is Both
Many production systems use both packages in different layers:
```
Frontend (React/Next.js)
└── ai package (useChat, useCompletion, streaming)

Backend API (Node.js)
└── LangChain (RAG pipeline, document processing, agent orchestration)
    └── @ai-sdk/langchain (bridge for streaming results to frontend)
```
The @ai-sdk/langchain package makes this integration seamless:
```typescript
import { toAIStream } from '@ai-sdk/langchain';
import { NextResponse } from 'next/server';

// In your Next.js API route. Assumes `ragChain` is a LangChain chain
// built elsewhere (e.g., the retrieval chain above).
export async function POST(req: Request) {
  const { messages } = await req.json();

  // Use LangChain for the complex processing
  const langchainStream = await ragChain.stream({ input: messages.at(-1).content });

  // Stream in an AI SDK-compatible format for the useChat hook
  return new NextResponse(toAIStream(langchainStream));
}
```
Decision Framework
Start with ai (Vercel AI SDK) if:
- Building a React or Next.js application
- Your AI features are chat, completion, or structured extraction
- You want to switch providers without rewriting code
- Edge runtime is a requirement
- Team is new to AI development and wants low complexity
- Bundle size matters
Start with langchain if:
- Building a RAG pipeline with document ingestion
- You need conversational memory with built-in persistence
- Multi-step agent orchestration is required (use LangGraph)
- You need LangSmith for observability and debugging
- Migrating from Python LangChain to JavaScript
- You need integrations with 20+ vector databases or 50+ document loaders
Use both if:
- Your app has a real-time chat frontend AND a complex backend processing pipeline
- You want AI SDK's React hooks with LangChain's RAG capabilities
- You need to iterate on frontend quickly while backend complexity grows
Performance in Production
Both packages are used in large-scale production applications. The ai package has an edge in:
- First-response latency (30ms vs 50ms p99 under load)
- Cold start time (no heavy initialization)
- Memory footprint
LangChain has advantages in:
- Throughput for complex pipelines (built-in batching, parallelism)
- Cache efficiency (document and result caching)
- Retry and fallback handling
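None of these behaviors is exclusive to either package — they differ in who writes the code. As a point of reference, the retry-and-fallback behavior LangChain bundles into its runnables (via methods like `withRetry` and `withFallbacks`) is roughly this pattern; `withRetryAndFallback` below is an illustrative name, not an API from either library:

```typescript
// Illustrative sketch of retry-with-fallback, the resilience pattern
// LangChain ships built in. Names here are hypothetical.

type Call<T> = () => Promise<T>;

async function withRetryAndFallback<T>(
  primary: Call<T>,
  fallback: Call<T>,
  retries = 2,
): Promise<T> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await primary();
    } catch {
      // Exponential backoff between attempts (skipped after the last one).
      if (attempt < retries) {
        await new Promise(r => setTimeout(r, 2 ** attempt * 100));
      }
    }
  }
  // All retries exhausted: fall back to the secondary model/provider.
  return fallback();
}
```

With the `ai` package you would wrap a `generateText` call against your primary provider in `primary` and a second provider in `fallback`; with LangChain the same policy is declared on the chain itself rather than hand-written.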
The Verdict
In 2026, the ai package should be your default starting point for JavaScript AI development. It's simpler, faster, has better TypeScript support, and handles the vast majority of AI feature requirements elegantly. Add langchain packages when you hit the ceiling — when you need RAG, complex agents, or observability that ai doesn't provide.
The ecosystem has matured to the point where these two packages are more complementary than competitive, and the bridge package makes integrating them practical.
Compare on PkgPulse
See live download trends and bundle size comparisons for ai vs langchain on PkgPulse.
View ai vs. langchain on PkgPulse →