# Best AI/LLM Libraries for JavaScript in 2026
PkgPulse Team
## TL;DR
Vercel AI SDK for React/Next.js streaming; OpenAI SDK for direct API access; LangChain.js for complex agent pipelines. The Vercel AI SDK (~1.5M weekly downloads) provides React hooks and streaming primitives for chat UIs — `useChat` and `useCompletion` with minimal boilerplate. The OpenAI SDK (~4M weekly downloads) is the official, direct API client with excellent TypeScript types. LangChain.js (~800K weekly downloads) is the heavyweight for chains, agents, and RAG — powerful but complex.
## Key Takeaways
- OpenAI SDK: ~4M weekly downloads — official, direct API access, best TypeScript types
- Vercel AI SDK: ~1.5M downloads — React streaming hooks, multi-provider, Next.js-native
- LangChain.js: ~800K downloads — chains, agents, RAG, vector stores, 100+ integrations
- Vercel AI SDK v4 — `streamText`, `generateObject`, tool calling across providers
- AI SDK — provider-agnostic: OpenAI, Anthropic, Google, AWS Bedrock, Mistral
## Vercel AI SDK (React/Next.js)
```ts
// app/api/chat/route.ts
// AI SDK — streamText with tool calling
import { streamText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// fetchWeather and searchNpm are app-defined helpers
export async function POST(req: Request) {
  const { messages } = await req.json();

  // streamText returns synchronously in AI SDK v4; no await needed
  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      getWeather: tool({
        description: 'Get weather for a location',
        parameters: z.object({
          city: z.string().describe('City name'),
          unit: z.enum(['celsius', 'fahrenheit']).default('celsius'),
        }),
        execute: async ({ city, unit }) => fetchWeather(city, unit),
      }),
      searchPackages: tool({
        description: 'Search npm packages by keyword',
        parameters: z.object({ query: z.string() }),
        execute: async ({ query }) => searchNpm(query),
      }),
    },
    maxSteps: 5, // allow multi-step tool use
  });

  return result.toDataStreamResponse();
}
```
```tsx
// AI SDK — useChat React hook
'use client';

import { useChat } from 'ai/react';

export function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: '/api/chat',
    onError: (error) => console.error('Chat error:', error),
  });

  return (
    <div className="flex flex-col h-screen">
      <div className="flex-1 overflow-y-auto p-4">
        {messages.map((msg) => (
          <div key={msg.id} className={`mb-4 ${msg.role === 'user' ? 'text-right' : 'text-left'}`}>
            <div
              className={`inline-block p-3 rounded-lg ${
                msg.role === 'user' ? 'bg-blue-500 text-white' : 'bg-gray-200'
              }`}
            >
              {msg.content}
            </div>
          </div>
        ))}
        {isLoading && <div>Thinking...</div>}
      </div>
      <form onSubmit={handleSubmit} className="p-4 border-t">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask anything..."
          className="w-full p-2 border rounded"
          disabled={isLoading}
        />
      </form>
    </div>
  );
}
```
```ts
// AI SDK — generateObject (structured output)
import { generateObject } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

const schema = z.object({
  sentiment: z.enum(['positive', 'negative', 'neutral']),
  score: z.number().min(0).max(1),
  topics: z.array(z.string()),
  summary: z.string().max(200),
});

// userReview holds the raw review text to analyze
const { object } = await generateObject({
  model: anthropic('claude-3-5-sonnet-20241022'),
  schema,
  prompt: `Analyze this review: "${userReview}"`,
});

// object is fully typed as z.infer<typeof schema>
console.log(object.sentiment, object.score);
```
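`generateObject` validates the model's JSON against the Zod schema at runtime before returning, so a malformed response throws instead of leaking bad data into typed code. A dependency-free sketch of what that validation step checks for the schema above (the `isAnalysis` guard is illustrative; the SDK does this for you via Zod):

```typescript
// Hand-rolled runtime guard mirroring the Zod schema above (illustrative;
// generateObject performs the equivalent check internally).
type Analysis = {
  sentiment: 'positive' | 'negative' | 'neutral';
  score: number;
  topics: string[];
  summary: string;
};

function isAnalysis(value: unknown): value is Analysis {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    ['positive', 'negative', 'neutral'].includes(v.sentiment as string) &&
    typeof v.score === 'number' && v.score >= 0 && v.score <= 1 &&
    Array.isArray(v.topics) && v.topics.every((t) => typeof t === 'string') &&
    typeof v.summary === 'string' && v.summary.length <= 200
  );
}
```

The practical payoff: downstream code never needs defensive null checks on `object.sentiment` or `object.score`, because anything that reached it already passed the guard.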
## OpenAI SDK (Direct API)
```ts
// OpenAI SDK — streaming chat
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const stream = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Explain monads in 3 sentences' }],
  stream: true,
});

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(delta); // stream tokens to stdout as they arrive
}
```
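The `for await` loop works on any async iterable, so the accumulation logic can be exercised offline. A sketch with a simulated token stream (the chunk shape below is a simplification of the OpenAI SDK's response type, and `fakeStream`/`collect` are illustrative helpers):

```typescript
// Simulated token stream: same consumption pattern as the OpenAI SDK,
// with a simplified chunk shape.
type Chunk = { choices: { delta?: { content?: string } }[] };

// Yields one chunk per token, like the live API does
async function* fakeStream(tokens: string[]): AsyncGenerator<Chunk> {
  for (const t of tokens) {
    yield { choices: [{ delta: { content: t } }] };
  }
}

// Accumulate deltas into the full response text
async function collect(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}
```

Swapping `fakeStream` for the real `stream` object changes nothing in `collect`, which makes this pattern easy to unit-test.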
```ts
// OpenAI SDK — function/tool calling
// searchNpm is an app-defined helper
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What packages are similar to lodash?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'search_packages',
        description: 'Search npm packages',
        parameters: {
          type: 'object',
          properties: {
            query: { type: 'string', description: 'Search query' },
            limit: { type: 'number', default: 5 },
          },
          required: ['query'],
        },
      },
    },
  ],
  tool_choice: 'auto',
});

const toolCall = response.choices[0].message.tool_calls?.[0];
if (toolCall) {
  const args = JSON.parse(toolCall.function.arguments);
  const results = await searchNpm(args.query, args.limit);

  // Continue the conversation with the tool result
  const finalResponse = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'user', content: 'What packages are similar to lodash?' },
      response.choices[0].message,
      { role: 'tool', content: JSON.stringify(results), tool_call_id: toolCall.id },
    ],
  });
}
```
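Note that `tool_calls[].function.arguments` arrives as a JSON string, and models occasionally emit arguments that fail to parse or omit a required field, so the bare `JSON.parse` above is worth guarding in production code. A defensive sketch (`parseToolArgs` is an illustrative helper, not part of the OpenAI SDK):

```typescript
// Defensive parsing of a tool call's JSON argument string (illustrative
// helper, not part of the OpenAI SDK). Returns null on any invalid input.
function parseToolArgs<T extends Record<string, unknown>>(
  raw: string,
  required: (keyof T)[],
): T | null {
  try {
    const parsed = JSON.parse(raw);
    if (typeof parsed !== 'object' || parsed === null) return null;
    // Reject if any required field is missing
    for (const key of required) {
      if (!(key in parsed)) return null;
    }
    return parsed as T;
  } catch {
    return null; // malformed JSON from the model
  }
}
```

On a `null` result the usual recovery is to send the model an error message as the tool response and let it retry, rather than crashing the request handler.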
## LangChain.js (Complex Pipelines)
```ts
// LangChain.js — RAG (Retrieval-Augmented Generation) pipeline
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import { RecursiveCharacterTextSplitter } from '@langchain/textsplitters';
import { createRetrievalChain } from 'langchain/chains/retrieval';
import { createStuffDocumentsChain } from 'langchain/chains/combine_documents';
import { ChatPromptTemplate } from '@langchain/core/prompts';

const model = new ChatOpenAI({ model: 'gpt-4o' });
const embeddings = new OpenAIEmbeddings();

// 1. Split documents (yourDocumentText is the raw source text)
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const docs = await splitter.createDocuments([yourDocumentText]);

// 2. Create vector store
const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);

// 3. Create retrieval chain
const prompt = ChatPromptTemplate.fromTemplate(`
Answer based on the context:
Context: {context}
Question: {input}
`);
const documentChain = await createStuffDocumentsChain({ llm: model, prompt });
const retrievalChain = await createRetrievalChain({
  combineDocsChain: documentChain,
  retriever: vectorStore.asRetriever(),
});

// 4. Query
const result = await retrievalChain.invoke({
  input: 'What are the key features?',
});
console.log(result.answer);
```
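The `chunkSize`/`chunkOverlap` pair controls how much context each chunk carries into retrieval: overlap ensures a sentence that straddles a boundary appears intact in at least one chunk. A simplified fixed-window splitter showing the overlap mechanics (LangChain's `RecursiveCharacterTextSplitter` additionally prefers paragraph and sentence boundaries; `splitText` here is an illustrative helper):

```typescript
// Fixed-window splitter illustrating chunkSize/chunkOverlap mechanics.
// A simplification: the real RecursiveCharacterTextSplitter also tries to
// break on paragraph and sentence boundaries before falling back to this.
function splitText(text: string, chunkSize: number, chunkOverlap: number): string[] {
  if (chunkOverlap >= chunkSize) throw new Error('overlap must be smaller than chunk size');
  const chunks: string[] = [];
  const step = chunkSize - chunkOverlap; // how far each window advances
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // final chunk reached the end
  }
  return chunks;
}
```

With the article's settings (`chunkSize: 1000`, `chunkOverlap: 200`), consecutive chunks share their last/first 200 characters, trading some index size for retrieval robustness at chunk edges.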
## When to Choose
| Scenario | Pick |
|---|---|
| React/Next.js chat UI with streaming | Vercel AI SDK |
| Multi-provider switching (OpenAI → Anthropic) | Vercel AI SDK |
| Direct OpenAI API with full control | OpenAI SDK |
| RAG pipeline with vector stores | LangChain.js |
| Complex agent workflows (multi-step) | LangChain.js |
| Type-safe structured output | Vercel AI SDK (generateObject) |
| Edge runtime compatible | Vercel AI SDK |
Compare AI library package health on PkgPulse.
View ai sdk vs. langchain on PkgPulse →