Vercel AI SDK 5 Migration Guide 2026
TL;DR
Vercel AI SDK 5 (released July 2025) is a major architectural overhaul. The biggest changes: UIMessage and ModelMessage are now separate types, streaming uses SSE natively (no custom protocol), tools use inputSchema/outputSchema instead of parameters/result, and a new Agent class wraps generateText for agentic loops. Most codebases require 2–4 hours of migration. Automated codemods handle the easy parts.
Key Takeaways
- Released July 31, 2025 — v5 is a stable, production release; v6 followed in late 2025 with further additions
- Two message types: `UIMessage` (client state) vs `ModelMessage` (what goes to the LLM) — conversion is now explicit
- SSE-first streaming replaces the custom streaming protocol — simpler to debug, native browser support
- New tool API: `inputSchema` + `outputSchema` instead of `parameters` + `result`
- Agent class: lightweight wrapper around `generateText` with `stopWhen` and `prepareStep` for agentic loop control
- Framework parity: Vue, Svelte, and Angular now have the same hooks as React (`useChat`, `useCompletion`)
- Codemod available: run `npx @ai-sdk/codemod@latest migrate` to automate most changes
Why v4 → v5 Is a Breaking Change
Vercel AI SDK v3 and v4 used a custom streaming protocol (StreamingTextResponse, experimental_StreamData) that worked around browser SSE limitations. By 2025, those limitations were gone — all major environments support SSE natively.
v5 rips out the custom protocol entirely and replaces it with standard SSE. This is architecturally cleaner but breaks existing streaming implementations.
The useChat hook's message type also changed significantly. In v4, messages had a single content: string | ContentPart[] shape. In v5, there's a distinction between UIMessage (what your React component stores and renders) and ModelMessage (what gets sent to the LLM). This enables better streaming UI with more complex content types.
Installation
npm install ai@5
The core package is still just ai. Provider packages stay the same:
npm install @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google
Migration by Feature
1. Message Types: UIMessage vs ModelMessage
v4:
import { Message } from 'ai';
const [messages, setMessages] = useState<Message[]>([]);
// Message had: id, role, content (string | ContentPart[])
v5:
import { UIMessage } from 'ai';
const [messages, setMessages] = useState<UIMessage[]>([]);
// UIMessage has: id, role, parts (array of content parts)
// ModelMessage is what you send to the LLM (different shape)
Converting between types:
import { convertToModelMessages } from 'ai';
// When calling generateText/streamText from a server action:
const modelMessages = convertToModelMessages(uiMessages);
const result = await streamText({
model: openai('gpt-4o'),
messages: modelMessages,
});
This explicit conversion replaces v4's implicit handling. It adds a line of code but makes it clear where the boundary between client state and LLM input is.
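To make that boundary concrete, here is a dependency-free sketch of what the conversion does conceptually. The types below are heavily simplified stand-ins for the SDK's real `UIMessage` and `ModelMessage` (whose parts union also covers tools, files, and reasoning), and `toModelMessages` is a toy version, not the SDK implementation:

```typescript
// Simplified stand-ins for the SDK types (assumption: the real
// parts union is much larger than just text).
type SimpleUIMessage = {
  id: string; // UI-only field, dropped at the model boundary
  role: 'user' | 'assistant';
  parts: { type: 'text'; text: string }[];
};

type SimpleModelMessage = {
  role: 'user' | 'assistant';
  content: string;
};

// Toy version of convertToModelMessages: strip UI-only state and
// flatten text parts into the content string the LLM consumes.
function toModelMessages(ui: SimpleUIMessage[]): SimpleModelMessage[] {
  return ui.map(m => ({
    role: m.role,
    content: m.parts
      .filter(p => p.type === 'text')
      .map(p => p.text)
      .join(''),
  }));
}

console.log(
  toModelMessages([
    { id: 'u1', role: 'user', parts: [{ type: 'text', text: 'Hi' }] },
  ])
);
// [ { role: 'user', content: 'Hi' } ]
```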
2. Streaming: SSE Replaces Custom Protocol
v4 (route.ts):
import { StreamingTextResponse, streamText } from 'ai';
export async function POST(req: Request) {
const { messages } = await req.json();
const result = await streamText({
model: openai('gpt-4o'),
messages,
});
return new StreamingTextResponse(result.textStream);
}
v5 (route.ts):
import { streamText, convertToModelMessages } from 'ai';
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages: convertToModelMessages(messages),
});
// toUIMessageStreamResponse() streams UI message chunks over standard SSE
return result.toUIMessageStreamResponse();
}
Key difference: StreamingTextResponse is gone. Use result.toUIMessageStreamResponse() instead.
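Because the wire format is now plain SSE, you can inspect or parse it with nothing but string handling. A minimal sketch of extracting `data:` payloads from one event chunk (the SDK's client transport does this for you; the event JSON shown is illustrative, not the exact v5 chunk shape):

```typescript
// SSE events are newline-delimited; data lines start with "data: "
// and events are separated by a blank line.
function parseSSEChunk(chunk: string): string[] {
  return chunk
    .split('\n')
    .filter(line => line.startsWith('data: '))
    .map(line => line.slice('data: '.length));
}

const payloads = parseSSEChunk(
  'data: {"type":"text-delta","delta":"Hel"}\n\n' +
  'data: {"type":"text-delta","delta":"lo"}\n\n'
);
console.log(payloads.length); // 2
```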
v5 (useChat hook):
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';
export default function Chat() {
  // v5 removes input/handleInputChange/handleSubmit from useChat;
  // you manage input state yourself and call sendMessage
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat();
  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          <strong>{m.role}</strong>
          {/* In v5, render m.parts instead of m.content */}
          {m.parts.map((part, i) =>
            part.type === 'text' ? <p key={i}>{part.text}</p> : null
          )}
        </div>
      ))}
      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input value={input} onChange={e => setInput(e.target.value)} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
Note that the hook moved from ai/react to @ai-sdk/react, and the default transport posts to /api/chat (configure a custom endpoint via DefaultChatTransport).
The big UI change: render m.parts instead of m.content. Parts is an array of typed content objects ({ type: 'text', text: '...' }, { type: 'tool-call', ... }, etc.).
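Since each part is a member of a discriminated union, rendering other part types is a matter of narrowing on `type`. A sketch with a hypothetical two-member union (the SDK's real union has more members, and its tool parts carry richer fields than shown here):

```typescript
// Hypothetical, reduced part union to demonstrate the narrowing
// pattern used when rendering m.parts.
type Part =
  | { type: 'text'; text: string }
  | { type: 'tool-call'; toolName: string };

// TypeScript narrows `part` inside each case, so part.text and
// part.toolName are fully typed without casts.
function renderPart(part: Part): string {
  switch (part.type) {
    case 'text':
      return part.text;
    case 'tool-call':
      return `[calling ${part.toolName}...]`;
  }
}

console.log(renderPart({ type: 'tool-call', toolName: 'getWeather' }));
// [calling getWeather...]
```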
3. Tool Calling: New Schema API
v4:
import { tool } from 'ai';
import { z } from 'zod';
const getWeather = tool({
description: 'Get weather for a city',
parameters: z.object({
city: z.string(),
}),
execute: async ({ city }) => {
return { temp: 72, condition: 'sunny' };
},
});
v5:
import { tool } from 'ai';
import { z } from 'zod';
const getWeather = tool({
description: 'Get weather for a city',
inputSchema: z.object({ // was: parameters
city: z.string(),
}),
outputSchema: z.object({ // was: nothing (return type was inferred)
temp: z.number(),
condition: z.string(),
}),
execute: async ({ city }) => {
return { temp: 72, condition: 'sunny' };
},
});
parameters → inputSchema. The addition of explicit outputSchema enables better type safety and allows tools to be used in Arazzo-style workflows where downstream steps consume tool outputs.
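One concrete payoff of an explicit output schema: a downstream step can validate a tool's result before consuming it. A dependency-free sketch with hand-rolled checks standing in for zod validation (`parseWeatherOutput` is a hypothetical helper for illustration, not an SDK API):

```typescript
type WeatherOutput = { temp: number; condition: string };

// Hand-rolled runtime check; in the SDK, the zod outputSchema plays
// this role automatically.
function parseWeatherOutput(value: unknown): WeatherOutput {
  const v = value as Partial<WeatherOutput> | null;
  if (typeof v?.temp !== 'number' || typeof v?.condition !== 'string') {
    throw new Error('tool output failed validation');
  }
  return { temp: v.temp, condition: v.condition };
}

// A later workflow step can now rely on the validated shape:
const next = parseWeatherOutput({ temp: 72, condition: 'sunny' });
console.log(`${next.temp}F and ${next.condition}`); // 72F and sunny
```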
Dynamic tools (new in v5):
// For tools whose schema is only known at runtime, v5 provides the
// dynamicTool helper; execute receives its input typed as unknown
import { dynamicTool } from 'ai';
const endpointTool = dynamicTool({
  description: 'Call any endpoint',
  inputSchema: getSchemaFromDatabase(), // dynamic schema
  execute: async (input) => { /* ... */ },
});
4. The New Agent Class
v5 introduces a lightweight Agent class for agentic loops — the pattern where you repeatedly call an LLM until it stops using tools:
v4 (manual agentic loop):
let messages = initialMessages;
let shouldContinue = true;
while (shouldContinue) {
const result = await generateText({
model: openai('gpt-4o'),
messages,
tools: myTools,
});
messages = [...messages, ...result.responseMessages];
if (result.finishReason === 'stop' || result.toolCalls.length === 0) {
shouldContinue = false;
}
}
v5 (Agent class):
import { Experimental_Agent as Agent, stepCountIs } from 'ai';
const agent = new Agent({
  model: openai('gpt-4o'),
  tools: myTools,
  // stopWhen accepts built-in conditions (stepCountIs) or custom
  // predicates; by default the loop stops when no tool is called
  stopWhen: stepCountIs(10),
  prepareStep: ({ stepNumber, messages }) => {
    // Adjust model, tools, or messages between steps
    return {};
  },
});
const result = await agent.generate({ messages: initialMessages });
The Agent class isn't magic — it's the same loop pattern, just packaged. The benefit is that stopWhen and prepareStep make the control flow explicit and testable.
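To see that it really is the same loop packaged, here is a dependency-free sketch of the control flow with `stopWhen` injected as a predicate. Names mirror the SDK's options, but the step shape is simplified and `callModel` is a stand-in for the LLM call, so treat this as a model of the pattern rather than the SDK's code:

```typescript
type Step = { stepNumber: number; toolCalls: string[] };

// Minimal agentic loop: call the model, record the step, and let a
// stopWhen predicate decide whether to continue (capped at maxSteps).
function runLoop(
  callModel: (stepNumber: number) => string[], // returns tool calls
  stopWhen: (step: Step) => boolean,
  maxSteps = 10,
): Step[] {
  const steps: Step[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const step = { stepNumber: i, toolCalls: callModel(i) };
    steps.push(step);
    if (stopWhen(step)) break;
  }
  return steps;
}

// Fake model: one tool call on step 0, then a plain answer.
const steps = runLoop(
  n => (n === 0 ? ['getWeather'] : []),
  step => step.toolCalls.length === 0,
);
console.log(steps.length); // 2
```

Because `stopWhen` is just a function, it can be unit-tested in isolation from any model call, which is the testability win the text describes.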
5. Provider Registry (New)
v5 adds a global provider registry so models can be referenced by string:
import { createProviderRegistry } from 'ai';
// Provider factories live in the provider packages, not in 'ai'
import { createOpenAI } from '@ai-sdk/openai';
import { createAnthropic } from '@ai-sdk/anthropic';
const registry = createProviderRegistry({
  openai: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  anthropic: createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY }),
});
// Reference models by string anywhere in your app
const result = await generateText({
model: registry.languageModel('openai:gpt-4o'), // default separator is ':'
prompt: 'Hello',
});
This is especially useful for apps that let users choose their LLM provider at runtime.
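The registry resolves these strings itself, but it helps to see the id format a model picker would surface to users. A small hypothetical helper (not an SDK function) for splitting such ids, assuming the registry's default `:` separator:

```typescript
// Split a registry-style "provider:model" id into its two halves.
function parseModelId(id: string): { provider: string; model: string } {
  const idx = id.indexOf(':');
  if (idx === -1) throw new Error(`invalid model id: ${id}`);
  return { provider: id.slice(0, idx), model: id.slice(idx + 1) };
}

console.log(parseModelId('openai:gpt-4o'));
// { provider: 'openai', model: 'gpt-4o' }
```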
Using the Codemod
Vercel provides an official codemod for mechanical changes:
npx @ai-sdk/codemod@latest migrate
The codemod handles:
- `StreamingTextResponse` → `result.toUIMessageStreamResponse()`
- `parameters` → `inputSchema` in tool definitions
- Some import path updates
It doesn't handle:
- The `UIMessage`/`ModelMessage` split (requires understanding your data flow)
- Rendering `m.parts` vs `m.content` in UI components
- Custom streaming protocol implementations
Run the codemod first, then manually address the message type changes.
Migration Checklist
- Run `npx @ai-sdk/codemod@latest migrate`
- Update `messages` from `Message[]` to `UIMessage[]`
- Add `convertToModelMessages()` calls in server routes before passing to `streamText`/`generateText`
- Replace `StreamingTextResponse` with `result.toUIMessageStreamResponse()`
- Update tool definitions: `parameters` → `inputSchema`, add `outputSchema` if needed
- Update UI rendering: `m.content` → `m.parts.map(part => ...)`
- Replace manual agentic loops with `new Agent()` if applicable
- Test streaming in browser DevTools (Network tab → check for `text/event-stream` content type)
- Verify tool calls round-trip correctly end-to-end
v5 vs v6: What Changed Next
v6 (released late 2025) added:
- `ResponseFunctionToolCallOutputItem.output` can now return arrays of content, not just strings
- Realtime API call support
- Additional dev tools from Vercel Ship AI 2025
If you're migrating now, migrate to v5 first, then v6 is mostly additive.
Packages Impacted
| v5 Package | v4 Name | v5 Status |
|---|---|---|
| `ai` | `ai` | Same name, major version bump |
| `@ai-sdk/openai` | Same | Updated, compatible |
| `@ai-sdk/anthropic` | Same | Updated, compatible |
| `@ai-sdk/google` | Same | Updated, compatible |
| `@ai-sdk/react` | `ai/react` | Hook imports moved; `useChat` renders `parts` vs `content` |
| `@ai-sdk/svelte` | `ai/svelte` | Now at feature parity with React |
| `@ai-sdk/vue` | `ai/vue` | Now at feature parity with React |
Recommendations
Migrate now if:
- You're building a new project — start with v5, don't inherit v4 patterns
- Your app uses tool calling heavily — `inputSchema`/`outputSchema` is strictly better
- You want native SSE debugging in browser DevTools
Plan 2–4 hours if:
- You have a working v4 app — the codemod handles 60–70% of mechanical changes; message type updates are manual
- Your UI renders `message.content` in many places — each needs updating to `message.parts`
Methodology
- Sources: Vercel AI SDK v5 announcement (vercel.com/blog/ai-sdk-5, July 2025), official migration guides at ai-sdk.dev, Callstack technical breakdown, VoltAgent explainer
- Date: March 2026
Comparing AI SDK providers? See Vercel AI SDK vs OpenAI SDK vs Anthropic SDK 2026.
Building AI agents? See AI SDK vs LangChain JavaScript 2026 — when the AI SDK's Agent class is enough vs when you need full orchestration.
New to the npm AI ecosystem? See Best AI code generation APIs 2026 on PkgPulse's sister site APIScout.