
Vercel AI SDK vs OpenAI SDK vs Anthropic SDK: Which to Use in 2026

PkgPulse Team


TL;DR

When building AI-powered apps, your SDK choice comes down to how much abstraction you want. The OpenAI SDK is the direct path to OpenAI models — minimal abstraction, maximum control, with support for gpt-4o, o3, and the full OpenAI feature set. The Anthropic SDK is the equivalent for Claude — direct access with full support for extended thinking, prompt caching, and the computer use tools. The Vercel AI SDK is the provider-agnostic framework — it wraps OpenAI, Anthropic, Google, Mistral, and 20+ others in a unified API, with first-class React hooks (useChat, useCompletion) for streaming UIs. Use a provider SDK when you're committed to one provider and want the latest features first. Use the Vercel AI SDK when you need multi-provider flexibility or a React streaming UI without boilerplate.

Key Takeaways

  • Vercel AI SDK supports 20+ providers — switch models with one line of config change
  • OpenAI SDK npm downloads: 10M+/week — the most-used AI SDK in the JavaScript ecosystem
  • Anthropic SDK added streaming in v0.9 — now at feature parity with the OpenAI SDK for streaming
  • Vercel AI SDK useChat hook handles streaming, tool calls, and message history state
  • OpenAI SDK has Zod-based structured outputs via zodResponseFormat for type-safe JSON
  • Anthropic's prompt caching reduces costs 90% for repeated context — only in the Anthropic SDK
  • Vercel AI SDK generateObject works across providers — no provider-specific structured output config

The AI SDK Landscape

Each SDK serves a different layer:

Use Case                         → Recommended SDK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
OpenAI-only, fine-grained control → openai (official SDK)
Anthropic-only, all Claude features → @anthropic-ai/sdk
Multi-provider / React streaming UI → ai (Vercel AI SDK)
OpenAI-compatible APIs (local models)→ openai with baseURL override
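
The last row is worth a quick illustration: many local runtimes (Ollama, LM Studio, vLLM) expose OpenAI-compatible endpoints, so the official openai package can target them by overriding baseURL. The endpoint below assumes Ollama's default port — an assumption, not a guarantee for your setup; everything else about how you use the client stays identical to the hosted examples later in this article.

```typescript
// Sketch: the only change needed to target a local, OpenAI-compatible server
// is the client constructor's options. Endpoint and model names here are
// illustrative (Ollama defaults); adjust for your runtime.
interface ClientOptions {
  baseURL?: string;
  apiKey: string;
}

const hosted: ClientOptions = {
  apiKey: process.env.OPENAI_API_KEY ?? "",
};

const local: ClientOptions = {
  baseURL: "http://localhost:11434/v1", // Ollama's OpenAI-compatible endpoint
  apiKey: "ollama", // local servers typically ignore the key, but the SDK requires a value
};

// `new OpenAI(local)` would then route every chat.completions call to the
// local server instead of api.openai.com.
console.log(local.baseURL);
```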

OpenAI SDK: Direct Access to GPT-4 and o3

The official OpenAI Node.js SDK. Ships first with new OpenAI features — structured outputs, assistants, batch API, image generation, speech.

Installation

npm install openai

Chat Completions

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Basic completion
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is the capital of France?" },
  ],
});

console.log(response.choices[0].message.content);
// "The capital of France is Paris."

Streaming

// Streaming response — in openai-node v4 the streaming helper lives under
// the beta namespace
const stream = client.beta.chat.completions.stream({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Tell me a story about a robot." }],
});

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content ?? "";
  process.stdout.write(delta);
}

// finalChatCompletion() returns the assembled ChatCompletion, which carries
// usage; finalMessage() returns only the message, without usage
const completion = await stream.finalChatCompletion();
console.log("\nTokens used:", completion.usage?.total_tokens);

Structured Outputs with Zod

import OpenAI from "openai";
import { zodResponseFormat } from "openai/helpers/zod";
import { z } from "zod";

const client = new OpenAI();

const RecipeSchema = z.object({
  name: z.string(),
  ingredients: z.array(z.object({
    item: z.string(),
    amount: z.string(),
    unit: z.string(),
  })),
  steps: z.array(z.string()),
  prepTimeMinutes: z.number(),
  servings: z.number(),
});

const result = await client.beta.chat.completions.parse({
  model: "gpt-4o",
  messages: [
    { role: "user", content: "Give me a recipe for chocolate chip cookies." },
  ],
  response_format: zodResponseFormat(RecipeSchema, "recipe"),
});

const recipe = result.choices[0].message.parsed;
// recipe is fully typed as z.infer<typeof RecipeSchema>
console.log(recipe?.name);           // "Classic Chocolate Chip Cookies"
console.log(recipe?.prepTimeMinutes); // 20

Tool Calling

const tools: OpenAI.Chat.Completions.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Get the current weather for a location",
      parameters: {
        type: "object",
        properties: {
          location: { type: "string", description: "City name" },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
    },
  },
];

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
  tools,
  tool_choice: "auto",
});

const toolCall = response.choices[0].message.tool_calls?.[0];
if (toolCall?.function.name === "get_weather") {
  const args = JSON.parse(toolCall.function.arguments);
  const weather = await fetchWeather(args.location, args.unit);

  // Continue conversation with tool result
  const followUp = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "user", content: "What's the weather in Tokyo?" },
      response.choices[0].message,
      { role: "tool", tool_call_id: toolCall.id, content: JSON.stringify(weather) },
    ],
  });

  console.log(followUp.choices[0].message.content);
}
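
The follow-up call above covers a single tool invocation, but the model may chain several calls, so real apps loop: call the model, execute any requested tools, append the results, and repeat until a response arrives with no tool_calls. A minimal sketch of that loop — the message shapes mirror the OpenAI chat API, but the ChatFn stub and toolImpls registry are illustrative stand-ins for client.chat.completions.create and your own tool implementations:

```typescript
// Illustrative message/response shapes modeled on the OpenAI chat API.
type ToolCall = { id: string; function: { name: string; arguments: string } };
type Msg =
  | { role: "system" | "user" | "assistant"; content: string; tool_calls?: ToolCall[] }
  | { role: "tool"; tool_call_id: string; content: string };

type ChatFn = (messages: Msg[]) => Promise<{ content: string; tool_calls?: ToolCall[] }>;

// Tool registry: name → implementation (stubbed here).
const toolImpls: Record<string, (args: any) => Promise<unknown>> = {
  get_weather: async ({ location }) => ({ location, temperature: 22 }),
};

async function runToolLoop(chat: ChatFn, messages: Msg[], maxSteps = 5): Promise<string> {
  for (let step = 0; step < maxSteps; step++) {
    const reply = await chat(messages);
    if (!reply.tool_calls?.length) return reply.content; // model is done
    messages.push({ role: "assistant", content: reply.content, tool_calls: reply.tool_calls });
    for (const call of reply.tool_calls) {
      const impl = toolImpls[call.function.name];
      if (!impl) throw new Error(`Unknown tool: ${call.function.name}`);
      const result = await impl(JSON.parse(call.function.arguments));
      messages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(result) });
    }
  }
  throw new Error("maxSteps exceeded");
}

// Stubbed model: first turn requests a tool, second turn answers.
let turn = 0;
const stubChat: ChatFn = async () =>
  turn++ === 0
    ? { content: "", tool_calls: [{ id: "c1", function: { name: "get_weather", arguments: '{"location":"Tokyo"}' } }] }
    : { content: "It is 22°C in Tokyo." };

const answer = await runToolLoop(stubChat, [{ role: "user", content: "Weather in Tokyo?" }]);
console.log(answer); // "It is 22°C in Tokyo."
```

This is essentially what the Vercel AI SDK's maxSteps option (shown later) automates for you.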

Anthropic SDK: Claude with Extended Thinking

The official Anthropic SDK for Claude models. First access to extended thinking, prompt caching, and computer use tools.

Installation

npm install @anthropic-ai/sdk

Basic Chat

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

const message = await client.messages.create({
  model: "claude-opus-4-5",
  max_tokens: 1024,
  messages: [
    { role: "user", content: "Explain quantum entanglement in simple terms." },
  ],
});

console.log(message.content[0].text);

Streaming

const stream = await client.messages.stream({
  model: "claude-opus-4-5",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Write a haiku about TypeScript." }],
});

for await (const event of stream) {
  if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
    process.stdout.write(event.delta.text);
  }
}

const finalMessage = await stream.finalMessage();
console.log("\nInput tokens:", finalMessage.usage.input_tokens);
console.log("Output tokens:", finalMessage.usage.output_tokens);

Extended Thinking (Claude's reasoning mode)

// Extended thinking — Claude reasons step-by-step before answering
const response = await client.messages.create({
  model: "claude-opus-4-5",
  max_tokens: 16000,
  thinking: {
    type: "enabled",
    budget_tokens: 10000,  // Max tokens for internal reasoning
  },
  messages: [{
    role: "user",
    content: "Solve this math problem step by step: A bat and ball cost $1.10. The bat costs $1.00 more than the ball. How much does the ball cost?",
  }],
});

for (const block of response.content) {
  if (block.type === "thinking") {
    console.log("Claude's reasoning:", block.thinking);
  } else if (block.type === "text") {
    console.log("Answer:", block.text);
  }
}
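
Separating the two block types is a pattern worth factoring out if you render reasoning and answers differently. A small helper — the content-block shapes are modeled on the Anthropic response format, and the sample data is illustrative:

```typescript
// Content-block shapes modeled on the Anthropic Messages response.
type ContentBlock =
  | { type: "thinking"; thinking: string }
  | { type: "text"; text: string };

// Collect reasoning and answer text from a mixed block list.
function splitThinking(blocks: ContentBlock[]): { reasoning: string; answer: string } {
  const reasoning = blocks
    .filter((b): b is Extract<ContentBlock, { type: "thinking" }> => b.type === "thinking")
    .map((b) => b.thinking)
    .join("\n");
  const answer = blocks
    .filter((b): b is Extract<ContentBlock, { type: "text" }> => b.type === "text")
    .map((b) => b.text)
    .join("");
  return { reasoning, answer };
}

// Illustrative blocks (the classic answer: the ball costs $0.05).
const { reasoning, answer } = splitThinking([
  { type: "thinking", thinking: "Let ball = x; bat = x + 1.00; 2x + 1.00 = 1.10; x = 0.05" },
  { type: "text", text: "The ball costs $0.05." },
]);
console.log(answer); // "The ball costs $0.05."
```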

Prompt Caching (90% Cost Reduction for Repeated Context)

// Prompt caching — cache large system prompts or documents
// Reduces cost 90% and latency 85% for cache hits
const response = await client.messages.create({
  model: "claude-opus-4-5",
  max_tokens: 1024,
  system: [
    {
      type: "text",
      text: largeLegalDocument,  // e.g., 50,000 tokens of legal text
      cache_control: { type: "ephemeral" },  // Cache this block
    },
  ],
  messages: [{
    role: "user",
    content: "What are the liability clauses in section 4?",
  }],
});

// Second call with same system prompt hits cache
const response2 = await client.messages.create({
  model: "claude-opus-4-5",
  max_tokens: 1024,
  system: [
    {
      type: "text",
      text: largeLegalDocument,  // Same content → cache hit
      cache_control: { type: "ephemeral" },
    },
  ],
  messages: [{
    role: "user",
    content: "Summarize the indemnification terms.",
  }],
});

// response2.usage.cache_read_input_tokens > 0 when cache hit
console.log("Cache hit tokens:", response2.usage.cache_read_input_tokens);
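
To see why this matters, here is a back-of-envelope cost sketch. The per-million-token price below is a placeholder, not current Anthropic pricing; the multipliers reflect the documented pricing structure (cache writes cost about 25% more than a normal input token, cache reads about 90% less):

```typescript
// Back-of-envelope cache economics. The price is an illustrative placeholder;
// the 1.25x write / 0.1x read multipliers follow the documented structure.
function cachedCost(
  contextTokens: number,
  calls: number,
  pricePerMTok: number, // base input price per million tokens (placeholder)
): { withCache: number; withoutCache: number } {
  const base = (contextTokens / 1_000_000) * pricePerMTok;
  const withoutCache = base * calls;                        // pay full price every call
  const withCache = base * 1.25 + base * 0.1 * (calls - 1); // 1 cache write + (n-1) hits
  return { withCache, withoutCache };
}

// A 50,000-token document queried 20 times at a placeholder $10/MTok:
const { withCache, withoutCache } = cachedCost(50_000, 20, 10);
// Cached path costs roughly 16% of the uncached path in this scenario.
console.log(`Without cache: $${withoutCache}, with cache: $${withCache}`);
```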

Vercel AI SDK: Multi-Provider with React Streaming

The Vercel AI SDK is a TypeScript framework for building AI-powered applications with streaming, tool calling, and React hooks — across 20+ providers.

Installation

npm install ai
# Plus the provider you want
npm install @ai-sdk/openai    # OpenAI
npm install @ai-sdk/anthropic  # Anthropic
npm install @ai-sdk/google     # Google Gemini

Generate Text (Any Provider)

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { google } from "@ai-sdk/google";

// Swap providers by changing one line
const { text } = await generateText({
  model: openai("gpt-4o"),           // OpenAI
  // model: anthropic("claude-opus-4-5"),  // Anthropic
  // model: google("gemini-2.0-flash"),    // Google
  prompt: "What is the capital of France?",
});

console.log(text);
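
Because a model is just a value, provider selection can be ordinary application code — routed by a config flag, an environment variable, or a quality tier. A sketch of the idea (the tier names are assumptions; in a real app the resolved pair would feed the openai()/anthropic()/google() factories from the @ai-sdk/* packages):

```typescript
// Sketch: resolve a quality tier to a provider + model id. The tiers are
// illustrative; the model ids mirror the ones used elsewhere in this article.
type Route = { provider: "openai" | "anthropic" | "google"; model: string };

function resolveModel(tier: "fast" | "smart"): Route {
  return tier === "fast"
    ? { provider: "google", model: "gemini-2.0-flash" }   // cheap, low latency
    : { provider: "anthropic", model: "claude-opus-4-5" }; // higher quality
}

console.log(resolveModel("fast").model); // "gemini-2.0-flash"
```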

Streaming Text

import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = streamText({
  model: openai("gpt-4o"),
  prompt: "Write a poem about TypeScript.",
  onFinish({ text, usage, finishReason }) {
    console.log("Finish reason:", finishReason);
    console.log("Tokens used:", usage);
  },
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

Generate Object (Type-Safe Structured Output)

import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const { object } = await generateObject({
  model: openai("gpt-4o"),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.object({
        name: z.string(),
        amount: z.string(),
      })),
      steps: z.array(z.string()),
      calories: z.number(),
    }),
  }),
  prompt: "Generate a healthy breakfast recipe.",
});

console.log(object.recipe.name);  // Fully typed

React Streaming UI with useChat

// app/chat/page.tsx (Next.js App Router)
"use client";

import { useChat } from "ai/react";

export default function ChatPage() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat({
      api: "/api/chat",
      initialMessages: [
        { id: "1", role: "assistant", content: "Hi! How can I help you today?" },
      ],
    });

  return (
    <div className="flex flex-col h-screen">
      <div className="flex-1 overflow-y-auto p-4 space-y-4">
        {messages.map((message) => (
          <div
            key={message.id}
            className={`flex ${message.role === "user" ? "justify-end" : "justify-start"}`}
          >
            <div
              className={`rounded-lg px-4 py-2 max-w-xs ${
                message.role === "user"
                  ? "bg-blue-500 text-white"
                  : "bg-gray-100 text-gray-900"
              }`}
            >
              {message.content}
            </div>
          </div>
        ))}
        {isLoading && (
          <div className="flex justify-start">
            <div className="bg-gray-100 rounded-lg px-4 py-2">
              <span className="animate-pulse">Thinking...</span>
            </div>
          </div>
        )}
      </div>
      <form onSubmit={handleSubmit} className="p-4 border-t flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Type a message..."
          className="flex-1 border rounded px-3 py-2"
        />
        <button type="submit" className="bg-blue-500 text-white px-4 py-2 rounded">
          Send
        </button>
      </form>
    </div>
  );
}
// app/api/chat/route.ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(request: Request) {
  const { messages } = await request.json();

  const result = streamText({
    model: openai("gpt-4o"),
    system: "You are a helpful assistant.",
    messages,
  });

  return result.toDataStreamResponse();
}

Tool Calling (Cross-Provider)

import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const result = await generateText({
  model: openai("gpt-4o"),
  tools: {
    getWeather: tool({
      description: "Get the current weather for a city",
      parameters: z.object({
        city: z.string().describe("The city name"),
        unit: z.enum(["celsius", "fahrenheit"]).default("celsius"),
      }),
      execute: async ({ city, unit }) => {
        // Your actual weather API call here
        return { temperature: 22, condition: "sunny", unit };
      },
    }),
    searchWeb: tool({
      description: "Search the web for information",
      parameters: z.object({
        query: z.string().describe("Search query"),
      }),
      execute: async ({ query }) => {
        // Your search implementation here
        return { results: [] };
      },
    }),
  },
  maxSteps: 5,  // Allow up to 5 tool-call + response cycles
  prompt: "What's the weather in Tokyo and Paris?",
});

// result.steps contains the full tool call / response chain
console.log(result.text);

Feature Comparison

Feature              OpenAI SDK             Anthropic SDK    Vercel AI SDK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Provider support     OpenAI only            Anthropic only   20+ providers
React hooks          ❌                     ❌               ✅ useChat, useCompletion
Streaming            ✅                     ✅               ✅
Structured outputs   ✅ zodResponseFormat   ❌ (manual)      ✅ generateObject
Extended thinking    ❌ (o3 reasoning)      ✅               Via Anthropic provider
Prompt caching       ❌                     ✅               ❌ (pass-through)
Tool calling         ✅                     ✅               ✅ Cross-provider
Embeddings           ✅                     ❌               ✅ (via providers)
Image generation     ✅ DALL-E              ❌               Partial
Edge/Serverless      ✅                     ✅               ✅
Multi-step agents    Assistants API         ❌               ✅ maxSteps
npm downloads/week   10M+                   500k+            800k+
TypeScript           ✅                     ✅               ✅

When to Use Each

Choose OpenAI SDK if:

  • You're exclusively using OpenAI models and want the latest features immediately
  • Assistants API, batch processing, or DALL-E image generation are needed
  • You want the largest community and ecosystem of examples
  • Provider lock-in is acceptable (you're not planning to switch)

Choose Anthropic SDK if:

  • Claude is your primary or only model and you need every feature (extended thinking, prompt caching, computer use)
  • You're building document processing pipelines where prompt caching saves significant cost
  • You need the latest Claude capabilities before they're wrapped by third-party SDKs

Choose Vercel AI SDK if:

  • You're building React/Next.js apps and want useChat streaming UI without boilerplate
  • Provider flexibility matters — you might switch or use multiple providers
  • generateObject with Zod schemas is cleaner than provider-specific structured output APIs
  • You want a unified API for tool calling that works across OpenAI, Anthropic, and Google

Methodology

Data sourced from npm download statistics (npmjs.com, January 2026), GitHub repositories (star counts as of February 2026), official documentation for all three SDKs, and developer surveys on X/Twitter and the TypeScript Discord. Feature comparisons verified against the latest SDK documentation (Vercel AI SDK v4, openai-node v4, @anthropic-ai/sdk v0.37+).


Related: Mastra vs LangChain.js vs GenKit for AI agent frameworks that build on top of these SDKs, or Langfuse vs LangSmith vs Helicone for observability and tracing.
