
How to Add AI Features to Your App: OpenAI vs Anthropic SDK

PkgPulse Team

TL;DR

Both OpenAI and Anthropic SDKs are excellent — the choice is about model capability, pricing, and your specific use case. OpenAI (GPT-4o) leads on multimodal tasks and ecosystem integrations; Anthropic (Claude 3.5 Sonnet) leads on long context, instruction following, and safety constraints. Both support streaming, tool use, and structured output. Pick OpenAI for embeddings + broad ecosystem; pick Anthropic for long documents and nuanced instruction following.

Key Takeaways

  • OpenAI SDK: 12M weekly downloads, broadest ecosystem, GPT-4o for multimodal
  • Anthropic SDK: 2M weekly downloads, 200K context window, Claude 3.5 Sonnet leads reasoning benchmarks
  • Streaming — both support real-time token output (stream: true in OpenAI, messages.stream() in Anthropic)
  • Tool use — both support function calling; syntax differs but concepts are identical
  • Cost tip — use GPT-4o-mini or Claude Haiku for high-volume simple tasks (10-50x cheaper)

Setup

# OpenAI
npm install openai

# Anthropic
npm install @anthropic-ai/sdk

# Both: store API key in environment
# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
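Both SDKs read keys from the environment, but a missing key only surfaces as an auth error on the first request. A small fail-fast guard at startup makes the failure obvious; assertEnv here is a hypothetical helper, not part of either SDK:

```typescript
// Hypothetical helper: throw at startup if a required key is missing,
// instead of failing later with a 401 from the API.
function assertEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

Call it before constructing a client, e.g. `new OpenAI({ apiKey: assertEnv('OPENAI_API_KEY') })`.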

Basic Text Generation

// OpenAI
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Summarize this article in 3 bullets.' },
  ],
  max_tokens: 500,
  temperature: 0.7,
});

console.log(response.choices[0].message.content);
// Token usage
console.log(response.usage); // { prompt_tokens, completion_tokens, total_tokens }

// Anthropic
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

const response = await anthropic.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 500,
  system: 'You are a helpful assistant.',
  messages: [
    { role: 'user', content: 'Summarize this article in 3 bullets.' },
  ],
});

console.log(response.content[0].text);
// Token usage
console.log(response.usage); // { input_tokens, output_tokens }
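Both SDKs retry transient failures automatically (each client constructor accepts a maxRetries option), but for rate limits you may still want your own backoff around a call. A minimal sketch, with withBackoff as a hypothetical helper rather than an SDK feature:

```typescript
// Hypothetical helper: retry an async call with exponential backoff.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Exponential backoff: 500ms, 1000ms, 2000ms, ...
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** attempt),
        );
      }
    }
  }
  throw lastError;
}
```

Usage: `await withBackoff(() => openai.chat.completions.create({ ... }))` works identically with `anthropic.messages.create`.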

Streaming Responses

// OpenAI streaming
const stream = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a haiku.' }],
  stream: true,
});

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content;
  if (delta) process.stdout.write(delta);
}

// Anthropic streaming
const stream = anthropic.messages.stream({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Write a haiku.' }],
});

stream.on('text', (text) => {
  process.stdout.write(text);
});

const finalMessage = await stream.finalMessage();
console.log('\nTokens used:', finalMessage.usage);

// Both: streaming in Next.js API routes (App Router)
// app/api/chat/route.ts
import { OpenAIStream, StreamingTextResponse } from 'ai';  // Vercel AI SDK
import OpenAI from 'openai';

const openai = new OpenAI();

export async function POST(req: Request) {
  const { messages } = await req.json();

  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    stream: true,
    messages,
  });

  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
// Vercel AI SDK works with both OpenAI and Anthropic
// npm install ai — unified streaming interface

Tool Use (Function Calling)

// OpenAI tool use
const tools: OpenAI.Tool[] = [
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get current weather for a city',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string', description: 'City name' },
          unit: { type: 'string', enum: ['celsius', 'fahrenheit'] },
        },
        required: ['city'],
      },
    },
  },
];

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What\'s the weather in Tokyo?' }],
  tools,
  tool_choice: 'auto',
});

const toolCall = response.choices[0].message.tool_calls?.[0];
if (toolCall) {
  const args = JSON.parse(toolCall.function.arguments);
  // args = { city: "Tokyo" }
  const result = await getWeather(args.city);

  // Continue conversation with tool result
  const followUp = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'user', content: 'What\'s the weather in Tokyo?' },
      response.choices[0].message,
      {
        role: 'tool',
        tool_call_id: toolCall.id,
        content: JSON.stringify(result),
      },
    ],
    tools,
  });
}

// Anthropic tool use (same concept, different syntax)
const tools: Anthropic.Tool[] = [
  {
    name: 'get_weather',
    description: 'Get current weather for a city',
    input_schema: {
      type: 'object',
      properties: {
        city: { type: 'string', description: 'City name' },
      },
      required: ['city'],
    },
  },
];

const response = await anthropic.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  tools,
  messages: [{ role: 'user', content: 'What\'s the weather in Tokyo?' }],
});

const toolUse = response.content.find(block => block.type === 'tool_use');
if (toolUse && toolUse.type === 'tool_use') {
  const args = toolUse.input as { city: string };
  const result = await getWeather(args.city);

  // Continue with tool result
  const followUp = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    tools,
    messages: [
      { role: 'user', content: 'What\'s the weather in Tokyo?' },
      { role: 'assistant', content: response.content },
      {
        role: 'user',
        content: [{
          type: 'tool_result',
          tool_use_id: toolUse.id,
          content: JSON.stringify(result),
        }],
      },
    ],
  });
}
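Whichever provider you use, the local side of tool use is the same: route a tool name plus parsed arguments to a handler and return the result as a string. A minimal provider-agnostic dispatcher; the handler map and tool names below are illustrative:

```typescript
// A tool handler takes parsed arguments and returns a JSON-serializable result.
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

// Route a tool call to a local handler; both APIs expect the result as a string.
async function dispatchTool(
  name: string,
  args: Record<string, unknown>,
  handlers: Record<string, ToolHandler>,
): Promise<string> {
  const handler = handlers[name];
  if (!handler) {
    // Return an error payload the model can read, rather than throwing.
    return JSON.stringify({ error: `Unknown tool: ${name}` });
  }
  const result = await handler(args);
  return JSON.stringify(result);
}
```

Feed the returned string into the role: 'tool' message (OpenAI) or the tool_result content block (Anthropic) shown above.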

Structured Output

// OpenAI: structured output (JSON mode)
import { z } from 'zod';
import { zodResponseFormat } from 'openai/helpers/zod';

const ProductSchema = z.object({
  name: z.string(),
  price: z.number(),
  inStock: z.boolean(),
  tags: z.array(z.string()),
});

const response = await openai.beta.chat.completions.parse({
  model: 'gpt-4o-2024-08-06',  // Must support structured output
  messages: [
    { role: 'user', content: 'Extract product info: Blue Widget, $29.99, available' },
  ],
  response_format: zodResponseFormat(ProductSchema, 'product'),
});

const product = response.choices[0].message.parsed;
// product is typed as z.infer<typeof ProductSchema>

// Anthropic: JSON extraction via prompt
const response = await anthropic.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  messages: [{
    role: 'user',
    content: `Extract product info as JSON: Blue Widget, $29.99, available

    Return ONLY valid JSON matching:
    { "name": string, "price": number, "inStock": boolean, "tags": string[] }`,
  }],
});

const json = JSON.parse(response.content[0].text);
// Anthropic follows instructions reliably — this works well in practice
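Because prompt-based JSON has no schema enforcement, it is worth validating the parsed value before using it. A minimal hand-rolled check for the shape requested in the prompt above (a zod schema would work equally well):

```typescript
interface Product {
  name: string;
  price: number;
  inStock: boolean;
  tags: string[];
}

// Parse a raw model response and narrow it to Product, or return null.
function parseProduct(raw: string): Product | null {
  let value: unknown;
  try {
    value = JSON.parse(raw);
  } catch {
    return null;
  }
  if (typeof value !== 'object' || value === null) return null;
  const obj = value as Record<string, unknown>;
  const tagsOk =
    Array.isArray(obj.tags) && obj.tags.every((t) => typeof t === 'string');
  if (
    typeof obj.name === 'string' &&
    typeof obj.price === 'number' &&
    typeof obj.inStock === 'boolean' &&
    tagsOk
  ) {
    return obj as unknown as Product;
  }
  return null;
}
```

If the check fails, you can re-prompt with the invalid output and an error message rather than crashing.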

Cost Management

// Model selection by use case + cost
const models = {
  // Simple tasks: 10-50x cheaper
  simple: {
    openai: 'gpt-4o-mini',        // $0.15/1M input tokens
    anthropic: 'claude-3-haiku-20240307',  // $0.25/1M input tokens
  },
  // Complex reasoning:
  complex: {
    openai: 'gpt-4o',             // $2.50/1M input tokens
    anthropic: 'claude-3-5-sonnet-20241022',  // $3.00/1M input tokens
  },
  // Long documents (Anthropic shines):
  longContext: {
    anthropic: 'claude-3-5-sonnet-20241022',  // 200K context
    openai: 'gpt-4o',             // 128K context
  },
};

// Cache frequently-used prompts (Anthropic prompt caching)
const response = await anthropic.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  system: [
    {
      type: 'text',
      text: 'You are a coding assistant. Here is the full codebase...',
      cache_control: { type: 'ephemeral' },  // Cache this expensive prefix
    },
  ],
  messages: [{ role: 'user', content: 'Find the bug in auth.ts' }],
});
// First request: full price. Subsequent requests: 90% discount on cached tokens
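To keep an eye on spend, the usage object each response returns can be turned into a dollar estimate. Input prices below match the table above; the output prices are the published rates at time of writing, so verify current pricing before relying on them:

```typescript
// Per-million-token prices in USD. Input prices match the table above;
// output prices were the published rates at time of writing. Verify before use.
const PRICING: Record<string, { input: number; output: number }> = {
  'gpt-4o': { input: 2.5, output: 10.0 },
  'gpt-4o-mini': { input: 0.15, output: 0.6 },
  'claude-3-5-sonnet-20241022': { input: 3.0, output: 15.0 },
  'claude-3-haiku-20240307': { input: 0.25, output: 1.25 },
};

// Accepts either SDK's usage shape (prompt_/completion_ or input_/output_ tokens).
function estimateCostUSD(
  model: string,
  usage: {
    prompt_tokens?: number;
    completion_tokens?: number;
    input_tokens?: number;
    output_tokens?: number;
  },
): number {
  const price = PRICING[model];
  if (!price) throw new Error(`No pricing entry for model: ${model}`);
  const inputTokens = usage.prompt_tokens ?? usage.input_tokens ?? 0;
  const outputTokens = usage.completion_tokens ?? usage.output_tokens ?? 0;
  return (inputTokens / 1e6) * price.input + (outputTokens / 1e6) * price.output;
}
```

Pass `response.usage` from either SDK straight in, e.g. `estimateCostUSD('gpt-4o', response.usage)`.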

When to Choose Which

| Use Case | Pick | Reason |
| --- | --- | --- |
| Multimodal (images, audio, video) | OpenAI | GPT-4o Vision leads |
| Long documents (100K+ tokens) | Anthropic | 200K context, better instruction following |
| Embeddings / vector search | OpenAI | text-embedding-3 is unmatched |
| Strict instruction following | Anthropic | Claude is more precise with complex rules |
| Existing OpenAI integration | OpenAI | No migration needed |
| Agentic workflows | Either | Both have strong tool use; Anthropic slightly more reliable |
| High-volume classification | Haiku / Mini | Both cheap models are excellent |
| Code generation | Anthropic | Claude 3.5 Sonnet leads HumanEval benchmarks |
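If you support both providers, the table above can be encoded as a tiny routing helper. The categories and defaults here are just one way to slice it; adjust to your own workloads:

```typescript
type UseCase =
  | 'multimodal'
  | 'embeddings'
  | 'simple'
  | 'longContext'
  | 'complex'
  | 'code';

// One possible encoding of the table above. Model names match the doc;
// the category-to-model mapping is an illustrative default, not a rule.
function pickModel(useCase: UseCase): {
  provider: 'openai' | 'anthropic';
  model: string;
} {
  switch (useCase) {
    case 'multimodal':
      return { provider: 'openai', model: 'gpt-4o' };
    case 'embeddings':
      return { provider: 'openai', model: 'text-embedding-3-small' };
    case 'simple':
      // High-volume, simple tasks: use the cheap tier.
      return { provider: 'openai', model: 'gpt-4o-mini' };
    case 'longContext':
    case 'complex':
    case 'code':
      return { provider: 'anthropic', model: 'claude-3-5-sonnet-20241022' };
  }
}
```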

Compare AI/LLM package health and download trends on PkgPulse.
