Model Context Protocol (MCP) Libraries for Node.js 2026
TL;DR
Model Context Protocol (MCP) is Anthropic's open standard for connecting AI models to data sources — and it's growing fast. In 2026, MCP lets you expose tools, resources, and prompts to any MCP-compatible AI client (Claude Desktop, Cursor, your own apps). The official @modelcontextprotocol/sdk is the foundation; FastMCP (fastmcp) adds a developer-friendly layer on top. Building an MCP server is genuinely useful when you want your tools usable across multiple AI contexts — not just in your app, but in Claude Desktop, Cursor, and future clients.
Key Takeaways
- @modelcontextprotocol/sdk: official Anthropic SDK, stdio + SSE transports, ~200K weekly downloads and growing fast
- fastmcp: developer-friendly wrapper — less boilerplate, Zod schemas, built-in authentication
- MCP vs tool calling: tool calling is per-request; MCP servers are persistent, reusable across clients
- Three primitives: Tools (actions), Resources (readable data), Prompts (reusable templates)
- Transport: stdio (local processes), SSE/WebSocket (remote servers), HTTP (stateless)
- Use cases: database query tools, file system access, API wrappers, knowledge base search
MCP Download Trends
| Package | Weekly Downloads | Trend |
|---|---|---|
| @modelcontextprotocol/sdk | ~210K | ↑ Growing fast |
| fastmcp | ~45K | ↑ Growing |
| @anthropic-ai/sdk | ~800K | ↑ Growing |
MCP is early-stage but growing rapidly as Claude Desktop and Cursor adoption increases.
What is MCP?
MCP (Model Context Protocol) is a standard that lets AI models connect to external data sources and tools. Instead of hardcoding API calls inside your app, you build an MCP server that exposes:
- Tools: callable functions (like API calls, database queries)
- Resources: readable data (files, documents, database records)
- Prompts: reusable prompt templates with parameters
Any MCP-compatible client can then use your server. Build once, use everywhere.
The analogy that makes MCP click: think of it as the USB standard for AI tools. Before USB, every peripheral had a proprietary connector — keyboards used one port, mice used another, printers used a third. USB replaced all of them with one universal interface. MCP does the same for AI tools. Before MCP, every AI assistant had its own plugin system. A Jira integration written for Claude's plugins didn't work in Cursor. A database connector for Cline didn't work in VS Code Copilot. MCP replaces that fragmentation with a single protocol: write one MCP server, and it works across Claude Desktop, Cursor, Cline, Continue.dev, and any other MCP-compatible client.
The three primitives — Tools, Resources, Prompts — cover an enormous surface area of what developers want to give AI models. Tools are callable functions the model can invoke, receiving structured arguments and returning text. Resources are readable data sources the model can pull from, identified by URIs. Prompts are reusable templates that can be parameterized and invoked by name. These three abstractions, simple as they are, describe virtually every form of AI-to-external-system integration.
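Under the hood, every MCP exchange is a JSON-RPC 2.0 message, and each primitive maps to a small set of methods: tools/list and tools/call, resources/list and resources/read, prompts/list and prompts/get. Here is a sketch of two client requests — the method names come from the MCP specification, while the tool name, arguments, and URI are illustrative:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/call",
  "params": { "name": "get_package_stats", "arguments": { "packageName": "react" } } }

{ "jsonrpc": "2.0", "id": 2, "method": "resources/read",
  "params": { "uri": "pkgpulse://packages/trending" } }
```

Notice the structural difference already visible at the wire level: a tool call carries arbitrary arguments, while a resource read carries only a URI.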
MCP Transport Options
Understanding MCP transport is essential for deploying servers correctly. The transport layer determines how MCP clients communicate with your server — and the right choice depends on where your server runs and who accesses it.
stdio transport is the standard for local MCP servers. When Claude Desktop or Cursor launches a local MCP server, it starts the server as a child process and communicates over stdin/stdout. This is the simplest deployment model: no networking, no authentication required, and the server has access to the local filesystem and environment. The server process runs while the client is active and terminates when the client exits. The Claude Desktop config example later in this article uses stdio transport.
Streamable HTTP transport (MCP v2 standard as of early 2026) is for remote MCP servers accessible over the network. Instead of stdin/stdout, the server receives requests at an HTTP endpoint and responds with server-sent events or direct HTTP responses. This is what you use when your server needs to be shared by multiple users, when it runs in a cloud environment, or when it accesses resources that aren't available locally. The trade-off is added complexity: you need to handle sessions, authentication, and potentially rate limiting.
The practical rule: if your server is a developer tool for your own machine (file system access, local database, personal API wrapper), use stdio. If it's a shared service that multiple team members or users will connect to, use Streamable HTTP.
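To make the remote model concrete, here is a toy sketch of the request/response shape a Streamable HTTP server deals with, using only node:http. This is not a real MCP server — the official SDK's StreamableHTTPServerTransport handles sessions, SSE streaming, and the actual protocol — and the /mcp path and response body are illustrative:

```typescript
import { createServer } from 'node:http';
import { once } from 'node:events';

// Toy endpoint: accept a JSON-RPC 2.0 POST and answer it directly.
const server = createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/mcp') {
    let body = '';
    req.on('data', (chunk) => (body += chunk));
    req.on('end', () => {
      const rpc = JSON.parse(body); // the JSON-RPC request from the client
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ jsonrpc: '2.0', id: rpc.id, result: { echoedMethod: rpc.method } }));
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
});

server.listen(0); // random free port for the demo
await once(server, 'listening');
const port = (server.address() as { port: number }).port;

// A client POSTs a JSON-RPC request to the endpoint:
const reply = await fetch(`http://127.0.0.1:${port}/mcp`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'tools/list' }),
}).then((r) => r.json());

console.log(reply); // { jsonrpc: '2.0', id: 1, result: { echoedMethod: 'tools/list' } }
server.close();
```

The point of the sketch is what stdio hides from you: once your server is an HTTP endpoint, you own the port, the routing, and (in a real deployment) the authentication around it.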
Resources vs Tools: When to Use Each Primitive
The decision between registering something as a Tool vs a Resource in your MCP server affects how AI models interact with it and what clients can do with it.
Tools are the right choice when the operation involves computation, mutation, or non-deterministic results. Fetching live npm download statistics is a Tool because the result changes over time. Creating a database record is a Tool. Running a code analysis is a Tool. Tools are called with specific arguments and return a result for that call.
Resources are the right choice for data that can be identified by a stable URI and read multiple times with the same result — or at least data that's conceptually "document-like." A list of trending packages that refreshes weekly is a Resource. A database schema is a Resource. Your project's README file is a Resource. MCP clients can list available resources, let the user select them, and add their content to the model's context without the model needing to explicitly call a function.
The practical difference: the model decides when and how to call Tools as part of its reasoning process. Resources are typically surfaced to users as attachments or context that can be explicitly added to conversations. For a data connector like a package database, trending packages make sense as a Resource (browsable list), while package health comparisons make sense as a Tool (requires arguments, returns computed analysis).
Prompts are reusable templates — most useful when your domain has standardized patterns of interaction that you want to make easy to invoke by name. A code review prompt with language-specific rules, a package comparison report template, or a changelog generation prompt are all good candidates.
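For a sense of what a prompt looks like on the wire: a prompts/get response returns the rendered messages, ready to drop into a conversation. The shape below follows the MCP specification's GetPrompt result; the description and text are illustrative for a package-comparison template:

```json
{
  "description": "Compare two npm packages and summarize the trade-offs",
  "messages": [
    {
      "role": "user",
      "content": {
        "type": "text",
        "text": "Compare react and vue on downloads, bundle size, and maintenance activity."
      }
    }
  ]
}
```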
Official SDK: @modelcontextprotocol/sdk
The official TypeScript SDK is the foundation for any serious MCP implementation. It handles the protocol wire format, request/response routing, and type safety for all MCP message types. While FastMCP provides a higher-level API, understanding the official SDK is valuable for custom requirements and for understanding what's happening under the hood in any MCP server.
Building a Basic MCP Server
```typescript
// npm install @modelcontextprotocol/sdk
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  {
    name: 'pkgpulse-mcp',
    version: '1.0.0',
  },
  {
    capabilities: {
      tools: {},
      resources: {},
    },
  }
);

// Register available tools:
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'get_package_stats',
      description: 'Get npm package download stats, health score, and metadata',
      inputSchema: {
        type: 'object',
        properties: {
          packageName: { type: 'string', description: 'npm package name (e.g., "react")' },
        },
        required: ['packageName'],
      },
    },
    {
      name: 'compare_packages',
      description: 'Compare two npm packages by downloads, bundle size, and health score',
      inputSchema: {
        type: 'object',
        properties: {
          packageA: { type: 'string' },
          packageB: { type: 'string' },
        },
        required: ['packageA', 'packageB'],
      },
    },
  ],
}));

// Handle tool calls:
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  switch (name) {
    case 'get_package_stats': {
      const { packageName } = args as { packageName: string };
      // Fetch from npm API:
      const [downloads, metadata] = await Promise.all([
        fetch(`https://api.npmjs.org/downloads/point/last-week/${packageName}`).then(r => r.json()),
        fetch(`https://registry.npmjs.org/${packageName}`).then(r => r.json()),
      ]);
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify({
              name: packageName,
              weeklyDownloads: downloads.downloads,
              latestVersion: metadata['dist-tags']?.latest,
              description: metadata.description,
              license: metadata.license,
              lastPublished: metadata.time?.[metadata['dist-tags']?.latest],
            }, null, 2),
          },
        ],
      };
    }
    case 'compare_packages': {
      const { packageA, packageB } = args as { packageA: string; packageB: string };
      // fetchPackageStats is a small helper (not shown) wrapping the same npm API calls:
      const [statsA, statsB] = await Promise.all([
        fetchPackageStats(packageA),
        fetchPackageStats(packageB),
      ]);
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify({ packageA: statsA, packageB: statsB }, null, 2),
          },
        ],
      };
    }
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
});

// Start the server. Log to stderr — stdout is reserved for the protocol:
const transport = new StdioServerTransport();
await server.connect(transport);
console.error('MCP server running on stdio');
```
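The compare_packages handler above assumes a fetchPackageStats helper. A minimal sketch of one — hypothetical, not part of the SDK — with the URL-building split out so scoped names like "@prisma/client" are percent-encoded safely in the URL path:

```typescript
// Hypothetical helper assumed by the compare_packages handler above.
export function npmStatsUrls(packageName: string) {
  // Scoped packages ("@scope/name") need "@" and "/" percent-encoded in a URL path.
  const encoded = encodeURIComponent(packageName);
  return {
    downloads: `https://api.npmjs.org/downloads/point/last-week/${encoded}`,
    registry: `https://registry.npmjs.org/${encoded}`,
  };
}

export async function fetchPackageStats(packageName: string) {
  const urls = npmStatsUrls(packageName);
  const [downloads, metadata] = await Promise.all([
    fetch(urls.downloads).then((r) => r.json()),
    fetch(urls.registry).then((r) => r.json()),
  ]);
  return {
    name: packageName,
    weeklyDownloads: downloads.downloads,
    latestVersion: metadata['dist-tags']?.latest,
    license: metadata.license,
  };
}
```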
Registering Resources
```typescript
import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

// Expose database records as readable resources.
// "db" is an existing database client (e.g. a Prisma client) from elsewhere in your app.
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [
    {
      uri: 'pkgpulse://packages/trending',
      name: 'Trending npm Packages',
      description: 'Top 50 trending npm packages this week',
      mimeType: 'application/json',
    },
    {
      uri: 'pkgpulse://categories',
      name: 'Package Categories',
      description: 'All package categories with counts',
      mimeType: 'application/json',
    },
  ],
}));

server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  const { uri } = request.params;
  if (uri === 'pkgpulse://packages/trending') {
    const trending = await db.package.findMany({
      orderBy: { weeklyDownloadChange: 'desc' },
      take: 50,
    });
    return {
      contents: [
        {
          uri,
          mimeType: 'application/json',
          text: JSON.stringify(trending, null, 2),
        },
      ],
    };
  }
  throw new Error(`Unknown resource: ${uri}`);
});
```
FastMCP: Developer-Friendly Alternative
FastMCP wraps the official SDK with better ergonomics — Zod schemas, cleaner tool definition, built-in auth. The key ergonomic improvement is using Zod schemas for parameter definition instead of hand-written JSON Schema objects. In the official SDK, you write JSON Schema with type: 'object', properties, and required arrays. In FastMCP, you write z.object({ ... }) and it generates the JSON Schema automatically. For anyone already using Zod for validation (which is most TypeScript projects), this is significantly less verbose.
```typescript
// npm install fastmcp zod
import { FastMCP } from 'fastmcp';
import { z } from 'zod';

const mcp = new FastMCP({
  name: 'pkgpulse-mcp',
  version: '1.0.0',
});

// Define tools with Zod schemas (no manual JSON Schema).
// execute returns a string; fetchPackageStats/searchPackages are app helpers (not shown):
mcp.addTool({
  name: 'get_package_stats',
  description: 'Get npm package download stats and health score',
  parameters: z.object({
    packageName: z.string().describe('npm package name, e.g. "react"'),
    includeBundleSize: z.boolean().default(false),
  }),
  execute: async ({ packageName, includeBundleSize }) => {
    const stats = await fetchPackageStats(packageName);
    if (includeBundleSize) {
      const bundle = await fetch(`https://bundlephobia.com/api/size?package=${packageName}`)
        .then(r => r.json());
      return JSON.stringify({ ...stats, gzipSize: bundle.gzip, parseTime: bundle.assets?.[0]?.js?.parse });
    }
    return JSON.stringify(stats);
  },
});

mcp.addTool({
  name: 'search_packages',
  description: 'Search npm packages by keyword',
  parameters: z.object({
    query: z.string(),
    category: z.enum(['framework', 'testing', 'orm', 'ui', 'all']).default('all'),
    limit: z.number().min(1).max(20).default(10),
  }),
  execute: async ({ query, category, limit }) => {
    return JSON.stringify(await searchPackages(query, { category, limit }));
  },
});

// Add resources:
mcp.addResource({
  uri: 'pkgpulse://trending',
  name: 'Trending Packages',
  description: "This week's top trending npm packages",
  mimeType: 'application/json',
  load: async () => {
    const trending = await getTrendingPackages();
    return { text: JSON.stringify(trending, null, 2) };
  },
});

// Start (stdio for local tools, SSE for remote):
mcp.start({ transportType: 'stdio' });
// or: mcp.start({ transportType: 'sse', sse: { endpoint: '/sse', port: 3001 } });
```
Connecting MCP to Claude Desktop
Claude Desktop reads its MCP server configuration from a JSON file in your Application Support directory. Each entry in mcpServers is a server identity, a command to start the server process, optional arguments, and optional environment variables. Claude Desktop starts these processes on launch and restarts them automatically if they exit unexpectedly. On Windows, the config file lives at %APPDATA%\Claude\claude_desktop_config.json.
The file lives at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS:

```json
{
  "mcpServers": {
    "pkgpulse": {
      "command": "node",
      "args": ["/path/to/pkgpulse-mcp/dist/index.js"],
      "env": {
        "DATABASE_URL": "postgresql://..."
      }
    }
  }
}
```
Now Claude Desktop can call your tools directly: "Compare React vs Vue download trends" → Claude calls compare_packages → returns live data.
MCP vs Tool Calling: When to Use Each
MCP and tool calling (Vercel AI SDK, OpenAI function calling, Anthropic's tool use API) both let AI models invoke functions. The difference is architectural: tool calling is per-request and application-scoped, while MCP is a persistent server that any compatible client can connect to.
When you implement tool calling with the Vercel AI SDK, you define tools as part of a specific API route or agent implementation. Those tools exist only in that context — your Next.js app, your API server, your chatbot. Another application or a different AI client cannot use them without duplicating the implementation.
When you build an MCP server, you build it once and any MCP-compatible client can use it. A developer on your team can connect Claude Desktop to your MCP server and use the same tools in their AI-assisted workflows. A different AI tool can use it without any changes on your end. This is the "build once, use everywhere" value proposition.
The practical decision: if your tools are app-specific and you only need them in one context, tool calling is simpler. If you're building something that should be usable from multiple AI contexts — especially developer tools, data connectors, or internal tooling — MCP is the better choice.
Use Tool Calling (Vercel AI SDK / OpenAI function calling) when:
→ Building app-specific features
→ Tools are only needed in your app
→ Simple, stateless tool execution
→ Don't need cross-client reuse
Use MCP when:
→ Want tools available in Claude Desktop, Cursor, etc.
→ Building a data connector (database, API) for AI
→ Team uses multiple AI clients and wants shared tools
→ Building an AI-native product that exposes data
→ Creating developer tools with AI integration
Testing MCP Servers
The MCP ecosystem provides several tools for testing servers during development, and they're worth setting up before you build much functionality.
The MCP Inspector is the official testing tool — a web UI that connects to any MCP server via stdio or HTTP, lists its tools and resources, lets you invoke tools with custom arguments, and shows the raw JSON-RPC messages. Install and run it against your server with:
```
npx @modelcontextprotocol/inspector node dist/server.js
```
The Inspector opens at localhost:5173 (or another port if that's taken) and shows all registered tools, resources, and prompts. You can call tools interactively with custom arguments and see the exact response your server returns. This is the fastest way to verify your MCP server is working correctly before connecting it to Claude Desktop.
For automated testing, the pattern is to start your server in stdio mode, create a client in-process, and assert on tool responses:
```typescript
// server.test.ts (Vitest/Jest-style globals assumed)
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { InMemoryTransport } from '@modelcontextprotocol/sdk/inMemory.js';
import { createServer } from './server.js'; // your own factory that builds the configured Server

test('get_package_stats returns download count', async () => {
  const [clientTransport, serverTransport] = InMemoryTransport.createLinkedPair();
  const server = createServer();
  await server.connect(serverTransport);

  const client = new Client({ name: 'test', version: '1.0' }, { capabilities: {} });
  await client.connect(clientTransport);

  const result = await client.callTool({ name: 'get_package_stats', arguments: { packageName: 'react' } });
  const data = JSON.parse(result.content[0].text);
  expect(data.weeklyDownloads).toBeGreaterThan(0);

  await client.close();
});
```
The InMemoryTransport creates a linked pair of transports that communicate in-process — no stdio, no HTTP, no ports. This makes MCP server tests fast and reliable.
SDK vs FastMCP: Making the Choice
For most new MCP servers in 2026, FastMCP is the right starting point. Its Zod-based tool definition, cleaner resource API, and built-in development runner (fastmcp dev) reduce the time from zero to a working server significantly. The lower boilerplate also means less code to maintain as your server evolves.
Use the official SDK directly when: your server needs custom session management or advanced lifecycle control; you're integrating MCP into an existing HTTP server with complex routing requirements; or you need protocol-level control over how individual message types are handled. The SDK is also the right choice for building MCP clients, not just servers — the official client API is well-documented and gives you full control over how you connect to and interact with remote MCP servers.
Explore MCP libraries and AI SDK download trends on PkgPulse.
See also: tsx vs ts-node vs Bun: Running TypeScript Directly 2026 and AI Development Stack for JavaScript 2026.