
MCP Libraries for Node.js 2026

PkgPulse Team

Anthropic's Model Context Protocol has become the USB standard for AI tool integration — and within 12 months of its release, virtually every major AI development environment (Claude, Cursor, Cline, Continue.dev, VS Code Copilot) adopted it. The npm ecosystem now has a growing set of packages for building MCP servers and clients in Node.js, with the official TypeScript SDK leading the way.

Why MCP Took Off in 2026

The version of AI tooling that existed before MCP was deeply fragmented. Every AI assistant had its own extension mechanism. Claude had its plugins. Cursor had its context window management. Cline had its tool-calling implementation. If you built an integration for one, you built it specifically for that one — there was no reuse. The explosion of AI coding tools in 2024 and 2025 meant that a team wanting to give its AI assistant access to a Jira board, a Postgres database, and an internal API had to build those three integrations separately for every assistant in use. That's nine integrations for a team running three AI tools, and the count grows with each new assistant they adopt.

MCP solved this by standardizing the protocol layer. The analogy to REST APIs is apt and worth unpacking. Before REST became the dominant convention for web APIs, every service had its own calling conventions, authentication patterns, and data formats. REST didn't invent anything technically novel — HTTP verbs and status codes existed, JSON existed. What it did was establish shared conventions that let clients and servers built by different teams interoperate without custom negotiation. MCP does the same thing for AI-to-tool communication.

The adoption timeline was faster than almost any developer protocol in recent memory. Anthropic published the initial specification in November 2024. By early 2025, Cursor had shipped native support. By mid-2025, VS Code Copilot and Continue.dev had followed. The MCP v2 spec — which replaced the original SSE-based remote transport with Streamable HTTP — stabilized in early 2026, giving production deployments a reliable foundation. The speed of adoption reflects a genuine unmet need: developers building AI-augmented workflows had been waiting for exactly this kind of standardization.

What made MCP click with developers, beyond the technical merits, was the simplicity of the mental model. Three primitives — Tools, Resources, Prompts — cover an enormous surface area of what developers want to give AI models access to. Tools are callable functions. Resources are readable data. Prompts are reusable templates. You can describe your entire internal tooling ecosystem in those three concepts, and that simplicity made the protocol easy to explain, implement, and debug. Compare this to earlier approaches that required understanding multiple abstraction layers before you could write your first integration.

The npm download data reflects the adoption curve. @modelcontextprotocol/sdk went from under 100,000 weekly downloads in mid-2024 to over 2 million by early 2026. That's a growth rate in the same range as tRPC and Drizzle — tools that also grew by becoming the default choice within a clearly-defined problem domain. The MCP ecosystem is following the same pattern: a well-designed protocol, fast adoption by major platforms, and a developer community that quickly builds the tooling around it. For a technical comparison of download trends for AI ecosystem packages, see the 20 fastest-growing npm packages in 2026.

TL;DR

The @modelcontextprotocol/sdk is the official, required foundation for any MCP implementation in Node.js. For faster development with less boilerplate, FastMCP offers a higher-level API built on top of it. MCP is now the de facto standard for giving AI models access to tools, resources, and prompts — and Node.js is the dominant implementation language.

Key Takeaways

  • @modelcontextprotocol/sdk: Official Anthropic SDK, supports stdio and Streamable HTTP transports
  • MCP v2 spec stabilized in early 2026, adding Streamable HTTP as the preferred remote transport
  • FastMCP provides a declarative API that reduces MCP server boilerplate by ~70%
  • MCP servers expose three primitives: Tools (callable functions), Resources (readable data), Prompts (reusable templates)
  • Claude Desktop, Cursor, Cline, Continue.dev, and VS Code Copilot all support MCP natively
  • Pre-built reference servers such as @modelcontextprotocol/server-filesystem and @modelcontextprotocol/server-github are available on npm
  • MCP clients use the same SDK to connect to any MCP server via stdio or HTTP

What Is the Model Context Protocol?

MCP is an open protocol (created by Anthropic, now maintained by the MCP community) that standardizes how AI models connect to external tools and data sources. Instead of every AI tool implementing its own plugin system, MCP provides a universal interface.

The key insight: instead of building custom integrations for each AI model and each tool, you build one MCP server that exposes your tools, and one MCP client per AI model. This is the same value proposition as USB — one standard connector replaces dozens of proprietary ones.

The Three MCP Primitives

  • Tools: Functions the model can call (like search_database, read_file, create_ticket)
  • Resources: Data the model can read (like file://path/to/file, database://table/schema)
  • Prompts: Reusable prompt templates with parameters

@modelcontextprotocol/sdk

Package: @modelcontextprotocol/sdk
GitHub: modelcontextprotocol/typescript-sdk
Current version: 1.x (v2 targeted Q2 2026)
License: MIT

This is the official TypeScript SDK for MCP. Every serious MCP implementation in Node.js is built on it.

Installation

npm install @modelcontextprotocol/sdk

Building an MCP Server

import { McpServer, ResourceTemplate } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import fs from 'node:fs/promises';
import { z } from 'zod';

const server = new McpServer({
  name: 'my-tools-server',
  version: '1.0.0',
});

// Register a Tool
server.tool(
  'search_docs',
  'Search the documentation',
  {
    query: z.string().describe('Search query'),
    limit: z.number().optional().default(5),
  },
  async ({ query, limit }) => {
    const results = await searchDocumentation(query, limit); // your own search function
    return {
      content: [{ type: 'text', text: JSON.stringify(results) }],
    };
  }
);

// Register a Resource (fixed URI)
server.resource(
  'current_time',
  'system://time',
  async (uri) => ({
    contents: [{ uri: uri.href, text: new Date().toISOString() }],
  })
);

// Register a Resource Template (parameterized URI)
server.resource(
  'file',
  new ResourceTemplate('file://{path}', { list: undefined }),
  async (uri, { path }) => ({
    contents: [{ uri: uri.href, text: await fs.readFile(String(path), 'utf-8') }],
  })
);

// Register a Prompt
server.prompt(
  'code_review',
  'Review code for issues',
  { code: z.string(), language: z.string().optional() },
  ({ code, language }) => ({
    messages: [{
      role: 'user',
      content: {
        type: 'text',
        text: `Review this ${language || 'code'} for bugs and improvements:\n\n${code}`,
      },
    }],
  })
);

// Start with stdio transport (for Claude Desktop, Cursor, etc.)
const transport = new StdioServerTransport();
await server.connect(transport);

Streamable HTTP Transport (MCP v2)

For remote MCP servers (accessed over the network rather than as a local process):

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
import { randomUUID } from 'node:crypto';
import express from 'express';

const app = express();
app.use(express.json());

const transports = new Map<string, StreamableHTTPServerTransport>();

app.post('/mcp', async (req, res) => {
  const sessionId = req.headers['mcp-session-id'] as string | undefined;

  let transport = sessionId ? transports.get(sessionId) : undefined;
  if (!transport) {
    // New session: generate a fresh ID and store the transport once the
    // initialize handshake completes
    const newTransport = new StreamableHTTPServerTransport({
      sessionIdGenerator: () => randomUUID(),
      onsessioninitialized: (id) => transports.set(id, newTransport),
    });
    const server = createMcpServer(); // Your server setup
    await server.connect(newTransport);
    transport = newTransport;
  }

  await transport.handleRequest(req, res, req.body);
});

app.listen(3000, () => console.log('MCP server running on :3000'));

Building an MCP Client

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

const transport = new StdioClientTransport({
  command: 'node',
  args: ['path/to/mcp-server.js'],
});

const client = new Client({ name: 'my-client', version: '1.0.0' });

await client.connect(transport);

// List available tools
const { tools } = await client.listTools();
console.log('Available tools:', tools.map(t => t.name));

// Call a tool
const result = await client.callTool({
  name: 'search_docs',
  arguments: { query: 'authentication', limit: 3 },
});
console.log(result.content);

await client.close();

FastMCP

Package: fastmcp
GitHub: punkpeye/fastmcp
GitHub stars: ~3K
License: MIT

FastMCP provides a higher-level API that reduces boilerplate significantly:

import { FastMCP } from 'fastmcp';
import { z } from 'zod';

const server = new FastMCP({ name: 'My Tools Server', version: '1.0.0' });

// Tools are defined declaratively with addTool
server.addTool({
  name: 'get_weather',
  description: 'Get current weather for a location',
  parameters: z.object({
    location: z.string(),
    units: z.enum(['celsius', 'fahrenheit']).default('celsius'),
  }),
  execute: async ({ location, units }) => {
    const data = await fetchWeather(location, units); // your own weather client
    return `Temperature: ${data.temp}°${units === 'celsius' ? 'C' : 'F'}, ${data.condition}`;
  },
});

server.addResource({
  uri: 'config://app',
  name: 'Application Config',
  description: 'Current application configuration',
  load: async () => ({
    text: JSON.stringify(appConfig),
  }),
});

server.start({ transportType: 'stdio' });

FastMCP also includes a testing CLI tool, authentication support for HTTP transport, and error handling with user-friendly messages.

Pre-Built MCP Servers (Reference Implementations)

Anthropic provides official pre-built MCP servers as npm packages:

# File system access
npm install @modelcontextprotocol/server-filesystem

# GitHub integration
npm install @modelcontextprotocol/server-github

# Web search (Brave)
npm install @modelcontextprotocol/server-brave-search

# PostgreSQL database
npm install @modelcontextprotocol/server-postgres

# Memory (key-value store)
npm install @modelcontextprotocol/server-memory

# Slack
npm install @modelcontextprotocol/server-slack

These are production-ready servers you can run directly:

// Claude Desktop claude_desktop_config.json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/yourname/Documents"
      ]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token" }
    }
  }
}

Express and Hono Middleware Packages

npm install @modelcontextprotocol/express  # Express.js helpers
npm install @modelcontextprotocol/hono     # Hono helpers

These packages simplify adding MCP Streamable HTTP support to existing web servers:

import { Hono } from 'hono';
import { mcpMiddleware } from '@modelcontextprotocol/hono';

const app = new Hono();
app.use('/mcp/*', mcpMiddleware({ server: createMcpServer() }));

Integration with AI Frameworks

With Vercel AI SDK

The Vercel AI SDK's built-in MCP client support is still marked experimental, so many teams bridge the two directly:

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { jsonSchema, tool } from 'ai';

// Convert MCP tools to AI SDK tools
async function mcpToolsToAiSdkTools(client: Client) {
  const { tools } = await client.listTools();
  return Object.fromEntries(
    tools.map(mcpTool => [
      mcpTool.name,
      tool({
        description: mcpTool.description || '',
        // Forward the MCP JSON Schema instead of re-declaring it with Zod
        parameters: jsonSchema(mcpTool.inputSchema as any),
        execute: async (args) => {
          const result = await client.callTool({
            name: mcpTool.name,
            arguments: args as Record<string, unknown>,
          });
          return (result.content as Array<{ text?: string }>)[0]?.text || '';
        },
      }),
    ])
  );
}

With LangChain.js

LangChain has first-class MCP support:

import { MultiServerMCPClient } from '@langchain/mcp-adapters';

const mcpClient = new MultiServerMCPClient({
  filesystem: {
    transport: 'stdio',
    command: 'npx',
    args: ['@modelcontextprotocol/server-filesystem', '/tmp'],
  },
  github: {
    transport: 'stdio',
    command: 'npx',
    args: ['@modelcontextprotocol/server-github'],
    env: { GITHUB_PERSONAL_ACCESS_TOKEN: process.env.GITHUB_TOKEN },
  },
});

const tools = await mcpClient.getTools();
// tools is array of LangChain Tool objects, ready to use with agents

Comparison: SDK vs FastMCP

Aspect            @modelcontextprotocol/sdk    FastMCP
Official          Yes (Anthropic)              Community
API style         Verbose, explicit            Concise, opinionated
Boilerplate       High                         Low
Flexibility       Maximum                      Good
TypeScript        Excellent                    Good
Testing           Manual                       Built-in test runner
Auth support      Manual                       Built-in
HTTP transport    Manual setup                 Simplified

Choosing the Right Approach for Your Use Case

The table above captures the trade-offs, but the practical decision depends on what you're building and what your constraints are.

Use @modelcontextprotocol/sdk directly when you need maximum control over the server implementation. Specifically: when you're building a production server that needs custom session management, when you need fine-grained control over how individual requests are handled and errors surfaced, or when you're integrating MCP into an existing service that already has its own HTTP infrastructure. The official SDK is verbose by design — it exposes every protocol primitive explicitly, which means fewer surprises in production and easier debugging when something goes wrong. The Streamable HTTP transport example in this article uses the SDK directly, and that's representative: if your server has lifecycle requirements beyond a simple stdio process, you'll want the control the SDK provides.

Use FastMCP when you're moving fast and the server's primary value is the tools it exposes, not the infrastructure around them. A team building an internal MCP server to give their AI assistant access to a database, a REST API, and a document store doesn't need custom session handling — they need to register tools quickly and have them work reliably. FastMCP's addTool API gets you from zero to a working tool in about 15 lines. Its built-in test runner (via fastmcp dev) is meaningfully useful during development: you get a CLI interface to call your tools and inspect responses without setting up Claude Desktop or another MCP client. The built-in auth support for HTTP transport is also worth considering — if you're deploying a remote MCP server that needs Bearer token authentication, FastMCP handles this without manual middleware.

Use pre-built servers (@modelcontextprotocol/server-*) when the integration you need already exists. The filesystem, GitHub, PostgreSQL, and Slack servers cover a substantial portion of the use cases developers reach for first. Running a pre-built server is a configuration task, not a programming task — you're editing a JSON config file, not writing a TypeScript server. The practical consideration here is maintenance: when the MCP spec updates, the official pre-built servers update to match, whereas custom servers you've written require manual updates.

The rule of thumb: start with a pre-built server if one exists for your use case. If you need customization, evaluate FastMCP first. Reach for the SDK directly when you need something FastMCP doesn't expose. Most production MCP deployments end up using a mix — the official pre-built servers for common integrations, FastMCP for custom internal tools, and occasionally the SDK directly for complex cases.

Testing Your MCP Server

The MCP ecosystem provides testing tools:

# Official MCP Inspector (web UI for testing servers)
npx @modelcontextprotocol/inspector node path/to/server.js

# FastMCP built-in test runner
fastmcp dev path/to/server.ts

The MCP Inspector opens a web UI where you can list tools, call them with arguments, and inspect responses.

Configuration for Claude Desktop

// ~/Library/Application Support/Claude/claude_desktop_config.json (macOS)
{
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["/path/to/your/mcp-server.js"],
      "env": {
        "DATABASE_URL": "postgresql://..."
      }
    }
  }
}

The MCP Ecosystem in 2026

MCP has matured significantly since its November 2024 launch:

  • Specification: v2 stabilized, Streamable HTTP replaces SSE as the standard remote transport
  • Adoption: All major AI coding tools support it natively
  • Registry: Unofficial MCP server registries aggregate hundreds of community-built servers
  • Security: Authentication and authorization primitives added in v1.5+

The most common MCP server use cases in production: database access, file system operations, web search, API integrations, internal tooling, and code execution sandboxes.

Production Considerations

Running an MCP server in development against Claude Desktop is straightforward. Running one in production against multiple clients, at scale, with security requirements, is a different problem.

Authentication for remote MCP servers — those using Streamable HTTP rather than stdio — requires explicit handling. The MCP v2 spec supports Bearer token authentication in the Authorization header, and the SDK exposes hooks to validate tokens before handling requests. In practice, most production deployments use either a static pre-shared token (acceptable for internal servers on a private network) or OAuth 2.0 with short-lived tokens (required for anything user-facing). FastMCP's built-in auth support covers the Bearer token case. For OAuth, you'll need to implement the token exchange flow yourself and pass the validated identity into your tool handlers via context.
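
For the pre-shared token case, the check itself is small. A minimal sketch with a constant-time comparison; validBearer is a hypothetical helper, not an SDK export:

```typescript
import { createHash, timingSafeEqual } from 'node:crypto';

// Validate an Authorization header against a pre-shared token. Hashing both
// sides first gives timingSafeEqual equal-length buffers to compare.
function validBearer(header: string | undefined, expected: string): boolean {
  if (!header || !header.startsWith('Bearer ')) return false;
  const digest = (s: string) => createHash('sha256').update(s).digest();
  return timingSafeEqual(digest(header.slice(7)), digest(expected));
}

// Express usage sketch: reject before the MCP transport sees the request.
// app.post('/mcp', (req, res, next) => {
//   if (!validBearer(req.headers.authorization, process.env.MCP_TOKEN ?? '')) {
//     return res.status(401).json({ error: 'unauthorized' });
//   }
//   next();
// });
```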

Rate limiting MCP servers is often overlooked until an AI agent in a loop calls the same tool 200 times in 60 seconds. MCP tool calls can be fast — fast enough that an agent working through a task list can exhaust API quotas, saturate a database connection pool, or trigger downstream rate limits before a human can intervene. Implement per-session and per-tool rate limits at the transport layer, before your tool handlers execute. A simple token bucket implementation at the session level with per-tool overrides covers most cases.
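
A session-level token bucket with per-tool keys can be sketched in a few lines. TokenBucket and allowCall are hypothetical names, not part of any MCP package:

```typescript
// Minimal token-bucket rate limiter keyed by (session, tool).
class TokenBucket {
  private tokens: number;
  private last = Date.now();

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
  }

  allow(): boolean {
    // Refill proportionally to elapsed time, capped at capacity.
    const now = Date.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.refillPerSec
    );
    this.last = now;
    if (this.tokens < 1) return false;
    this.tokens -= 1;
    return true;
  }
}

const buckets = new Map<string, TokenBucket>();

// e.g. burst of 30 calls, refilling at one call every two seconds
function allowCall(sessionId: string, toolName: string): boolean {
  const key = `${sessionId}:${toolName}`;
  let bucket = buckets.get(key);
  if (!bucket) {
    bucket = new TokenBucket(30, 0.5);
    buckets.set(key, bucket);
  }
  return bucket.allow();
}
```

Call allowCall at the top of each tool handler (or in a transport-level wrapper) and return a structured "rate limited" tool result when it comes back false.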

Monitoring MCP calls requires thinking about what's observable. At minimum, log every tool invocation with: session ID, tool name, argument schema hash (not raw arguments, which may contain secrets), duration, and error status. This gives you enough to reconstruct what an agent did in a session and diagnose failures. For latency tracking, the p99 matters more than average — agents block on tool calls synchronously, so a slow tool at the 99th percentile creates visible user-facing delays.
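
The schema-hash idea can be made concrete: hash the argument shape (keys and value types) rather than the values, so the log line is correlatable but secret-free. argShapeHash and loggedCall are hypothetical helpers:

```typescript
import { createHash } from 'node:crypto';

// Hash the *shape* of the arguments, never the values.
function argShapeHash(args: Record<string, unknown>): string {
  const shape = Object.keys(args)
    .sort()
    .map((k) => `${k}:${typeof args[k]}`)
    .join(',');
  return createHash('sha256').update(shape).digest('hex').slice(0, 12);
}

// Wrap a tool invocation with structured logging: session, tool, shape
// hash, duration, and error status -- enough to reconstruct a session.
async function loggedCall<T>(
  sessionId: string,
  toolName: string,
  args: Record<string, unknown>,
  run: () => Promise<T>
): Promise<T> {
  const start = performance.now();
  let ok = true;
  try {
    return await run();
  } catch (err) {
    ok = false;
    throw err;
  } finally {
    console.log(
      JSON.stringify({
        sessionId,
        toolName,
        args: argShapeHash(args),
        ms: Math.round(performance.now() - start),
        ok,
      })
    );
  }
}
```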

Error handling patterns in MCP tool handlers deserve attention. The MCP spec distinguishes between tool execution errors (the tool ran but returned an error result) and protocol errors (something went wrong before the tool ran). Tool execution errors should return structured error content in the tool result — the model can read and reason about these. Protocol errors propagate as exceptions and terminate the tool call. A common mistake is throwing unhandled exceptions from tool handlers that contain raw error messages with internal details. Wrap your tool handlers with error boundaries that sanitize error messages before returning them to the client.
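
A sketch of such a boundary, assuming the standard MCP tool-result shape with isError; withErrorBoundary is a hypothetical helper, not an SDK export:

```typescript
type ToolResult = {
  content: { type: 'text'; text: string }[];
  isError?: boolean;
};

// Wrap a tool handler so failures come back as structured, sanitized tool
// results (which the model can read and reason about) instead of raw
// exceptions carrying internal details.
function withErrorBoundary<A>(
  handler: (args: A) => Promise<ToolResult>
): (args: A) => Promise<ToolResult> {
  return async (args) => {
    try {
      return await handler(args);
    } catch (err) {
      // Keep the full detail server-side; return only a generic message.
      console.error('tool failed:', err);
      return {
        content: [
          {
            type: 'text',
            text: 'The tool failed. Adjust the input or retry later.',
          },
        ],
        isError: true,
      };
    }
  };
}
```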

For performance, the most impactful optimization is usually tool handler latency, not transport overhead. The MCP transport layer adds under 5ms of overhead for local stdio connections. If your tool calls feel slow, the bottleneck is the handler — usually a database query, API call, or file I/O operation. Instrument your handlers to identify the slow operations before reaching for architectural changes.

MCP Server Registry and Discovery

One of the practical challenges with MCP's success is discovery — when the community has built hundreds of servers, how do developers find the one that integrates with the service they need?

Several unofficial registries have emerged to address this. The most prominent ones aggregate metadata about published MCP servers: what tools they expose, what credentials they require, whether they support stdio or HTTP transport, and community ratings. These registries aren't part of the official MCP specification, but they've become a de facto layer in the ecosystem. When evaluating a community-built MCP server from a registry, treat it with the same scrutiny as any third-party npm package — check the GitHub repository for activity, review the code for the tools you'll be using, and verify that the package doesn't request more system access than the integration requires.

The npm registry itself is where most MCP servers are distributed, using the convention of an mcp-server- prefix in the package name (e.g., mcp-server-github, mcp-server-linear, mcp-server-notion). You can search for available servers with npm search mcp-server. This isn't a curated list, but it reflects actual published implementations across dozens of services. The official @modelcontextprotocol/ namespace on npm contains only the Anthropic-maintained servers; everything outside that namespace is community-built.

For internal tooling, the registry model doesn't apply — you build and distribute servers through your own artifact management. The pattern here is treating your MCP server as an internal service: versioned releases, a changelog, and a configuration template your team can paste into their Claude Desktop or Cursor config. The analysis in The Hidden Cost of npm Dependencies applies here too: third-party MCP servers are dependencies with the same security profile as any other npm package.

Explore on PkgPulse

See download trends and version history for MCP packages on PkgPulse.

See also: 20 Fastest-Growing npm Packages in 2026, The Hidden Cost of npm Dependencies, and Express vs Fastify for choosing a framework to host MCP's Streamable HTTP transport.
