By the PkgPulse Team

Mastra vs LangChain.js vs Google GenKit: JavaScript AI Agent Frameworks 2026

TL;DR

LangChain.js is the most mature option — a comprehensive toolkit with thousands of integrations, but notorious for abstraction complexity and frequent breaking changes. Google GenKit is Firebase's opinionated AI framework — clean TypeScript APIs, strong Google Cloud integration, and production-ready flows. Mastra is the newcomer disrupting both — a TypeScript-first framework designed to eliminate the "LangChain tax" of boilerplate, with native workflow orchestration, memory, and tool calling baked in. For new projects in 2026, start with GenKit or Mastra before reaching for LangChain.

Key Takeaways

  • LangChain.js npm downloads: ~1.2M/week — the dominant framework despite frequent complaints about complexity
  • Mastra GitHub stars: ~18k (Feb 2026) — explosive growth since its 1.0 launch in late 2025
  • GenKit now ships as @genkit-ai/core — production ready with Vertex AI, Cloud Run, and Cloud Functions integrations
  • All three support streaming via async generators, but Mastra and GenKit have cleaner TypeScript types
  • Tool calling / function calling is a first-class feature in all three, but the ergonomics differ significantly
  • RAG pipelines: LangChain.js offers the most pre-built vector store integrations (50+); Mastra and GenKit require more DIY assembly
  • Mastra's workflow engine is a unique differentiator — persistent, resumable workflows with built-in retry and observability

Why AI Agent Frameworks for JavaScript?

Python has always dominated ML/AI, but JavaScript is catching up fast. The reasons:

  1. Frontend AI — browser-side inference and real-time streaming UIs need JS
  2. Full-stack TypeScript — teams don't want to context-switch to Python for their API layer
  3. Edge deployment — Cloudflare Workers, Vercel Edge, Deno Deploy need JS runtimes

The result: a rapidly maturing ecosystem of JS/TS AI frameworks, with LangChain.js, GenKit, and Mastra as the current leading options.


LangChain.js: The Comprehensive Framework

LangChain.js is the JavaScript port of the original Python LangChain library. It's enormously comprehensive — covering everything from simple LLM calls to complex multi-agent orchestration — but this breadth comes at the cost of abstraction complexity.

Basic LLM Chain

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({
  modelName: "gpt-4o",
  temperature: 0.7,
  streamUsage: true,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant that explains code clearly."],
  ["human", "{question}"],
]);

const parser = new StringOutputParser();

// LCEL (LangChain Expression Language) chain
const chain = prompt.pipe(model).pipe(parser);

// Invoke
const result = await chain.invoke({
  question: "Explain async/await in JavaScript",
});

// Stream
const stream = await chain.stream({ question: "What is a closure?" });
for await (const chunk of stream) {
  process.stdout.write(chunk);
}
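Conceptually, LCEL's .pipe() is left-to-right function composition: each runnable's output becomes the next runnable's input. A toy sketch of the idea (an illustration only, not LangChain's actual implementation):

```typescript
// A minimal "runnable" mimicking LCEL's pipe(): each stage's output
// feeds the next stage. Illustration only — not LangChain code.
type Runnable<In, Out> = {
  invoke: (input: In) => Out;
  pipe: <Next>(next: Runnable<Out, Next>) => Runnable<In, Next>;
};

function runnable<In, Out>(fn: (input: In) => Out): Runnable<In, Out> {
  return {
    invoke: fn,
    pipe<Next>(next: Runnable<Out, Next>): Runnable<In, Next> {
      // Compose: run this stage, then hand the result to the next stage.
      return runnable((input: In) => next.invoke(fn(input)));
    },
  };
}

// "Prompt" stage formats a template; "parser" stage post-processes the output.
const prompt = runnable((vars: { question: string }) => `Q: ${vars.question}`);
const upper = runnable((s: string) => s.toUpperCase());
const chain = prompt.pipe(upper);

console.log(chain.invoke({ question: "what is a closure?" }));
// "Q: WHAT IS A CLOSURE?"
```

The real library adds streaming, batching, and tracing on top of this composition, which is where much of the type-inference friction discussed later comes from.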

Tool Calling with LangChain.js

import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Define tools
const weatherTool = tool(
  async ({ city }: { city: string }) => {
    // Fetch real weather data
    const response = await fetch(`https://api.weather.example.com/${city}`);
    const data = await response.json();
    return `Temperature in ${city}: ${data.temp}°C, ${data.condition}`;
  },
  {
    name: "get_weather",
    description: "Get current weather for a city",
    schema: z.object({
      city: z.string().describe("The city name"),
    }),
  }
);

const searchTool = tool(
  async ({ query }: { query: string }) => {
    // Search implementation
    return `Search results for: ${query}`;
  },
  {
    name: "search_web",
    description: "Search the web for information",
    schema: z.object({
      query: z.string().describe("Search query"),
    }),
  }
);

const llm = new ChatOpenAI({ modelName: "gpt-4o" });
const tools = [weatherTool, searchTool];

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant. Use tools when needed."],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

// createToolCallingAgent binds the tools to the model internally,
// so the raw llm is passed here (no separate bindTools call needed)
const agent = createToolCallingAgent({ llm, tools, prompt });
const executor = new AgentExecutor({ agent, tools, verbose: true });

const result = await executor.invoke({
  input: "What's the weather in Tokyo and find me a good sushi restaurant there?",
  chat_history: [],
});

console.log(result.output);

RAG Pipeline

import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { createRetrievalChain } from "langchain/chains/retrieval";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { Document } from "@langchain/core/documents";

// Create vector store
const embeddings = new OpenAIEmbeddings({ modelName: "text-embedding-3-small" });
const vectorStore = new MemoryVectorStore(embeddings);

// Split and embed documents
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});

const docs = await splitter.splitDocuments([
  new Document({ pageContent: "Your knowledge base content here..." }),
]);

await vectorStore.addDocuments(docs);

// Create RAG chain
const retriever = vectorStore.asRetriever({ k: 4 });

const questionAnsweringChain = await createStuffDocumentsChain({
  llm: new ChatOpenAI({ modelName: "gpt-4o-mini" }),
  prompt: ChatPromptTemplate.fromMessages([
    ["system", "Answer based on context:\n\n{context}"],
    ["human", "{input}"],
  ]),
});

const ragChain = await createRetrievalChain({
  retriever,
  combineDocsChain: questionAnsweringChain,
});

const answer = await ragChain.invoke({ input: "What is the main topic?" });
console.log(answer.answer);
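Under the hood, asRetriever({ k: 4 }) ranks chunks by embedding similarity. Stripped of the framework, top-k retrieval is just cosine similarity plus a sort; a minimal sketch with toy 3-dimensional vectors standing in for real embeddings (which have ~1,500 dimensions):

```typescript
// Toy top-k retrieval: rank documents by cosine similarity to a query
// vector. Real embeddings are high-dimensional; 3 dims are used for clarity.
type Doc = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

function topK(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((a, b) => cosine(query, b.embedding) - cosine(query, a.embedding))
    .slice(0, k);
}

const docs: Doc[] = [
  { text: "TypeScript basics", embedding: [1, 0, 0] },
  { text: "Cooking pasta", embedding: [0, 1, 0] },
  { text: "Advanced TypeScript", embedding: [0.9, 0.1, 0] },
];

// Query vector close to the TypeScript documents.
const results = topK([1, 0.05, 0], docs, 2);
console.log(results.map((d) => d.text));
// ["TypeScript basics", "Advanced TypeScript"]
```

What the frameworks actually add is the embedding calls, chunking, and storage backends — the ranking itself is this simple.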

Google GenKit: Firebase's AI Framework

GenKit is Google's TypeScript-first AI framework, developed as part of the Firebase ecosystem. It emphasizes flows (composable async functions with tracing), plugins (for different AI providers), and seamless Google Cloud integration.

Setup and Basic Flow

import { genkit } from "genkit";
import { googleAI } from "@genkit-ai/googleai";
import { openAI } from "genkitx-openai";

// Initialize with plugins
const ai = genkit({
  plugins: [
    googleAI({ apiKey: process.env.GOOGLE_AI_API_KEY }),
    openAI({ apiKey: process.env.OPENAI_API_KEY }),
  ],
  model: "googleai/gemini-1.5-flash",
});

// Simple generation
const { text } = await ai.generate("Explain monads in JavaScript");
console.log(text);

// With structured output (Zod schema)
import { z } from "zod";

const AnalysisSchema = z.object({
  sentiment: z.enum(["positive", "negative", "neutral"]),
  confidence: z.number().min(0).max(1),
  keywords: z.array(z.string()).max(5),
});

const reviewText = "Excellent service, fast delivery!"; // example input

const { output } = await ai.generate({
  model: "googleai/gemini-1.5-pro",
  prompt: `Analyze this review: "${reviewText}"`,
  output: { schema: AnalysisSchema },
});

console.log(output); // Typed as AnalysisSchema
// { sentiment: 'positive', confidence: 0.92, keywords: ['excellent', 'fast'] }

GenKit Flows — The Core Abstraction

import { genkit, z } from "genkit";
import { googleAI } from "@genkit-ai/googleai";

const ai = genkit({ plugins: [googleAI()] });

// Define a reusable, traceable flow
const summarizeFlow = ai.defineFlow(
  {
    name: "summarizeArticle",
    inputSchema: z.object({
      url: z.string().url(),
      maxWords: z.number().default(200),
    }),
    outputSchema: z.object({
      summary: z.string(),
      keyPoints: z.array(z.string()),
      readingTime: z.number(),
    }),
  },
  async ({ url, maxWords }) => {
    // Fetch article
    const response = await fetch(url);
    const html = await response.text();
    const text = extractText(html); // Your text extraction logic

    // Generate structured summary
    const { output } = await ai.generate({
      prompt: `Summarize this article in ${maxWords} words max:

      ${text}

      Return JSON with: summary, keyPoints (array), readingTime (minutes)`,
      output: {
        schema: z.object({
          summary: z.string(),
          keyPoints: z.array(z.string()),
          readingTime: z.number(),
        }),
      },
    });

    return output!;
  }
);

// Use the flow
const result = await summarizeFlow({
  url: "https://example.com/long-article",
  maxWords: 150,
});

GenKit Tool Calling

const weatherTool = ai.defineTool(
  {
    name: "getWeather",
    description: "Get current weather for a location",
    inputSchema: z.object({
      location: z.string().describe("City name or coordinates"),
      unit: z.enum(["celsius", "fahrenheit"]).default("celsius"),
    }),
    outputSchema: z.object({
      temperature: z.number(),
      condition: z.string(),
      humidity: z.number(),
    }),
  },
  async ({ location, unit }) => {
    const data = await fetchWeatherAPI(location, unit);
    return { temperature: data.temp, condition: data.desc, humidity: data.humidity };
  }
);

// Use tool in generation
const response = await ai.generate({
  model: "googleai/gemini-1.5-pro",
  prompt: "What should I wear today in London?",
  tools: [weatherTool],
});

// GenKit handles tool calls automatically and streams results

GenKit with Streaming

const streamingFlow = ai.defineFlow(
  {
    name: "writeStory",
    inputSchema: z.object({ prompt: z.string() }),
    outputSchema: z.string(),
  },
  async ({ prompt }) => {
    const { stream, response } = ai.generateStream({
      prompt: `Write a short story about: ${prompt}`,
      model: "googleai/gemini-1.5-flash",
    });

    // Stream to client
    for await (const chunk of stream) {
      process.stdout.write(chunk.text);
    }

    return (await response).text;
  }
);

// In Express/Next.js: pipe stream to response
app.post("/story", async (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  const { stream } = ai.generateStream({ prompt: req.body.prompt });
  for await (const chunk of stream) {
    res.write(`data: ${chunk.text}\n\n`);
  }
  res.end();
});
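On the receiving end, the data: ...\n\n frames written by the Express handler above have to be split back into text chunks. A minimal parser for that framing (a hand-rolled helper, not part of GenKit; it assumes single-line data payloads):

```typescript
// Split a raw SSE buffer into the text payloads of its `data:` frames.
// Frames are separated by a blank line; each payload starts with "data: ".
function parseSSE(buffer: string): string[] {
  return buffer
    .split("\n\n")
    .filter((frame) => frame.startsWith("data: "))
    .map((frame) => frame.slice("data: ".length));
}

const raw = "data: Once upon\n\ndata:  a time\n\n";
console.log(parseSSE(raw).join(""));
// "Once upon a time"
```

In a browser, EventSource or a fetch + ReadableStream loop does this parsing for you; the sketch just shows what the wire format encodes.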

Mastra: TypeScript-First Agent Orchestration

Mastra is the newest of the three — it launched 1.0 in Q4 2025 and immediately attracted attention for its clean TypeScript-first API, workflow orchestration with durable execution, and native agent memory. It's designed specifically to address LangChain's complexity and abstraction issues.

Setup and Basic Agent

import { Mastra } from "@mastra/core";
import { openai } from "@mastra/openai";
import { z } from "zod";

// Initialize
const mastra = new Mastra({
  providers: [
    openai({ apiKey: process.env.OPENAI_API_KEY }),
  ],
});

// Define and create an agent
const codeReviewAgent = mastra.createAgent({
  name: "code-reviewer",
  model: "gpt-4o",
  systemPrompt: `You are an expert code reviewer. You provide concise, actionable feedback
  focusing on: correctness, performance, security, and readability.`,
  tools: [], // Add tools below
});

// Generate
const result = await codeReviewAgent.generate(
  "Review this function:\n\n```javascript\nfunction sum(a, b) { return a + b; }\n```"
);

console.log(result.text);

// Stream
const stream = await codeReviewAgent.stream("Explain this TypeScript pattern: ...");
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

Mastra Tool Calling

import { createTool } from "@mastra/core";
import { z } from "zod";

const githubTool = createTool({
  id: "search-github",
  description: "Search GitHub repositories and code",
  inputSchema: z.object({
    query: z.string().describe("Search query"),
    language: z.string().optional().describe("Filter by programming language"),
    limit: z.number().default(5).describe("Max results"),
  }),
  outputSchema: z.object({
    repositories: z.array(
      z.object({
        name: z.string(),
        stars: z.number(),
        description: z.string().nullable(),
        url: z.string(),
      })
    ),
  }),
  execute: async ({ context }) => {
    const { query, language, limit } = context;
    const q = language ? `${query} language:${language}` : query;

    const response = await fetch(
      `https://api.github.com/search/repositories?q=${encodeURIComponent(q)}&per_page=${limit}`,
      { headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` } }
    );
    const data = await response.json();

    return {
      repositories: data.items.map((repo: any) => ({
        name: repo.full_name,
        stars: repo.stargazers_count,
        description: repo.description,
        url: repo.html_url,
      })),
    };
  },
});

// Create agent with tool
const devAgent = mastra.createAgent({
  name: "dev-assistant",
  model: "gpt-4o",
  tools: { githubSearch: githubTool },
  systemPrompt: "You help developers find libraries and code examples.",
});

const result = await devAgent.generate(
  "Find the top 3 TypeScript ORMs on GitHub"
);
console.log(result.text);

Mastra Workflows — Durable Orchestration

import { createStep, createWorkflow } from "@mastra/core";
import { z } from "zod";

// Each step is a durable, retryable unit
const fetchArticleStep = createStep({
  id: "fetch-article",
  inputSchema: z.object({ url: z.string() }),
  outputSchema: z.object({ content: z.string(), title: z.string() }),
  execute: async ({ inputData }) => {
    const response = await fetch(inputData.url);
    const html = await response.text();
    return { content: extractText(html), title: extractTitle(html) };
  },
});

const summarizeStep = createStep({
  id: "summarize",
  inputSchema: z.object({ content: z.string(), title: z.string() }),
  outputSchema: z.object({ summary: z.string(), tags: z.array(z.string()) }),
  execute: async ({ inputData, mastra }) => {
    const agent = mastra.getAgent("summarizer");
    const result = await agent.generate(
      `Summarize this article titled "${inputData.title}":\n\n${inputData.content.slice(0, 5000)}`
    );

    // Parse structured output
    return JSON.parse(result.text);
  },
});

const publishStep = createStep({
  id: "publish-summary",
  inputSchema: z.object({
    summary: z.string(),
    tags: z.array(z.string()),
  }),
  outputSchema: z.object({ publishedId: z.string() }),
  execute: async ({ inputData }) => {
    const response = await fetch("/api/summaries", {
      method: "POST",
      body: JSON.stringify(inputData),
    });
    const { id } = await response.json();
    return { publishedId: id };
  },
});

// Compose workflow
const articlePipelineWorkflow = createWorkflow({
  id: "article-pipeline",
  inputSchema: z.object({ url: z.string().url() }),
  outputSchema: z.object({ publishedId: z.string() }),
})
  .then(fetchArticleStep)
  .then(summarizeStep)
  .then(publishStep)
  .commit();

// Register and execute
mastra.addWorkflow(articlePipelineWorkflow);

const run = mastra.getWorkflow("article-pipeline").createRun();
const result = await run.start({
  triggerData: { url: "https://example.com/article" },
});

console.log("Workflow result:", result.results);
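The durability model — persist each step's output so a crashed run can resume without redoing completed work — can be illustrated without Mastra. A simplified sketch (not Mastra's actual engine) where completed step results live in a checkpoint object:

```typescript
// Minimal resumable pipeline: each step's result is checkpointed by id,
// so re-running after a failure skips steps that already completed.
type Step = { id: string; run: (input: unknown) => unknown };
type Checkpoint = Record<string, unknown>;

function runPipeline(steps: Step[], input: unknown, checkpoint: Checkpoint): unknown {
  let current = input;
  for (const step of steps) {
    if (step.id in checkpoint) {
      current = checkpoint[step.id]; // already done on a previous run — skip
      continue;
    }
    current = step.run(current);
    checkpoint[step.id] = current; // persist before moving on
  }
  return current;
}

const steps: Step[] = [
  { id: "fetch", run: () => "raw article text" },
  { id: "summarize", run: (text) => `summary of: ${String(text)}` },
];

// Suppose a first run crashed after "fetch"; its result survives in the checkpoint,
// so the resumed run only executes "summarize".
const checkpoint: Checkpoint = { fetch: "raw article text" };
console.log(runPipeline(steps, null, checkpoint));
// "summary of: raw article text"
```

Mastra's engine adds real persistence, retries, and suspend/resume on top, but the core idea is exactly this checkpoint-and-skip loop.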

Mastra Agent Memory

import { Memory } from "@mastra/memory";
import { openai } from "@mastra/openai";

// Persistent memory with semantic search
const memory = new Memory({
  provider: "postgresql",
  connectionString: process.env.DATABASE_URL,
  embedProvider: openai({ model: "text-embedding-3-small" }),
});

const assistantWithMemory = mastra.createAgent({
  name: "personal-assistant",
  model: "gpt-4o",
  memory,
  systemPrompt: "You remember context from previous conversations.",
});

// Conversations are automatically stored and retrieved
const conv1 = await assistantWithMemory.generate(
  "My favorite programming language is TypeScript",
  { threadId: "user-alice", resourceId: "alice@example.com" }
);

// Later conversation — agent remembers previous context
const conv2 = await assistantWithMemory.generate(
  "What should I use for my next project?",
  { threadId: "user-alice", resourceId: "alice@example.com" }
);
// Agent will recall: "Alice mentioned TypeScript as her favorite language"
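The thread model behind { threadId, resourceId } is straightforward bookkeeping: messages are appended to and recalled from a per-thread history that gets fed back into the prompt. A toy in-memory version of that mechanism (Mastra's real implementation adds database persistence and semantic search on top):

```typescript
// Toy thread-scoped memory: each threadId maps to an ordered message log
// that would be prepended to the prompt on the next turn.
type Message = { role: "user" | "assistant"; content: string };

class ThreadMemory {
  private threads = new Map<string, Message[]>();

  append(threadId: string, message: Message): void {
    const history = this.threads.get(threadId) ?? [];
    history.push(message);
    this.threads.set(threadId, history);
  }

  recall(threadId: string): Message[] {
    return this.threads.get(threadId) ?? [];
  }
}

const memory = new ThreadMemory();
memory.append("user-alice", { role: "user", content: "My favorite language is TypeScript" });
memory.append("user-alice", { role: "assistant", content: "Noted!" });

// Next turn: Alice's prior context is available; Bob's thread is empty.
console.log(memory.recall("user-alice").length); // 2
console.log(memory.recall("user-bob").length);   // 0
```

The semantic-search layer then embeds past messages so that relevant memories can be recalled even from other threads of the same resourceId.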

Feature Comparison

| Feature | LangChain.js | GenKit | Mastra |
| --- | --- | --- | --- |
| TypeScript types | ⭐⭐⭐ (improving) | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| LLM providers | 50+ | Google AI, OpenAI, Ollama, + plugins | OpenAI, Anthropic, Google, Groq, + |
| Tool calling | ✅ | ✅ | ✅ |
| Streaming | ✅ | ✅ | ✅ |
| RAG integrations | ⭐⭐⭐⭐⭐ (50+ vector stores) | ⭐⭐⭐ | ⭐⭐⭐ |
| Workflow engine | LangGraph (separate) | Flows | ✅ Native |
| Agent memory | ✅ (external setup) | ❌ (manual) | ✅ Built-in |
| Observability | LangSmith (paid) | Firebase/Google Cloud | ✅ Built-in traces |
| Durable execution | Via LangGraph | ❌ | ✅ Suspend/resume |
| Boilerplate | High | Medium | Low |
| Weekly npm downloads | ~1.2M | ~250k | ~200k |
| GitHub stars | 35k+ | 3.5k | 18k |
| API stability | Frequent breaking | Stable | Stable (post 1.0) |
| Learning curve | High | Medium | Low-Medium |

When to Use Each

Choose LangChain.js if:

  • You need integrations with dozens of vector stores, document loaders, or LLM providers out-of-the-box
  • Your team already has LangChain.js experience and knows how to navigate the API surface
  • You need LangGraph for complex stateful multi-agent systems (LangChain's graph orchestration layer)
  • The extensive community and tutorials matter more than clean APIs

Choose GenKit if:

  • You're already in the Google Cloud / Firebase ecosystem
  • You want clean TypeScript with Zod-based output schemas
  • You're deploying to Cloud Run, Cloud Functions, or Firebase App Hosting
  • You want production-grade observability built into your AI calls by default

Choose Mastra if:

  • You're starting a new TypeScript AI project and want the best DX
  • You need durable, resumable workflows for multi-step AI pipelines
  • You want built-in agent memory without setting up your own memory infra
  • You're frustrated with LangChain's abstraction complexity

Production Observability and Debugging

Observability is where these frameworks diverge most sharply in production maturity. LangChain.js integrates with LangSmith, a paid tracing and monitoring platform that records every chain invocation, tool call, token count, and latency. LangSmith is genuinely useful for debugging complex multi-step chains where the failure is several hops from the surface error — it provides a visual timeline of the execution with inputs, outputs, and intermediate steps at each stage. The cost is LangSmith pricing (which adds to the already-significant LLM API costs) and the vendor lock-in of having your trace data in LangSmith's platform. For teams that already pay for Datadog or Honeycomb, a community OpenTelemetry exporter for LangChain.js provides an alternative, though with less LLM-specific context than LangSmith.

GenKit's observability story leverages Google Cloud's operations suite (formerly Stackdriver). When deployed on Cloud Run or Cloud Functions, GenKit flows automatically emit traces to Cloud Trace and logs to Cloud Logging with structured metadata about each flow execution. For teams already in the Google Cloud ecosystem, this is zero-configuration observability — the traces appear in the same console as the rest of your infrastructure. For teams not on Google Cloud, GenKit's observability is less compelling; the Firebase emulator includes a local tracing dashboard for development, but production monitoring requires either Google Cloud or a custom OpenTelemetry integration.

Mastra includes built-in tracing via OpenTelemetry as a first-class feature. Every agent call, tool execution, and workflow step emits spans that can be exported to any OpenTelemetry-compatible backend — Honeycomb, Jaeger, Datadog, or Grafana. The Mastra dashboard (included in the development server) shows a visual workflow execution timeline locally. In production, Mastra's traces include the LLM provider, model name, token counts, and tool call results as span attributes, making cost attribution and latency debugging straightforward without an additional paid tracing service. For teams committed to an observability-first engineering culture, Mastra's OpenTelemetry integration is a significant advantage over LangChain's LangSmith dependency.
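Because token counts ride along as span attributes, cost attribution reduces to a plain aggregation over trace data regardless of backend. A framework-agnostic sketch — the per-token prices below are placeholders, not real provider pricing:

```typescript
// Aggregate LLM spend per model from span-like records carrying token counts.
// Prices are illustrative placeholders, not actual provider pricing.
type LlmSpan = { model: string; inputTokens: number; outputTokens: number };

const pricePerToken: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 0.000005, output: 0.000015 },
};

function costByModel(spans: LlmSpan[]): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const span of spans) {
    const price = pricePerToken[span.model];
    if (!price) continue; // unknown model: skip rather than guess
    const cost = span.inputTokens * price.input + span.outputTokens * price.output;
    totals[span.model] = (totals[span.model] ?? 0) + cost;
  }
  return totals;
}

const spans: LlmSpan[] = [
  { model: "gpt-4o", inputTokens: 1000, outputTokens: 500 },
  { model: "gpt-4o", inputTokens: 2000, outputTokens: 1000 },
];

console.log(costByModel(spans)["gpt-4o"].toFixed(4));
// "0.0375"
```

The same aggregation works over LangSmith exports or Cloud Trace data — the difference between the frameworks is only how much of this attribute plumbing you get for free.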

TypeScript Integration and Schema-First Design

Type safety across the LLM boundary is one of the most practically valuable features of AI frameworks, and the three libraries differ significantly in how well they deliver it. LangChain.js's LCEL (LangChain Expression Language) chains frequently lose TypeScript type inference at composition boundaries — the .pipe() chaining API returns broadly typed intermediaries that force as any casts or manual type annotations when the chain's generic types cannot be inferred. This is an acknowledged technical debt in LangChain.js's architecture, and while it has improved across versions, it remains a friction point for TypeScript teams who want full end-to-end type safety in their AI pipelines.

GenKit and Mastra both use Zod schemas as the primary mechanism for typed inputs and outputs. GenKit's ai.defineFlow({ inputSchema: z.object({}), outputSchema: z.object({}) }) infers the TypeScript types for the flow's input and output parameters from the Zod schema — calling summarizeFlow({ url: 123 }) is a compile-time error if url is declared as z.string(). Mastra's createTool({ inputSchema: z.object({}), outputSchema: z.object({}) }) applies the same pattern to tool definitions. The TypeScript inference works correctly through the execution path — tool.execute({ context }) receives the typed input, and the return value is checked against the output schema at runtime. For teams building production AI applications where type errors should be caught before deployment rather than in production logs, GenKit and Mastra's Zod-first approach is a meaningful productivity advantage.

The LangChain Fatigue Factor

Developer sentiment in 2025–2026 shows a clear trend: LangChain.js adoption is plateauing while Mastra and GenKit are accelerating. Common complaints:

  • Frequent breaking changes — v0.1 to v0.2 migrations were painful
  • Too many abstractions — ChatPromptTemplate.fromMessages([["human", ...]]) instead of just passing a string
  • Type inference failures — LCEL chains often lose TypeScript inference, returning any
  • Hidden behavior — the chain abstraction hides what's actually happening in HTTP calls

Mastra explicitly addresses these by removing the chain abstraction and using direct function calls with strong TypeScript types throughout.


Methodology

Data sourced from npm download statistics (npmjs.com, January 2026), GitHub repositories (star counts as of February 2026), official documentation, and community discussion on Twitter/X and Discord servers. npm download data: LangChain.js (@langchain/core: 1.2M/week), GenKit (@genkit-ai/core: 250k/week), Mastra (@mastra/core: 200k/week).


Related: AI SDK vs LangChain: JavaScript 2026 for simpler LLM integration, or OpenTelemetry vs Sentry vs Datadog for observability infrastructure.

See also: ElevenLabs vs OpenAI TTS vs Cartesia and Vercel AI SDK vs OpenAI vs Anthropic SDK 2026
