
Redpanda vs NATS vs Apache Kafka: Event Streaming Platforms Compared (2026)

PkgPulse Team

TL;DR: Redpanda is the Kafka-compatible streaming platform written in C++ — same API, no JVM, no ZooKeeper, lower latency and simpler operations. NATS is the lightweight cloud-native messaging system — pub/sub, request-reply, JetStream for persistence, and built for polyglot microservices with minimal overhead. Apache Kafka is the original distributed event streaming platform — massive ecosystem, exactly-once semantics, Kafka Streams, Connect, and battle-tested at enormous scale. In 2026: Redpanda for Kafka compatibility without the operational burden, NATS for lightweight microservice communication, Kafka for the full ecosystem and enterprise-scale streaming.

Key Takeaways

  • Redpanda: Kafka API compatible, C++ (no JVM). Single binary, no ZooKeeper, built-in Schema Registry and HTTP Proxy. Vendor benchmarks report p99 latency up to 10x lower than Kafka. Best for teams wanting Kafka compatibility with simpler operations
  • NATS: Lightweight Go binary, 10MB footprint. Core pub/sub + JetStream for persistence. Request-reply pattern, key-value store, object store. Best for microservice communication and edge computing
  • Apache Kafka: Java/Scala, distributed log. Kafka Streams, Connect, Schema Registry, exactly-once semantics. Massive ecosystem. Best for enterprise event-driven architectures with complex stream processing

Redpanda — Kafka-Compatible, Zero JVM

Redpanda gives you a Kafka-compatible streaming platform in a single binary — no JVM, no ZooKeeper, and (per Redpanda's own benchmarks) up to 10x lower p99 latency.

Producer — Using KafkaJS (100% Compatible)

// Redpanda is Kafka API compatible — use any Kafka client
import { Kafka, Partitioners } from "kafkajs";

const kafka = new Kafka({
  clientId: "order-service",
  brokers: ["redpanda-0:9092", "redpanda-1:9092", "redpanda-2:9092"],
  // Same config as Kafka — just point to Redpanda brokers
});

const producer = kafka.producer({
  createPartitioner: Partitioners.DefaultPartitioner,
  idempotent: true, // Exactly-once production
});

await producer.connect();

// Send events
await producer.send({
  topic: "orders",
  messages: [
    {
      key: "order-123",
      value: JSON.stringify({
        orderId: "order-123",
        customerId: "cust-42",
        items: [{ sku: "WIDGET-A", quantity: 3, price: 29.99 }],
        total: 89.97,
        timestamp: Date.now(),
      }),
      headers: {
        "event-type": "order.created",
        "correlation-id": correlationId, // propagated from the incoming request
      },
    },
  ],
});

// Batch send for high throughput
await producer.sendBatch({
  topicMessages: [
    {
      topic: "orders",
      messages: orders.map((o) => ({
        key: o.orderId,
        value: JSON.stringify(o),
      })),
    },
    {
      topic: "analytics",
      messages: analyticsEvents.map((e) => ({
        key: e.userId,
        value: JSON.stringify(e),
      })),
    },
  ],
});

Consumer with Consumer Groups

const consumer = kafka.consumer({
  groupId: "order-processor",
  sessionTimeout: 30000,
  heartbeatInterval: 3000,
});

await consumer.connect();
await consumer.subscribe({
  topics: ["orders"],
  fromBeginning: false,
});

await consumer.run({
  eachMessage: async ({ topic, partition, message }) => {
    const order = JSON.parse(message.value!.toString());
    const eventType = message.headers?.["event-type"]?.toString();

    console.log(`[${topic}:${partition}] ${eventType}: ${order.orderId}`);

    switch (eventType) {
      case "order.created":
        await processNewOrder(order);
        break;
      case "order.paid":
        await fulfillOrder(order);
        break;
      case "order.cancelled":
        await handleCancellation(order);
        break;
    }
  },
  // Commit offsets periodically (every 5s)
  autoCommitInterval: 5000,
});

// Alternative: batch processing for higher throughput
// (a consumer runs either eachMessage or eachBatch, not both)
await consumer.run({
  eachBatch: async ({ batch, resolveOffset, heartbeat }) => {
    for (const message of batch.messages) {
      await processMessage(message);
      resolveOffset(message.offset);
      await heartbeat(); // Keep session alive during long batches
    }
  },
});

Redpanda Admin API (rpk + HTTP)

// Redpanda HTTP Proxy — REST API for admin operations
// (equivalent CLI: rpk topic create events -p 12 -r 3)
const REDPANDA_URL = "http://redpanda:8082";

// Create a topic
await fetch(`${REDPANDA_URL}/topics`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    topic: "events",
    partitions: 12,
    replication_factor: 3,
    configs: {
      "retention.ms": "604800000", // 7 days
      "segment.bytes": "1073741824", // 1 GB segments
      "cleanup.policy": "delete",
    },
  }),
});

// Produce via HTTP (useful for serverless/edge)
await fetch(`${REDPANDA_URL}/topics/events`, {
  method: "POST",
  headers: { "Content-Type": "application/vnd.kafka.json.v2+json" },
  body: JSON.stringify({
    records: [
      { key: "evt-1", value: { type: "page_view", url: "/pricing" } },
      { key: "evt-2", value: { type: "signup", email: "user@ex.com" } },
    ],
  }),
});

// Schema Registry — Redpanda includes it built-in (default port 8081)
const SCHEMA_REGISTRY_URL = "http://redpanda:8081";

await fetch(`${SCHEMA_REGISTRY_URL}/subjects/orders-value/versions`, {
  method: "POST",
  headers: { "Content-Type": "application/vnd.schemaregistry.v1+json" },
  body: JSON.stringify({
    schemaType: "JSON",
    schema: JSON.stringify({
      type: "object",
      properties: {
        orderId: { type: "string" },
        total: { type: "number" },
        items: { type: "array" },
      },
      required: ["orderId", "total"],
    }),
  }),
});
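Consuming over the HTTP Proxy follows the REST-proxy v2 flow that Redpanda implements: create a consumer instance, subscribe it, then poll for records. A sketch under that assumption — the group and instance names are illustrative, `recordsUrl` is a hypothetical helper, and exact endpoint support may vary by Redpanda version:

```typescript
// HTTP-proxy consume sketch (REST-proxy v2 flow, as implemented by Redpanda)
const PROXY_URL = "http://redpanda:8082";

// Hypothetical helper: build the per-instance records URL
export function recordsUrl(base: string, group: string, instance: string): string {
  return `${base}/consumers/${group}/instances/${instance}/records`;
}

export async function consumeViaHttp(): Promise<unknown> {
  // 1. Create a consumer instance in group "http-readers"
  await fetch(`${PROXY_URL}/consumers/http-readers`, {
    method: "POST",
    headers: { "Content-Type": "application/vnd.kafka.v2+json" },
    body: JSON.stringify({
      name: "reader-1",
      format: "json",
      "auto.offset.reset": "earliest",
    }),
  });

  // 2. Subscribe the instance to topics
  await fetch(`${PROXY_URL}/consumers/http-readers/instances/reader-1/subscription`, {
    method: "POST",
    headers: { "Content-Type": "application/vnd.kafka.v2+json" },
    body: JSON.stringify({ topics: ["events"] }),
  });

  // 3. Poll for records
  const res = await fetch(recordsUrl(PROXY_URL, "http-readers", "reader-1"), {
    headers: { Accept: "application/vnd.kafka.json.v2+json" },
  });
  return res.json();
}
```

Delete the instance (`DELETE /consumers/{group}/instances/{name}`) when you are done so the group rebalances promptly.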

NATS — Lightweight Cloud-Native Messaging

NATS is a simple, high-performance messaging system — core pub/sub, request-reply, and JetStream for persistent streaming.

Core Pub/Sub

import { connect, JSONCodec } from "nats";

const nc = await connect({
  servers: ["nats://localhost:4222"],
  // Cluster:
  // servers: ["nats://node-1:4222", "nats://node-2:4222", "nats://node-3:4222"],
});

const jc = JSONCodec();

// Publish a message (fire-and-forget)
nc.publish("orders.created", jc.encode({
  orderId: "order-123",
  customerId: "cust-42",
  total: 89.97,
}));

// Subscribe to messages
const sub = nc.subscribe("orders.*");
(async () => {
  for await (const msg of sub) {
    const order = jc.decode(msg.data);
    const eventType = msg.subject; // "orders.created", "orders.paid", etc.

    console.log(`Received ${eventType}:`, order);
    await processOrder(eventType, order);
  }
})();

// Wildcard subscriptions
nc.subscribe("orders.>"); // ">" matches one or more tokens: orders.created, orders.us.paid, ...
nc.subscribe("*.created"); // Match any service's .created events

// Queue groups — load balance across consumers
const qsub = nc.subscribe("orders.created", { queue: "order-processors" });
// Only ONE subscriber in the queue group receives each message
(async () => {
  for await (const msg of qsub) {
    await processOrder(jc.decode(msg.data));
  }
})();

Request-Reply Pattern

// Request-reply — synchronous-style RPC over NATS
// Service: handle requests
const sub = nc.subscribe("inventory.check");
(async () => {
  for await (const msg of sub) {
    const request = jc.decode(msg.data);
    const stock = await checkInventory(request.sku);

    // Reply to the requester
    msg.respond(jc.encode({
      sku: request.sku,
      available: stock > 0,
      quantity: stock,
    }));
  }
})();

// Client: send request and wait for reply
const response = await nc.request(
  "inventory.check",
  jc.encode({ sku: "WIDGET-A" }),
  { timeout: 5000 } // 5 second timeout
);

const inventory = jc.decode(response.data);
console.log(`Stock for WIDGET-A: ${inventory.quantity}`);

// Scatter-gather — request from multiple services
// Each service with the same subscription gets the request
// First response wins (or collect all within timeout)

JetStream — Persistent Streaming

// JetStream adds persistence, exactly-once delivery, and replay
const js = nc.jetstream();
const jsm = await nc.jetstreamManager();

// Create a stream (like a Kafka topic)
await jsm.streams.add({
  name: "ORDERS",
  subjects: ["orders.>"], // Capture all order events
  retention: "limits", // "limits" | "interest" | "workqueue"
  max_msgs: -1, // Unlimited
  max_bytes: 10 * 1024 * 1024 * 1024, // 10 GB
  max_age: 7 * 24 * 60 * 60 * 1000000000, // 7 days (nanoseconds)
  storage: "file", // "file" | "memory"
  num_replicas: 3,
  discard: "old",
});

// Publish to JetStream (with acknowledgment)
const ack = await js.publish("orders.created", jc.encode({
  orderId: "order-123",
  total: 89.97,
}));

console.log(`Published: seq=${ack.seq}, stream=${ack.stream}`);

// Durable consumer — survives restarts
await jsm.consumers.add("ORDERS", {
  durable_name: "order-processor",
  deliver_policy: "all", // "all" | "last" | "new" | "by_start_sequence"
  ack_policy: "explicit",
  ack_wait: 30000000000, // 30s in nanoseconds
  max_deliver: 5, // Max redelivery attempts
  filter_subject: "orders.created",
});

// Consume messages
const consumer = await js.consumers.get("ORDERS", "order-processor");

const messages = await consumer.consume();
for await (const msg of messages) {
  try {
    const order = jc.decode(msg.data);
    await processOrder(order);
    msg.ack(); // Acknowledge successful processing
  } catch (error) {
    msg.nak(); // Negative ack — redelivery
  }
}

Key-Value Store and Object Store

// NATS KV — distributed key-value store built on JetStream
const kv = await js.views.kv("app-config", {
  history: 5, // Keep last 5 versions
  ttl: 0, // No expiry
});

// Put values
await kv.put("feature.dark-mode", jc.encode({ enabled: true, rollout: 0.5 }));
await kv.put("rate-limit.api", jc.encode({ rpm: 1000, burst: 50 }));

// Get values
const entry = await kv.get("feature.dark-mode");
const config = jc.decode(entry!.value);
console.log(`Dark mode: ${config.enabled}, rollout: ${config.rollout}`);

// Watch for changes (real-time config updates)
const watch = await kv.watch({ key: "feature.>" });
(async () => {
  for await (const entry of watch) {
    console.log(`Config changed: ${entry.key} = ${JSON.stringify(jc.decode(entry.value))}`);
    updateLocalConfig(entry.key, jc.decode(entry.value));
  }
})();

// Object store — large file storage on NATS
const os = await js.views.os("artifacts");

// Store a file (readableStream: a web ReadableStream<Uint8Array>, e.g. via Readable.toWeb)
await os.put({ name: "model-v2.onnx" }, readableStream);

// Retrieve a file
const result = await os.get("model-v2.onnx");
const data = await result!.data;

Apache Kafka — Enterprise Event Streaming

Apache Kafka is the industry-standard distributed event streaming platform — billions of events per day, exactly-once semantics, and the richest ecosystem.

Producer with KafkaJS

import { Kafka, CompressionTypes, logLevel } from "kafkajs";

const kafka = new Kafka({
  clientId: "order-service",
  brokers: ["kafka-0:9092", "kafka-1:9092", "kafka-2:9092"],
  ssl: true,
  sasl: {
    mechanism: "scram-sha-256",
    username: process.env.KAFKA_USERNAME!,
    password: process.env.KAFKA_PASSWORD!,
  },
  logLevel: logLevel.WARN,
});

const producer = kafka.producer({
  idempotent: true,
  maxInFlightRequests: 1, // KafkaJS caps idempotent producers at 1 in-flight request
  transactionalId: "order-producer", // Enable transactions
});

await producer.connect();

// Transactional produce — exactly-once across topics
const transaction = await producer.transaction();
try {
  await transaction.send({
    topic: "orders",
    messages: [{ key: orderId, value: JSON.stringify(order) }],
  });

  await transaction.send({
    topic: "inventory-updates",
    messages: [{ key: order.sku, value: JSON.stringify({ delta: -order.qty }) }],
  });

  await transaction.sendOffsets({
    consumerGroupId: "order-processor",
    topics: [{ topic: "incoming-orders", partitions: [{ partition: 0, offset: "42" }] }],
  });

  await transaction.commit();
} catch (error) {
  await transaction.abort();
  throw error;
}

Consumer with Exactly-Once Processing

const consumer = kafka.consumer({
  groupId: "order-processor",
  readUncommitted: false, // Only read committed messages
  sessionTimeout: 30000,
  rebalanceTimeout: 60000,
});

await consumer.connect();
await consumer.subscribe({ topics: ["orders"], fromBeginning: false });

// Manual offset management for exactly-once
await consumer.run({
  autoCommit: false,
  eachMessage: async ({ topic, partition, message }) => {
    const order = JSON.parse(message.value!.toString());

    // Process with idempotency check
    const processed = await isAlreadyProcessed(message.offset, partition);
    if (processed) return;

    await processOrder(order);
    await markProcessed(message.offset, partition);

    // Commit offset after successful processing
    await consumer.commitOffsets([{
      topic,
      partition,
      offset: (BigInt(message.offset) + 1n).toString(),
    }]);
  },
});

// Seek to a specific offset to replay events (valid only after run() has started)
consumer.seek({
  topic: "orders",
  partition: 0,
  offset: "1000", // Replay from offset 1000
});

Admin Operations

const admin = kafka.admin();
await admin.connect();

// Create topic with configuration
await admin.createTopics({
  topics: [
    {
      topic: "events",
      numPartitions: 24,
      replicationFactor: 3,
      configEntries: [
        { name: "retention.ms", value: "604800000" },       // 7 days
        { name: "cleanup.policy", value: "compact,delete" }, // Log compaction
        { name: "min.insync.replicas", value: "2" },         // Durability
        { name: "compression.type", value: "zstd" },         // Compression
      ],
    },
  ],
});

// List consumer group offsets and lag
// (KafkaJS v2: fetchOffsets takes `topics` and returns per-topic partition arrays)
const groupOffsets = await admin.fetchOffsets({
  groupId: "order-processor",
  topics: ["orders"],
});
const topicOffsets = await admin.fetchTopicOffsets("orders");

for (const { partitions } of groupOffsets) {
  for (const partition of partitions) {
    const latest = topicOffsets.find((t) => t.partition === partition.partition);
    const lag = BigInt(latest!.offset) - BigInt(partition.offset);
    console.log(`Partition ${partition.partition}: offset=${partition.offset}, lag=${lag}`);
  }
}

// Alter consumer group offsets (reset to earliest)
await admin.setOffsets({
  groupId: "order-processor",
  topic: "orders",
  partitions: [
    { partition: 0, offset: "0" },
    { partition: 1, offset: "0" },
  ],
});

await admin.disconnect();

Kafka Connect (REST API)

// Kafka Connect — source and sink connectors
const CONNECT_URL = "http://connect:8083";

// Create a PostgreSQL CDC source connector (Debezium)
await fetch(`${CONNECT_URL}/connectors`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    name: "postgres-source",
    config: {
      "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
      "database.hostname": "postgres",
      "database.port": "5432",
      "database.user": "debezium",
      "database.password": process.env.DB_PASSWORD,
      "database.dbname": "app",
      "topic.prefix": "cdc",
      "table.include.list": "public.orders,public.users",
      "slot.name": "debezium_slot",
      "publication.name": "debezium_pub",
    },
  }),
});

// Create an Elasticsearch sink connector
await fetch(`${CONNECT_URL}/connectors`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    name: "elasticsearch-sink",
    config: {
      "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
      "topics": "cdc.public.orders",
      "connection.url": "http://elasticsearch:9200",
      "type.name": "_doc",
      "key.ignore": "false",
      "schema.ignore": "true",
    },
  }),
});
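Once connectors are created, the same REST API reports health and can restart failed tasks. A sketch using standard Connect endpoints — `hasFailedTask` is a hypothetical helper, and the `includeTasks` query parameter requires Connect 3.0+:

```typescript
// Check and manage a connector via the Kafka Connect REST API
const CONNECT_URL = "http://connect:8083";

// Task states reported by Connect: RUNNING, PAUSED, FAILED, UNASSIGNED
export function hasFailedTask(status: {
  tasks: { id: number; state: string }[];
}): boolean {
  return status.tasks.some((t) => t.state === "FAILED");
}

export async function restartIfFailed(name: string): Promise<void> {
  const res = await fetch(`${CONNECT_URL}/connectors/${name}/status`);
  const status = await res.json();

  if (hasFailedTask(status)) {
    // Restart the connector and its tasks in one call (Connect 3.0+)
    await fetch(`${CONNECT_URL}/connectors/${name}/restart?includeTasks=true`, {
      method: "POST",
    });
  }
}
```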

Feature Comparison

| Feature                | Redpanda                  | NATS                       | Apache Kafka            |
| ---------------------- | ------------------------- | -------------------------- | ----------------------- |
| Language               | C++                       | Go                         | Java/Scala              |
| Protocol               | Kafka API (compatible)    | NATS protocol              | Kafka protocol          |
| Dependencies           | Single binary             | Single binary              | JVM + ZooKeeper/KRaft   |
| Latency (p99)          | ~2ms                      | ~1ms                       | ~10-50ms                |
| Throughput             | Millions msg/s            | Millions msg/s             | Millions msg/s          |
| Persistence            | ✅ (built-in)             | JetStream (opt-in)         | ✅ (default)            |
| Exactly-Once           | ✅ (Kafka-compatible)     | JetStream (ack-based)      | ✅ (transactions)       |
| Pub/Sub                | Topic-based               | Subject-based (wildcards)  | Topic-based             |
| Request-Reply          | ❌ (use separate topics)  | ✅ (built-in)              | ❌ (use separate topics)|
| Consumer Groups        | ✅ (Kafka groups)         | Queue groups + durable     | ✅ (consumer groups)    |
| Schema Registry        | ✅ (built-in)             | ❌                         | ✅ (Confluent)          |
| Stream Processing      | Kafka Streams compatible  | ❌ (external)              | Kafka Streams, ksqlDB   |
| Connectors             | Kafka Connect compatible  | ❌ (custom)                | Kafka Connect (1000+)   |
| Key-Value Store        | ❌                        | ✅ (NATS KV)               | ❌                      |
| Object Store           | ❌                        | ✅ (NATS Object Store)     | ❌                      |
| Replication            | Raft consensus            | Raft (JetStream)           | ISR replication         |
| Multi-Tenancy          | ✅ (ACLs)                 | ✅ (accounts)              | ✅ (ACLs)               |
| Monitoring             | Built-in console          | Built-in monitoring        | JMX + external tools    |
| Cloud Managed          | Redpanda Cloud            | Synadia Cloud              | Confluent, MSK, Aiven   |
| License                | BSL → Apache 2.0          | Apache 2.0                 | Apache 2.0              |
| Operational Complexity | Low                       | Very low                   | High                    |
| Best For               | Kafka migration           | Microservices, edge        | Enterprise streaming    |

When to Use Each

Choose Redpanda if:

  • You want Kafka compatibility without JVM operational burden
  • Lower latency matters (vendor benchmarks report up to 10x better p99 than Kafka)
  • You're migrating from Kafka and need drop-in compatibility
  • You want built-in Schema Registry and admin console
  • Simpler deployment (single binary, no ZooKeeper) is important

Choose NATS if:

  • Lightweight microservice communication is the primary use case
  • Request-reply pattern is important for your architecture
  • You need a built-in key-value store for configuration
  • Edge computing or IoT with minimal resource footprint matters
  • Pub/sub with wildcard subject routing fits your event model

Choose Apache Kafka if:

  • You need the full ecosystem (Kafka Streams, Connect, ksqlDB)
  • 1000+ pre-built connectors for sources and sinks are important
  • Exactly-once transactional processing across topics is required
  • Enterprise features, compliance, and Confluent support are hard requirements
  • You're building complex stream processing pipelines

Methodology

Feature comparison based on Redpanda v24.x, NATS v2.x with JetStream, and Apache Kafka 3.x documentation as of March 2026. Performance characteristics from published benchmarks. Code examples use KafkaJS for Redpanda/Kafka and nats.js for NATS. Evaluated on: operational simplicity, latency, throughput, feature set, ecosystem, and Node.js developer experience.
