Guide

OpenTelemetry vs Sentry vs Datadog 2026

Compare OpenTelemetry, Sentry, and the Datadog SDK for observability in Node.js: tracing, error monitoring, metrics, logs, and which observability tool to use when.

By PkgPulse Team

TL;DR

These three tools solve different (but overlapping) problems. OpenTelemetry is the vendor-neutral standard for instrumenting your code — it collects traces, metrics, and logs, then exports to any backend. Sentry excels at error monitoring and session replay — it's the best tool for understanding what went wrong from a user's perspective. Datadog is a full observability platform — infrastructure metrics, APM, logs, and synthetics in one dashboard. The right answer for most teams: use Sentry for errors + OpenTelemetry for traces sent to a preferred backend.

Key Takeaways

  • @opentelemetry/sdk-node: ~4.8M weekly downloads — vendor-neutral, standard instrumentation
  • @sentry/node: ~3.2M weekly downloads — best error tracking + session replay + user context
  • dd-trace (Datadog): ~1.8M weekly downloads — full APM stack, proprietary platform
  • OpenTelemetry ≠ a backend — it's instrumentation that exports to Jaeger, Tempo, Datadog, etc.
  • Sentry and Datadog are complete platforms with their own agents AND backends
  • Best combination: OpenTelemetry (traces/metrics) + Sentry (errors/sessions)

The Three Pillars of Observability

Observability Pillars:
  📊 Metrics — "How is the system performing overall?"
     e.g., request rate, error rate, p99 latency, CPU usage

  🔍 Traces — "Where did this specific request go slow?"
     e.g., distributed trace: API → DB → Cache → third-party service

  📋 Logs — "What exactly happened at this point in time?"
     e.g., structured log entries with correlation IDs

  🐛 Errors — "When did something break, and what was the user doing?"
     e.g., exception stack trace + user context + session replay

OpenTelemetry

OpenTelemetry (CNCF project) standardizes how you instrument code. It's vendor-neutral — you instrument once and export to any backend:

// instrumentation.ts — Set up BEFORE importing anything else
import { NodeSDK } from "@opentelemetry/sdk-node"
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node"
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http"
import { OTLPMetricExporter } from "@opentelemetry/exporter-metrics-otlp-http"
import { PeriodicExportingMetricReader } from "@opentelemetry/sdk-metrics"
import { Resource } from "@opentelemetry/resources"
import { SEMRESATTRS_SERVICE_NAME, SEMRESATTRS_SERVICE_VERSION } from "@opentelemetry/semantic-conventions"

const sdk = new NodeSDK({
  resource: new Resource({
    [SEMRESATTRS_SERVICE_NAME]: "pkgpulse-api",
    [SEMRESATTRS_SERVICE_VERSION]: process.env.APP_VERSION ?? "unknown",
  }),

  // Export traces to Jaeger, Tempo, Datadog, Honeycomb, etc.:
  traceExporter: new OTLPTraceExporter({
    url: process.env.OTEL_EXPORTER_OTLP_TRACES_ENDPOINT ?? "http://localhost:4318/v1/traces",
    headers: {
      Authorization: `Bearer ${process.env.OTEL_EXPORTER_OTLP_TOKEN}`,
    },
  }),

  // Export metrics:
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter({
      url: "http://localhost:4318/v1/metrics",
    }),
    exportIntervalMillis: 30000,
  }),

  // Auto-instrument common libraries:
  instrumentations: [
    getNodeAutoInstrumentations({
      "@opentelemetry/instrumentation-http": {
        ignoreIncomingRequestHook: (req) => req.url?.includes("/health"),
      },
      "@opentelemetry/instrumentation-express": { enabled: true },
      "@opentelemetry/instrumentation-pg": { enabled: true },
      "@opentelemetry/instrumentation-redis": { enabled: true },
      "@opentelemetry/instrumentation-undici": { enabled: true },  // Node's global fetch
    }),
  ],
})

sdk.start()

process.on("SIGTERM", () => sdk.shutdown())

Auto-instrumentation covers:

  • HTTP/HTTPS (incoming + outgoing requests)
  • Express, Fastify, Hono, Koa, NestJS
  • PostgreSQL, MySQL, MongoDB, Redis
  • fetch (undici), axios, got
  • AWS SDK, gRPC, GraphQL, Prisma

Manual spans for custom operations:

import { trace, context, propagation, SpanStatusCode } from "@opentelemetry/api"

const tracer = trace.getTracer("pkgpulse-service")

async function fetchPackageDownloads(packageName: string) {
  return tracer.startActiveSpan("fetch_package_downloads", async (span) => {
    span.setAttributes({
      "package.name": packageName,
      "data.source": "npm_registry",
    })

    try {
      const data = await npmRegistryClient.getDownloads(packageName)

      span.setAttributes({
        "package.downloads": data.weekly,
        "cache.hit": false,
      })

      return data
    } catch (error) {
      span.recordException(error as Error)
      span.setStatus({ code: SpanStatusCode.ERROR, message: (error as Error).message })
      throw error
    } finally {
      span.end()
    }
  })
}

Custom metrics:

import { metrics } from "@opentelemetry/api"

const meter = metrics.getMeter("pkgpulse-metrics")

// Counter:
const searchCounter = meter.createCounter("search_requests", {
  description: "Number of package search requests",
})

// Histogram (for latency):
const dbQueryHistogram = meter.createHistogram("db_query_duration", {
  description: "DB query duration in milliseconds",
  unit: "ms",
})

// Observable gauge (sampled periodically):
const connectionPoolGauge = meter.createObservableGauge("db_connection_pool_size", {
  description: "Current DB connection pool size",
})
connectionPoolGauge.addCallback((result) => {
  result.observe(db.pool.size, { pool: "primary" })
  result.observe(db.readPool.size, { pool: "replica" })
})

// Usage in route handlers:
app.get("/search", async (req, res) => {
  searchCounter.add(1, { query_type: "fuzzy" })

  const start = Date.now()
  const results = await searchPackages(req.query.q)
  dbQueryHistogram.record(Date.now() - start, { query: "search" })

  res.json(results)
})

Sentry

Sentry is purpose-built for error monitoring with user context:

import * as Sentry from "@sentry/node"
import { nodeProfilingIntegration } from "@sentry/profiling-node"

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
  release: process.env.APP_VERSION,
  integrations: [
    nodeProfilingIntegration(),      // CPU profiling
    Sentry.httpIntegration(),        // HTTP request tracking
    Sentry.expressIntegration(),     // Express instrumentation
    Sentry.postgresIntegration(),    // PostgreSQL query tracking
    Sentry.redisIntegration(),       // Redis tracking
  ],
  tracesSampleRate: process.env.NODE_ENV === "production" ? 0.1 : 1.0,
  profilesSampleRate: 0.1,
  beforeSend: (event) => {
    // Scrub PII before sending:
    if (event.request?.cookies) {
      delete event.request.cookies
    }
    return event
  },
})

// Add user context to all errors:
Sentry.setUser({
  id: session.userId,
  email: session.email,
  ip_address: req.ip,
})

// Custom error capturing:
try {
  await riskyOperation()
} catch (error) {
  Sentry.captureException(error, {
    tags: { operation: "package_publish", package: packageName },
    extra: { packageData: sanitizedPackageData },
    level: "error",
  })
}

// Manual spans for performance (@sentry/node 8.x API — startTransaction and
// getCurrentHub were removed in v8; use startSpan instead):
await Sentry.startSpan({ name: "package-comparison", op: "task" }, async () => {
  const packages = await Sentry.startSpan(
    { op: "db", name: "fetch packages" },
    () => db.getPackages(names),
  )
})

Sentry's unique capabilities:

// Breadcrumbs — trace what happened before an error:
Sentry.addBreadcrumb({
  category: "navigation",
  message: `User navigated to /compare/${pkg1}-vs-${pkg2}`,
  level: "info",
})

Sentry.addBreadcrumb({
  category: "fetch",
  message: `Fetched npm data for ${pkg1}`,
  data: { status: 200, duration: 234 },
})

// Error grouping with fingerprinting:
Sentry.captureException(error, {
  fingerprint: ["{{ default }}", "database-connection-error"],
})

// Issue alerts and notifications are configured in Sentry UI

Sentry React SDK (session replay):

// @sentry/react adds session replay — records what user did before error:
import * as Sentry from "@sentry/react"

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  integrations: [
    Sentry.replayIntegration({
      maskAllText: false,  // Only mask sensitive fields
      blockAllMedia: false,
      maskAllInputs: true,  // Mask all form inputs
    }),
  ],
  replaysSessionSampleRate: 0.1,    // Sample 10% of sessions
  replaysOnErrorSampleRate: 1.0,    // Capture 100% of error sessions
})

Datadog (dd-trace)

Datadog's APM is a full observability platform:

// Must be first import — before anything else:
import tracer from "dd-trace"

tracer.init({
  service: "pkgpulse-api",
  env: process.env.NODE_ENV,
  version: process.env.APP_VERSION,
  profiling: true,          // Enable continuous profiling
  runtimeMetrics: true,     // Node.js runtime metrics (heap, GC, event loop)
  logInjection: true,       // Add trace IDs to logs automatically
})

// dd-trace auto-instruments: http, express, pg, redis, mongoose,
// elasticsearch, grpc, kafkajs, aws-sdk, and 100+ more

// Manual spans:
const span = tracer.startSpan("package.score.calculate", {
  tags: {
    "package.name": packageName,
    "score.type": "health",
  },
})

try {
  const score = await calculateHealthScore(packageName)
  span.setTag("score.result", score)
  return score
} catch (error) {
  span.setTag("error", error)
  throw error
} finally {
  span.finish()
}

// Custom metrics (sent to Datadog StatsD agent):
import StatsD from "hot-shots"
const dogstatsd = new StatsD()

dogstatsd.increment("package.search.count", 1, { query_type: "exact" })
dogstatsd.histogram("package.compare.duration", durationMs)
dogstatsd.gauge("cache.hit.rate", hitRate)

Datadog vs OpenTelemetry:

// Datadog can RECEIVE OpenTelemetry data:
// Set OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
// And Datadog Agent accepts OTLP — best of both worlds:

// Use OpenTelemetry for instrumentation → send to Datadog Agent → Datadog platform
const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: "http://localhost:4318/v1/traces",  // Datadog Agent OTLP endpoint
  }),
  instrumentations: [getNodeAutoInstrumentations()],
})

Cost Comparison

| Tool | Free Tier | Paid | Notes |
| --- | --- | --- | --- |
| Sentry | 5K errors/month, 1 member | $26/mo+ | Generous free tier for small projects |
| Datadog | No meaningful free tier | $15/host/month+ | Expensive at scale |
| OpenTelemetry | N/A (instrumentation only) | Depends on backend | Free to instrument; backend has cost |
| Grafana Tempo (OTel backend) | Self-hosted free | $8+/month cloud | Free if self-hosting |
| Honeycomb | 20M events/month free | $130/month+ | OTel-native, excellent DX |
| Jaeger | Self-hosted free | N/A | Open-source backend |

Feature Comparison

| Feature | OpenTelemetry | Sentry | Datadog |
| --- | --- | --- | --- |
| Distributed tracing | ✅ | ✅ | ✅ |
| Error monitoring | ❌ (via logs) | ✅ Excellent | ✅ |
| Session replay | ❌ | ✅ | ✅ (via RUM) |
| Metrics | ✅ | ✅ Basic | ✅ Excellent |
| Log management | ✅ (logs signal) | ⚠️ Limited | ✅ Excellent |
| Vendor lock-in | ❌ None | ⚠️ Sentry platform | ✅ Proprietary |
| Self-hostable | ✅ (OSS backends) | ✅ Sentry.io OSS | ❌ |
| Auto-instrumentation | ✅ | ✅ | ✅ |
| TypeScript | ✅ | ✅ | ✅ |
| Cost | Backend-dependent | $0-26+/mo | $15+/host/month |

For most production Node.js applications in 2026:

// Combine both for best coverage:

// 1. Sentry for errors + user context (front + back):
Sentry.init({ dsn: "...", tracesSampleRate: 0.1 })

// 2. OpenTelemetry for traces + metrics → send to Grafana Cloud or Honeycomb:
const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({ url: process.env.OTLP_ENDPOINT }),
  instrumentations: [getNodeAutoInstrumentations()],
})

// This gives you:
// - Sentry: Error alerts, user impact, session replay, release tracking
// - OTel → Grafana/Honeycomb: Trace visualization, performance analysis, metrics

Choosing the Right Observability Stack for Your Team

The most common mistake teams make is treating observability tools as interchangeable — picking one and expecting it to cover all three pillars equally well. In practice, OpenTelemetry, Sentry, and Datadog each excel in different dimensions, and the strongest production setups combine at least two of them. OpenTelemetry is not a platform at all: it is a standardization layer that defines how instrumentation data is collected and exported. You still need a backend — whether that is Grafana Tempo, Jaeger, Honeycomb, or Datadog itself — to store, query, and visualize the telemetry. Teams that deploy OpenTelemetry without planning their backend architecture quickly discover that raw OTLP data without a query interface is nearly useless.

Sentry occupies a unique space that neither OpenTelemetry nor Datadog fully replaces: user-centric error intelligence. When a JavaScript exception fires in production, Sentry captures the stack trace, the user's session replay showing what they were doing for the 30 seconds before the crash, the sequence of breadcrumbs through the application, and the release version. This combination makes it the fastest way to reproduce and prioritize bugs from a user impact perspective. A 5xx error in your distributed trace tells you something broke; a Sentry issue with session replay tells you why the user was confused and what sequence of actions triggered it.

Production Sampling and Cost Optimization

Distributed tracing at scale generates enormous data volumes. A service handling ten thousand requests per second would produce ten thousand trace entries per second if you naively traced everything, which is neither economically feasible nor practically useful — most traces look identical. Both OpenTelemetry and Datadog support sampling strategies to bring volumes down to manageable levels.
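To make the volume concrete, here is a back-of-envelope sketch (our own arithmetic, not vendor pricing): even a 1% head sample of 10,000 requests per second still retains over eight million traces per day.

```typescript
// Back-of-envelope trace volume math. Tracing everything at 10,000 req/s
// means 10,000 traces/s; sampling brings that down to a budgetable volume.

function tracesPerDay(requestsPerSecond: number, sampleRate: number): number {
  return Math.round(requestsPerSecond * sampleRate * 60 * 60 * 24)
}

tracesPerDay(10_000, 1.0)  // 864,000,000 traces/day unsampled
tracesPerDay(10_000, 0.01) // 8,640,000 traces/day at a 1% head sample
```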

OpenTelemetry supports head-based sampling in the SDK (the decision is made at the start of a request) and tail-based sampling in the Collector (the decision is made after the request completes, allowing you to always keep slow or errored traces). The TraceIdRatioBasedSampler is the most common head sampler: configure it with a ratio of 0.1 to keep 10% of requests. For production APIs with consistent latency, 1-5% sampling captures enough data to identify p99 outliers while keeping egress costs reasonable. Sentry's tracesSampleRate option works similarly for its performance monitoring features; setting it to 0.1 in production and 1.0 in development is a standard pattern.
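A simplified sketch of how a trace-ID-ratio head sampler decides (the real @opentelemetry/sdk-trace-base implementation differs in detail — this only illustrates the key idea that the decision is a pure function of the trace ID):

```typescript
// Head-based sampling in the spirit of TraceIdRatioBasedSampler: the keep/drop
// decision is derived deterministically from the trace ID, so every service
// applying the same ratio to the same distributed trace agrees.

function shouldSample(traceId: string, ratio: number): boolean {
  // Interpret the first 8 hex chars of the 32-char trace ID as a 32-bit number...
  const bucket = parseInt(traceId.slice(0, 8), 16)
  // ...and keep the trace if it falls below ratio * 2^32.
  return bucket < ratio * 0x100000000
}
```

Because every service derives the same answer from the shared trace ID, a distributed trace is either kept end-to-end or dropped end-to-end — never half-recorded.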

Datadog's Adaptive Retention algorithm automatically promotes traces with errors, high latency, or rare operation patterns regardless of the base sample rate — this "intelligent retention" is one of Datadog's most practical enterprise features and justifies its cost at scale.

TypeScript Integration and Developer Experience

All three tools have strong TypeScript support, but the quality of the developer experience during instrumentation differs. OpenTelemetry's @opentelemetry/api package is typed, but the API is relatively verbose because it was designed to be language-agnostic — it does not take advantage of TypeScript-specific patterns like discriminated unions or inferred generics. The span.setAttributes() method accepts SpanAttributes which is essentially Record<string, AttributeValue>, so TypeScript won't prevent you from setting attributes with inconsistent types across call sites.
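One common mitigation (our own pattern, not part of the SDK) is to declare the attribute schema once and route every setAttributes call through a typed helper, so inconsistent types become compile errors:

```typescript
// Declare the attribute schema once; AppAttributes, SpanLike, and
// setTypedAttributes are illustrative names, not OTel API.

interface AppAttributes {
  "package.name": string
  "package.downloads": number
  "cache.hit": boolean
}

// Minimal stand-in for the slice of the OTel Span interface we use:
interface SpanLike {
  setAttributes(attrs: Record<string, string | number | boolean>): void
}

function setTypedAttributes(span: SpanLike, attrs: Partial<AppAttributes>): void {
  span.setAttributes(attrs as Record<string, string | number | boolean>)
}

// setTypedAttributes(span, { "package.downloads": "12" })
//   ^ compile error: "package.downloads" must be a number at every call site
```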

Sentry's SDK has excellent TypeScript support, including typed event hints, typed captureException extras, and typed Scope mutations. The beforeSend hook is properly typed so TypeScript validates that you return the correct shape. Datadog's dd-trace has adequate TypeScript definitions but ships with @types/dd-trace as a separate package, and the typings for the StatsD client (hot-shots) require a separate @types/hot-shots install.

For new TypeScript projects, the practical recommendation is to write your observability abstraction layer in terms of OpenTelemetry's API package — which is stable, well-typed, and vendor-neutral — and use Sentry for the error capture layer where its user-context model is irreplaceable.
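A minimal sketch of what such an abstraction layer can look like (the interface and names are ours; the placeholder bodies mark where tracer.startActiveSpan and Sentry.captureException would be wired in):

```typescript
// Thin observability facade: application code depends only on this module,
// and the OTel tracer / Sentry client are wired in behind it.

interface Telemetry {
  withSpan<T>(name: string, fn: () => Promise<T> | T): Promise<T>
  captureError(error: unknown, context?: Record<string, string>): void
}

// Default implementation; production wiring would delegate withSpan to
// tracer.startActiveSpan and captureError to Sentry.captureException.
const telemetry: Telemetry = {
  async withSpan(name, fn) {
    // span start + attributes would go here
    try {
      return await fn()
    } finally {
      // span end would go here
    }
  },
  captureError(error, context) {
    console.error("captured", error, context)
  },
}

// Application code stays vendor-neutral:
// await telemetry.withSpan("fetch_package_downloads", () => npmClient.getDownloads(name))
```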

Self-Hosting vs Managed Services

One of the most significant strategic decisions in observability is whether to run your own backend infrastructure or pay for a managed platform. OpenTelemetry makes this choice explicit — the SDK is agnostic, and you choose the backend separately. Jaeger and Grafana Tempo are the two most commonly self-hosted OpenTelemetry backends: Jaeger is simpler and better for getting started, while Tempo scales better and integrates with Grafana Loki (logs) and Grafana Mimir (metrics) for a complete self-hosted stack.

Datadog explicitly does not support self-hosting. The Datadog Agent that runs on your servers or as a Kubernetes DaemonSet ships data to Datadog's cloud infrastructure. This simplifies deployment but creates vendor lock-in and a cost structure that scales with your infrastructure size. At small scale (5-10 hosts), Datadog's cost is manageable; at 100+ hosts, the bill becomes a material line item in the infrastructure budget.

Sentry occupies a middle ground: the Sentry product is open-source (sentry.io runs on the Sentry codebase), and self-hosting is officially supported. However, self-hosted Sentry is a heavy deployment — it requires Postgres, Redis, Kafka, Celery workers, and the Snuba storage layer. For teams with fewer than 50 developers, the operational overhead of self-hosting Sentry rarely justifies the cost savings over their hosted tiers.

Migration Paths and Vendor Portability

The most powerful long-term argument for OpenTelemetry is vendor portability. If you instrument your application once with OpenTelemetry, changing your backend is a configuration change rather than a code change. Teams that started with Jaeger can switch to Honeycomb by changing OTEL_EXPORTER_OTLP_ENDPOINT. Teams that want to try Datadog alongside their existing Grafana setup can run two exporters simultaneously.
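Running two backends side by side during a migration or evaluation can be sketched like this (assuming a recent @opentelemetry/sdk-node that accepts a spanProcessors array; both endpoint URLs are placeholders):

```typescript
import { NodeSDK } from "@opentelemetry/sdk-node"
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base"
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http"

// Fan the same spans out to two OTLP backends simultaneously:
const sdk = new NodeSDK({
  spanProcessors: [
    // e.g. existing Grafana/Tempo stack:
    new BatchSpanProcessor(
      new OTLPTraceExporter({ url: "http://localhost:4318/v1/traces" }),
    ),
    // e.g. Datadog Agent's OTLP endpoint being evaluated in parallel:
    new BatchSpanProcessor(
      new OTLPTraceExporter({ url: process.env.SECOND_OTLP_ENDPOINT }),
    ),
  ],
})

sdk.start()
```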

This portability matters most when your observability needs evolve. Early-stage startups often start with Grafana Cloud's free tier (50GB traces/month free), then migrate to a paid tier or self-hosted stack as volume grows. Without OpenTelemetry, this migration means re-instrumenting every service. With OpenTelemetry, it means updating a configuration file. The upfront investment in writing standards-compliant instrumentation pays dividends when your infrastructure strategy changes.

Sentry and Datadog do not offer this portability by design. Their agents are proprietary, their data formats are proprietary, and migrating off either platform requires re-instrumentation. This is a reasonable trade-off for teams that prioritize the polish of their respective platforms, but it should be a conscious decision rather than an accidental one.

When Datadog Makes Economic Sense

Despite Datadog's reputation as the expensive enterprise choice, there are scenarios where it is genuinely the most economical option when you factor in engineering time. Datadog's automatic infrastructure correlation — linking a slow API trace to a spike in CPU on the specific EC2 instance that handled it — requires zero additional configuration when using the Datadog Agent. Reproducing this correlation manually in a self-hosted OpenTelemetry stack requires configuring infrastructure metrics collection, correlating service names between your APM traces and your infrastructure metrics, and building Grafana dashboards that join data from multiple sources.

For teams with dedicated DevOps or SRE engineers who enjoy building observability infrastructure, the self-hosted route is rewarding and economical. For product-focused startups where every engineer needs to focus on the product, paying Datadog to handle the correlation and alerting is often the correct economic choice even at their premium pricing.

Methodology

Download data from npm registry (weekly average, February 2026). Cost data from official pricing pages (March 2026). Feature comparison based on @opentelemetry/sdk-node 1.x, @sentry/node 8.x, and dd-trace 5.x.
