OpenTelemetry vs Sentry vs Datadog: Observability in Node.js (2026)

PkgPulse Team

TL;DR

These three tools solve different (but overlapping) problems. OpenTelemetry is the vendor-neutral standard for instrumenting your code — it collects traces, metrics, and logs, then exports to any backend. Sentry excels at error monitoring and session replay — it's the best tool for understanding what went wrong from a user's perspective. Datadog is a full observability platform — infrastructure metrics, APM, logs, and synthetics in one dashboard. The right answer for most teams: Sentry for errors, plus OpenTelemetry for traces exported to the backend of your choice.

Key Takeaways

  • @opentelemetry/sdk-node: ~4.8M weekly downloads — vendor-neutral, standard instrumentation
  • @sentry/node: ~3.2M weekly downloads — best error tracking + session replay + user context
  • dd-trace (Datadog): ~1.8M weekly downloads — full APM stack, proprietary platform
  • OpenTelemetry ≠ a backend — it's instrumentation that exports to Jaeger, Tempo, Datadog, etc.
  • Sentry and Datadog are complete platforms with their own agents AND backends
  • Best combination: OpenTelemetry (traces/metrics) + Sentry (errors/sessions)

The Three Pillars of Observability

Observability Pillars:
  📊 Metrics — "How is the system performing overall?"
     e.g., request rate, error rate, p99 latency, CPU usage

  🔍 Traces — "Where did this specific request go slow?"
     e.g., distributed trace: API → DB → Cache → third-party service

  📋 Logs — "What exactly happened at this point in time?"
     e.g., structured log entries with correlation IDs

  🐛 Errors (in practice, a fourth pillar) — "When did something break, and what was the user doing?"
     e.g., exception stack trace + user context + session replay
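
The pillars connect through correlation: a log line that carries the active trace ID can be joined to its trace in any backend. A minimal sketch (the field names here are illustrative, not a standard schema):

```typescript
// A structured log entry carrying a trace ID so logs can be correlated
// with traces. Field names are illustrative, not a standard schema.
interface LogEntry {
  level: "info" | "warn" | "error"
  message: string
  traceId: string
  timestamp: string
}

function makeLogEntry(level: LogEntry["level"], message: string, traceId: string): LogEntry {
  return { level, message, traceId, timestamp: new Date().toISOString() }
}

const entry = makeLogEntry("error", "npm registry timeout", "4bf92f3577b34da6a3ce929d0e0e4736")
console.log(JSON.stringify(entry))
```

Any backend that indexes `traceId` can then pivot from this log line to the full distributed trace.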

OpenTelemetry

OpenTelemetry (CNCF project) standardizes how you instrument code. It's vendor-neutral — you instrument once and export to any backend:

// instrumentation.ts — Set up BEFORE importing anything else
import { NodeSDK } from "@opentelemetry/sdk-node"
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node"
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http"
import { OTLPMetricExporter } from "@opentelemetry/exporter-metrics-otlp-http"
import { PeriodicExportingMetricReader } from "@opentelemetry/sdk-metrics"
import { Resource } from "@opentelemetry/resources"
import { SEMRESATTRS_SERVICE_NAME, SEMRESATTRS_SERVICE_VERSION } from "@opentelemetry/semantic-conventions"

const sdk = new NodeSDK({
  resource: new Resource({
    [SEMRESATTRS_SERVICE_NAME]: "pkgpulse-api",
    [SEMRESATTRS_SERVICE_VERSION]: process.env.APP_VERSION ?? "unknown",
  }),

  // Export traces to Jaeger, Tempo, Datadog, Honeycomb, etc.:
  traceExporter: new OTLPTraceExporter({
    url: process.env.OTEL_EXPORTER_OTLP_TRACES_ENDPOINT ?? "http://localhost:4318/v1/traces",
    headers: {
      Authorization: `Bearer ${process.env.OTEL_EXPORTER_OTLP_TOKEN}`,
    },
  }),

  // Export metrics:
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter({
      url: "http://localhost:4318/v1/metrics",
    }),
    exportIntervalMillis: 30000,
  }),

  // Auto-instrument common libraries:
  instrumentations: [
    getNodeAutoInstrumentations({
      "@opentelemetry/instrumentation-http": {
        ignoreIncomingRequestHook: (req) => req.url?.includes("/health"),
      },
      "@opentelemetry/instrumentation-express": { enabled: true },
      "@opentelemetry/instrumentation-pg": { enabled: true },
      "@opentelemetry/instrumentation-redis": { enabled: true },
      "@opentelemetry/instrumentation-undici": { enabled: true },  // covers fetch (Node's fetch is built on undici)
    }),
  ],
})

sdk.start()

process.on("SIGTERM", () => sdk.shutdown())

Auto-instrumentation covers:

  • HTTP/HTTPS (incoming + outgoing requests)
  • Express, Fastify, Hono, Koa, NestJS
  • PostgreSQL, MySQL, MongoDB, Redis
  • fetch (undici), axios, got, node:http
  • AWS SDK, gRPC, GraphQL, Prisma
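
Auto-instrumented HTTP clients and servers propagate context across services via the W3C `traceparent` header, formatted as `version-traceid-spanid-flags`. A minimal parser sketch showing what actually travels on the wire:

```typescript
// Parse a W3C traceparent header: version-traceid-spanid-flags.
// Example: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
interface TraceContext {
  version: string
  traceId: string   // 32 lowercase hex chars
  spanId: string    // 16 lowercase hex chars
  sampled: boolean  // low bit of the flags byte
}

function parseTraceparent(header: string): TraceContext | null {
  const parts = header.split("-")
  if (parts.length !== 4) return null
  const [version, traceId, spanId, flags] = parts
  if (!/^[0-9a-f]{32}$/.test(traceId) || !/^[0-9a-f]{16}$/.test(spanId)) return null
  return { version, traceId, spanId, sampled: (parseInt(flags, 16) & 1) === 1 }
}

const ctx = parseTraceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
```

In practice you never parse this by hand; the SDK's propagator does it. The sketch only illustrates the format.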

Manual spans for custom operations:

import { trace, context, propagation, SpanStatusCode } from "@opentelemetry/api"

const tracer = trace.getTracer("pkgpulse-service")

async function fetchPackageDownloads(packageName: string) {
  return tracer.startActiveSpan("fetch_package_downloads", async (span) => {
    span.setAttributes({
      "package.name": packageName,
      "data.source": "npm_registry",
    })

    try {
      const data = await npmRegistryClient.getDownloads(packageName)

      span.setAttributes({
        "package.downloads": data.weekly,
        "cache.hit": false,
      })

      return data
    } catch (error) {
      span.recordException(error as Error)
      span.setStatus({ code: SpanStatusCode.ERROR, message: (error as Error).message })
      throw error
    } finally {
      span.end()
    }
  })
}

Custom metrics:

import { metrics } from "@opentelemetry/api"

const meter = metrics.getMeter("pkgpulse-metrics")

// Counter:
const searchCounter = meter.createCounter("search_requests", {
  description: "Number of package search requests",
})

// Histogram (for latency):
const dbQueryHistogram = meter.createHistogram("db_query_duration", {
  description: "DB query duration in milliseconds",
  unit: "ms",
})

// Observable gauge (sampled periodically):
const connectionPoolGauge = meter.createObservableGauge("db_connection_pool_size", {
  description: "Current DB connection pool size",
})
connectionPoolGauge.addCallback((result) => {
  result.observe(db.pool.size, { pool: "primary" })
  result.observe(db.readPool.size, { pool: "replica" })
})

// Usage in route handlers:
app.get("/search", async (req, res) => {
  searchCounter.add(1, { query_type: "fuzzy" })

  const start = Date.now()
  const results = await searchPackages(req.query.q)
  dbQueryHistogram.record(Date.now() - start, { query: "search" })

  res.json(results)
})
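
The manual start/stop timing in the route handler can be factored into a small helper; this is a convenience sketch, not part of the OpenTelemetry API:

```typescript
// Wrap an async operation and report its duration to a recorder callback
// (e.g. histogram.record). Duration is recorded even when fn throws.
async function timed<T>(record: (ms: number) => void, fn: () => Promise<T>): Promise<T> {
  const start = Date.now()
  try {
    return await fn()
  } finally {
    record(Date.now() - start)
  }
}

// Usage with the histogram from above (illustrative):
// const results = await timed(
//   (ms) => dbQueryHistogram.record(ms, { query: "search" }),
//   () => searchPackages(req.query.q),
// )
```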

Sentry

Sentry is purpose-built for error monitoring with user context:

import * as Sentry from "@sentry/node"
import { nodeProfilingIntegration } from "@sentry/profiling-node"

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
  release: process.env.APP_VERSION,
  integrations: [
    nodeProfilingIntegration(),      // CPU profiling
    Sentry.httpIntegration(),        // HTTP request tracking
    Sentry.expressIntegration(),     // Express instrumentation
    Sentry.postgresIntegration(),    // PostgreSQL query tracking
    Sentry.redisIntegration(),       // Redis tracking
  ],
  tracesSampleRate: process.env.NODE_ENV === "production" ? 0.1 : 1.0,
  profilesSampleRate: 0.1,
  beforeSend: (event) => {
    // Scrub PII before sending:
    if (event.request?.cookies) {
      delete event.request.cookies
    }
    return event
  },
})

// Add user context to all errors:
Sentry.setUser({
  id: session.userId,
  email: session.email,
  ip_address: req.ip,
})

// Custom error capturing:
try {
  await riskyOperation()
} catch (error) {
  Sentry.captureException(error, {
    tags: { operation: "package_publish", package: packageName },
    extra: { packageData: sanitizedPackageData },
    level: "error",
  })
}

// Manual spans for performance (v8 API; startTransaction and
// getCurrentHub were removed in @sentry/node 8.x):
await Sentry.startSpan({ name: "package-comparison", op: "task" }, async () => {
  const packages = await Sentry.startSpan(
    { name: "fetch packages", op: "db" },
    () => db.getPackages(names),
  )
})

Sentry's unique capabilities:

// Breadcrumbs — trace what happened before an error:
Sentry.addBreadcrumb({
  category: "navigation",
  message: `User navigated to /compare/${pkg1}-vs-${pkg2}`,
  level: "info",
})

Sentry.addBreadcrumb({
  category: "fetch",
  message: `Fetched npm data for ${pkg1}`,
  data: { status: 200, duration: 234 },
})

// Error grouping with fingerprinting:
Sentry.captureException(error, {
  fingerprint: ["{{ default }}", "database-connection-error"],
})

// Issue alerts and notifications are configured in Sentry UI
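
A hypothetical helper showing how a fingerprint might be derived from the error itself, so that, for example, all DB connection failures group into a single issue regardless of the query that triggered them:

```typescript
// Hypothetical helper: derive a Sentry fingerprint from the error message.
// Connection-level failures collapse into one issue; everything else falls
// back to Sentry's default grouping.
function fingerprintFor(error: Error): string[] {
  if (/ECONNREFUSED|connection terminated/i.test(error.message)) {
    return ["database-connection-error"]
  }
  return ["{{ default }}"]
}

// Usage: Sentry.captureException(error, { fingerprint: fingerprintFor(error) })
```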

Sentry React SDK (session replay):

// @sentry/react adds session replay — records what the user did before an error:
import * as Sentry from "@sentry/react"

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  integrations: [
    Sentry.replayIntegration({
      maskAllText: false,  // Only mask sensitive fields
      blockAllMedia: false,
      maskAllInputs: true,  // Mask all form inputs
    }),
  ],
  replaysSessionSampleRate: 0.1,    // Sample 10% of sessions
  replaysOnErrorSampleRate: 1.0,    // Capture 100% of error sessions
})
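
The two replay sample rates combine roughly like this (illustrative arithmetic, not a Sentry API; real sampling decisions happen per session at runtime):

```typescript
// Estimate how many replays get captured per month: error sessions are
// captured at replaysOnErrorSampleRate, the rest at replaysSessionSampleRate.
// A rough upper bound; real sampling is decided per session.
function expectedReplays(
  totalSessions: number,
  errorSessions: number,
  sessionRate: number,
  onErrorRate: number,
): number {
  const normalSessions = totalSessions - errorSessions
  return normalSessions * sessionRate + errorSessions * onErrorRate
}

console.log(expectedReplays(10_000, 200, 0.1, 1.0))  // → 1180
```

At 10K sessions with 200 error sessions, the config above captures roughly 980 sampled sessions plus all 200 error sessions.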

Datadog (dd-trace)

dd-trace is the Node.js client for Datadog APM, one piece of Datadog's full observability platform:

// Must be first import — before anything else:
import tracer from "dd-trace"

tracer.init({
  service: "pkgpulse-api",
  env: process.env.NODE_ENV,
  version: process.env.APP_VERSION,
  profiling: true,          // Enable continuous profiling
  runtimeMetrics: true,     // Node.js runtime metrics (heap, GC, event loop)
  logInjection: true,       // Add trace IDs to logs automatically
})

// dd-trace auto-instruments: http, express, pg, redis, mongoose,
// elasticsearch, grpc, kafkajs, aws-sdk, and 100+ more

// Manual spans:
const span = tracer.startSpan("package.score.calculate", {
  tags: {
    "package.name": packageName,
    "score.type": "health",
  },
})

try {
  const score = await calculateHealthScore(packageName)
  span.setTag("score.result", score)
  return score
} catch (error) {
  span.setTag("error", error)
  throw error
} finally {
  span.finish()
}

// Custom metrics (sent to the Datadog Agent's DogStatsD endpoint):
import StatsD from "hot-shots"

const dogstatsd = new StatsD()

dogstatsd.increment("package.search.count", 1, { query_type: "exact" })
dogstatsd.histogram("package.compare.duration", durationMs)
dogstatsd.gauge("cache.hit.rate", hitRate)
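
The `hitRate` value fed to the gauge above has to be computed by the application; a trivial sketch, assuming the app tracks its own hit/miss counters:

```typescript
// Compute the cache hit rate reported to the gauge above.
// Returns 0 for an empty cache rather than dividing by zero.
function cacheHitRate(hits: number, misses: number): number {
  const total = hits + misses
  return total === 0 ? 0 : hits / total
}

console.log(cacheHitRate(90, 10))  // → 0.9
```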

Datadog vs OpenTelemetry:

// Datadog can RECEIVE OpenTelemetry data:
// Set OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
// And Datadog Agent accepts OTLP — best of both worlds:

// Use OpenTelemetry for instrumentation → send to Datadog Agent → Datadog platform
const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: "http://localhost:4318/v1/traces",  // Datadog Agent OTLP endpoint
  }),
  instrumentations: [getNodeAutoInstrumentations()],
})

Cost Comparison

| Tool | Free Tier | Paid | Notes |
|------|-----------|------|-------|
| Sentry | 5K errors/month, 1 member | $26/mo+ | Generous free tier for small projects |
| Datadog | No meaningful free tier | $15/host/month+ | Expensive at scale |
| OpenTelemetry | N/A (instrumentation only) | Depends on backend | Free to instrument; backend has cost |
| Grafana Tempo (OTel backend) | Self-hosted free | $8+/month cloud | Free if self-hosting |
| Honeycomb | 20M events/month free | $130/month+ | OTel-native, excellent DX |
| Jaeger | Self-hosted free | N/A | Open-source backend |
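
A back-of-envelope cost estimate using the list prices above (assumed flat rates; real pricing has tiers, commitments, and overages):

```typescript
// Rough monthly cost using the list prices from the table above.
// Assumes flat per-host pricing for Datadog and a flat base plan for Sentry.
function datadogMonthlyCost(hosts: number, perHost = 15): number {
  return hosts * perHost
}

function sentryMonthlyCost(errorsPerMonth: number, freeQuota = 5_000, basePlan = 26): number {
  // Ignores per-event overage beyond the base plan's quota.
  return errorsPerMonth <= freeQuota ? 0 : basePlan
}

console.log(datadogMonthlyCost(20))     // → 300
console.log(sentryMonthlyCost(4_000))   // → 0
console.log(sentryMonthlyCost(50_000))  // → 26
```

At 20 hosts, Datadog alone costs more per month than a year of Sentry's base plan, which is why cost pressure pushes larger fleets toward OTel with a self-hosted backend.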

Feature Comparison

| Feature | OpenTelemetry | Sentry | Datadog |
|---------|---------------|--------|---------|
| Distributed tracing | ✅ | ✅ | ✅ |
| Error monitoring | ❌ (via logs) | ✅ Excellent | ✅ |
| Session replay | ❌ | ✅ | ✅ (RUM) |
| Metrics | ✅ | ⚠️ Basic | ✅ Excellent |
| Log management | ✅ (logs signal) | ⚠️ Limited | ✅ Excellent |
| Vendor lock-in | ❌ None | ⚠️ Sentry platform | ✅ Proprietary |
| Self-hostable | ✅ (OSS backends) | ✅ (Sentry.io OSS) | ❌ |
| Auto-instrumentation | ✅ | ✅ | ✅ |
| TypeScript | ✅ | ✅ | ✅ |
| Cost | Backend-dependent | $0-26+/mo | $15+/host/month |

For most production Node.js applications in 2026:

// Combine both for best coverage:

// 1. Sentry for errors + user context (front + back):
Sentry.init({ dsn: "...", tracesSampleRate: 0.1 })

// 2. OpenTelemetry for traces + metrics → send to Grafana Cloud or Honeycomb:
const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({ url: process.env.OTLP_ENDPOINT }),
  instrumentations: [getNodeAutoInstrumentations()],
})

// This gives you:
// - Sentry: Error alerts, user impact, session replay, release tracking
// - OTel → Grafana/Honeycomb: Trace visualization, performance analysis, metrics

Methodology

Download data from npm registry (weekly average, February 2026). Cost data from official pricing pages (March 2026). Feature comparison based on @opentelemetry/sdk-node 1.x, @sentry/node 8.x, and dd-trace 5.x.

Compare observability package downloads on PkgPulse →
