
How to Set Up Logging in Node.js: Pino vs Winston 2026

PkgPulse Team

TL;DR

Pino for production. Winston for flexibility. Pino is roughly 8x faster than Winston in throughput benchmarks, outputs JSON natively, and is what you want in a production API handling thousands of requests per second. Winston wins when you need multiple transports, console pretty-printing during dev, or complex log transformation. Most teams end up with Pino in production and a dev-only pretty transport for readability.

Key Takeaways

  • Pino: 8x faster — async logging, minimal serialization overhead, JSON by default
  • Winston: multi-transport — log to files, HTTP, Slack, databases simultaneously
  • Log levels differ: Winston uses the npm levels (error, warn, info, http, verbose, debug, silly); Pino uses fatal, error, warn, info, debug, trace
  • Pino child loggers — per-request context propagation (request ID, user ID)
  • Production must-have: structured JSON logs + centralized aggregation (Datadog, Loki, CloudWatch)

Pino Setup

npm install pino pino-pretty

# pino-pretty: dev-only pretty printer
# Production: raw JSON → sent to log aggregator
// logger.ts
import pino from 'pino';

const isDev = process.env.NODE_ENV !== 'production';

const logger = pino({
  level: process.env.LOG_LEVEL || 'info',  // fatal, error, warn, info, debug, trace
  ...(isDev && {
    transport: {
      target: 'pino-pretty',
      options: {
        colorize: true,
        translateTime: 'HH:MM:ss',
        ignore: 'pid,hostname',
      },
    },
  }),
});

export default logger;
// Basic usage
logger.info('Server started');
logger.info({ port: 3000 }, 'Server started on port');
logger.error({ err: error }, 'Database connection failed');
logger.debug({ userId: '123', action: 'login' }, 'User action');

// Output (JSON in production; pid/hostname fields omitted for brevity):
// {"level":30,"time":1700000000000,"msg":"Server started on port","port":3000}

// Output (pretty in dev):
// [10:30:00] INFO: Server started on port
//     port: 3000

Pino with Express — Request Logging

npm install pino-http
// app.ts
import express from 'express';
import pinoHttp from 'pino-http';
import logger from './logger';

const app = express();

// Attach request logger middleware
app.use(pinoHttp({
  logger,
  customLogLevel: (req, res, err) => {
    if (res.statusCode >= 500 || err) return 'error';
    if (res.statusCode >= 400) return 'warn';
    return 'info';
  },
  customSuccessMessage: (req, res) => {
    return `${req.method} ${req.url} ${res.statusCode}`;
  },
  serializers: {
    req: (req) => ({
      method: req.method,
      url: req.url,
      headers: { 'user-agent': req.headers['user-agent'] },
    }),
    res: (res) => ({
      statusCode: res.statusCode,
    }),
  },
}));

// Each request gets a child logger with req.log
app.get('/users/:id', (req, res) => {
  req.log.info({ userId: req.params.id }, 'Fetching user');
  // Logs include requestId automatically from pino-http
  res.json({ id: req.params.id });
});

Pino Child Loggers (Request Context)

// Child loggers propagate context to all child logs
const requestLogger = logger.child({
  requestId: '550e8400-e29b-41d4-a716-446655440000',
  userId: '123',
  environment: 'production',
});

requestLogger.info('Starting payment processing');
requestLogger.info({ amount: 49.99 }, 'Charge created');
requestLogger.error({ chargeId: 'ch_123' }, 'Payment failed');

// All three logs include: requestId, userId, environment
// Output (abridged):
// {"requestId":"550e...","userId":"123","environment":"production","msg":"Starting payment processing"}
// {"requestId":"550e...","userId":"123","environment":"production","amount":49.99,"msg":"Charge created"}

Winston Setup

npm install winston winston-daily-rotate-file
// logger.ts
import winston from 'winston';
import 'winston-daily-rotate-file';  // registers winston.transports.DailyRotateFile

const { combine, timestamp, printf, colorize, json } = winston.format;

const devFormat = combine(
  colorize(),
  timestamp({ format: 'HH:MM:ss' }),
  printf(({ level, message, timestamp, ...meta }) => {
    const metaStr = Object.keys(meta).length ? ` ${JSON.stringify(meta)}` : '';
    return `[${timestamp}] ${level}: ${message}${metaStr}`;
  })
);

const prodFormat = combine(
  timestamp(),
  json()  // Structured JSON for log aggregators
);

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: process.env.NODE_ENV === 'production' ? prodFormat : devFormat,
  transports: [
    new winston.transports.Console(),

    // File transport (production)
    ...(process.env.NODE_ENV === 'production' ? [
      new winston.transports.DailyRotateFile({
        filename: 'logs/error-%DATE%.log',
        datePattern: 'YYYY-MM-DD',
        level: 'error',
        maxSize: '20m',
        maxFiles: '14d',
      }),
      new winston.transports.DailyRotateFile({
        filename: 'logs/combined-%DATE%.log',
        datePattern: 'YYYY-MM-DD',
        maxSize: '20m',
        maxFiles: '7d',
      }),
    ] : []),
  ],
});

export default logger;

Winston Multi-Transport (Power Feature)

// Winston: log to multiple destinations simultaneously
import winston from 'winston';
import 'winston-daily-rotate-file';
import { Logtail } from '@logtail/node';
import { LogtailTransport } from '@logtail/winston';

const logtail = new Logtail(process.env.LOGTAIL_TOKEN!);

const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),

    // Logtail / Better Stack
    new LogtailTransport(logtail),

    // HTTP transport (any webhook)
    new winston.transports.Http({
      host: 'logs.mycompany.com',
      port: 443,
      path: '/ingest',
      ssl: true,
    }),

    // File rotation
    new winston.transports.DailyRotateFile({
      filename: 'logs/app-%DATE%.log',
      datePattern: 'YYYY-MM-DD',
    }),
  ],

  exceptionHandlers: [
    new winston.transports.File({ filename: 'logs/exceptions.log' }),
  ],
  rejectionHandlers: [
    new winston.transports.File({ filename: 'logs/rejections.log' }),
  ],
});

Log Levels in Practice

// Use the right level — over-logging is as bad as under-logging

logger.error('Database connection failed', { error: err.message, stack: err.stack });
// → Always logged, triggers alerts, wakes people up at 3am

logger.warn('Rate limit approaching', { current: 850, limit: 1000 });
// → Logged in production, doesn't alert, should be reviewed

logger.info('User signed up', { userId: '123', plan: 'pro' });
// → Business events you want in production (sign ups, purchases, key actions)

logger.http('GET /api/users 200 45ms');
// → HTTP request logs, Winston only (with Pino, use pino-http instead of manual calls)

logger.debug('Cache miss', { key: 'user:123', ttl: 3600 });
// → Dev/staging only — too noisy for production

logger.verbose('Processing step 3 of 7');
// → Detailed traces for debugging specific issues (Winston only; Pino's closest level is trace)

// Rule of thumb for production:
// - error: things that require immediate action
// - warn: things that might become errors
// - info: important business events only (not every request)
// - debug: disabled in production

Log Aggregation Setup

// Production: ship logs to an aggregator
// Options: Datadog, Grafana Loki, AWS CloudWatch, Better Stack, Logtail

// Pino → Datadog
// One option: the community pino-datadog-transport package,
// configured as a pino v7+ worker-thread transport:

import pino from 'pino';

const logger = pino({
  level: 'info',
  transport: {
    targets: [
      // Dev: pretty print
      ...(process.env.NODE_ENV !== 'production' ? [{
        target: 'pino-pretty',
        options: { colorize: true },
      }] : []),
      // Production: Datadog
      ...(process.env.NODE_ENV === 'production' ? [{
        target: 'pino-datadog-transport',
        options: {
          ddClientConf: {
            authMethods: { apiKeyAuth: process.env.DD_API_KEY },
          },
          ddServerConf: { site: 'datadoghq.com' },
          service: 'my-api',
          env: process.env.NODE_ENV,
        },
      }] : []),
    ],
  },
});

Performance Comparison

# Benchmark: 1M log entries, Node.js 20

# Pino:
# ~680K logs/second (JSON mode)
# ~315K logs/second (with pino-pretty in dev)
# Memory: minimal — async write queue

# Winston:
# ~85K logs/second (console transport)
# ~120K logs/second (file transport)
# Memory: slightly higher from format transforms

# Why Pino is faster:
# 1. Async log writes (non-blocking)
# 2. JSON.stringify optimized per-schema
# 3. No transform pipeline overhead
# 4. Uses Worker threads for transports

# Winston's speed is fine for most apps (<100 req/s)
# At 1000+ req/s, Pino's async model prevents log I/O from
# becoming a bottleneck in your hot path

When to Choose

  • Production API (high traffic): Pino (8x faster, async, zero overhead)
  • Need multiple log destinations: Winston (built-in multi-transport)
  • Simple app, quick setup: Winston (more tutorials, more familiar)
  • Per-request context (request ID): Pino (child loggers are first-class)
  • Custom log transformation: Winston (format pipeline is powerful)
  • Fastify: Pino (built-in, zero config)
  • Express: either (both have middleware)
  • Already using Winston: keep it (migration cost isn't worth it)

The Production Logging Mental Model

Setting up logging correctly the first time is significantly cheaper than retrofitting it after a production incident. The mental model that guides good logging setup: logs are operational data, not developer output. console.log calls are appropriate during development because their audience is the developer running the code interactively. Production logs are consumed by automated systems — log aggregation services, alerting pipelines, anomaly detection — and by engineers debugging issues after the fact, often under time pressure during an incident.

Automated consumption requires structure. An alerting pipeline that fires when error rate exceeds 1% of requests cannot parse unstructured log strings — it needs a field it can count. A log aggregation dashboard that groups errors by affected user cannot extract the user ID from a concatenated string — it needs a typed field. Structured JSON logging is the format that makes logs machine-readable by default, which is why both Pino and Winston default to JSON output in production configurations.
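To make this concrete, here is a minimal sketch in plain TypeScript (no logging library; the log lines are hypothetical) of what structure buys an automated consumer: counting errors and grouping them by user becomes a simple parse-and-filter, which is impossible to do reliably on concatenated strings.

```typescript
// Hypothetical structured log lines, as an aggregator would receive them.
const structuredLines = [
  '{"level":"error","msg":"query failed","userId":"u1"}',
  '{"level":"info","msg":"user signed up","userId":"u2"}',
  '{"level":"error","msg":"query failed","userId":"u1"}',
];

// Typed fields make "error count" and "errors by user" trivial queries.
const entries = structuredLines.map(
  (line) => JSON.parse(line) as { level: string; msg: string; userId: string },
);
const errors = entries.filter((e) => e.level === 'error');

const errorsByUser: Record<string, number> = {};
for (const e of errors) {
  errorsByUser[e.userId] = (errorsByUser[e.userId] ?? 0) + 1;
}

console.log(errors.length);      // 2
console.log(errorsByUser['u1']); // 2
```

An alert on "error rate over 1%" is then a comparison of `errors.length` against total entries, with no string parsing involved.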

The second principle: logs should tell the story of your application's behavior without requiring access to the source code for interpretation. A log entry that says only "query failed" is useful only to someone who knows which query was running at that point in the code. A log entry that includes the query name, parameters, duration, error code, and retry count is useful to anyone — including the on-call engineer at 2am who was not the one who wrote that code path.
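The contrast can be sketched as two versions of the same failure entry; all field names here are illustrative, not a fixed schema:

```typescript
// Useless without reading the source code:
const bare = { level: 'error', msg: 'query failed' };

// Self-explanatory to any on-call engineer (field names illustrative):
const rich = {
  level: 'error',
  msg: 'query failed',
  query: 'getUserById',    // which query
  params: { id: 'u1' },    // with what input
  durationMs: 1843,        // how long before failing
  errorCode: 'ETIMEDOUT',  // why it failed
  retryCount: 3,           // after how many attempts
};

console.log(Object.keys(bare).length); // 2
console.log(Object.keys(rich).length); // 7
```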

Why Structured Logging Is Non-Negotiable in Production

The shift from console.log("user logged in: " + userId) to structured JSON logging is one of the highest-leverage improvements a Node.js backend can make for operational observability. String concatenation produces log entries that are searchable only by text grep — useful locally, but nearly impossible to aggregate meaningfully across thousands of log entries in a log management system like Datadog, Grafana Loki, or AWS CloudWatch. Structured JSON logs, where every attribute is a typed field, enable filtering by exact value, building dashboards that aggregate log data by field, and setting alerts on specific field combinations.

Both Pino and Winston output structured JSON in production mode, but they approach it differently. Pino generates valid JSON on every log call with minimal overhead — the serialization is close to a direct JSON.stringify() call on a carefully structured object. Winston's json() format does the same but through a transform pipeline that allows middleware to add fields, transform values, and filter entries. For simple structured logging, both produce equivalent output. The difference shows at high volume where Pino's simpler serialization path avoids the overhead of Winston's transform chain.

Log Correlation and Distributed Tracing

In microservices architectures and modern monolithic applications with multiple async paths, logs from different services or different parts of the same service need to be correlated so you can reconstruct what happened during a specific request or user session. This is the requestId / traceId problem, and solving it requires propagating identifiers across async boundaries and including them in every log entry.

Pino's child logger approach is the cleanest solution for request-scoped logging. When a request arrives, you create a child logger with the request's ID as a bound field: req.log = logger.child({ requestId: generateId() }). Every subsequent log call using req.log automatically includes requestId without the developer explicitly adding it. This pattern composes well with OpenTelemetry trace context: you can include the current trace ID and span ID as child logger fields, creating a direct link between log entries and distributed traces in your observability backend.

Winston achieves similar correlation through custom format middleware or by storing a logger with request context in AsyncLocalStorage. The AsyncLocalStorage approach is particularly powerful because it propagates the request context automatically through all async operations without passing the logger explicitly through function parameters.
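A minimal sketch of that AsyncLocalStorage pattern using only Node's built-in `node:async_hooks` module; `log` and `handleRequest` are illustrative stand-ins, not Winston APIs:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

type RequestContext = { requestId: string };
const als = new AsyncLocalStorage<RequestContext>();

// Illustrative log function: merges the ambient request context into
// every entry without the caller passing it explicitly.
function log(msg: string): string {
  return JSON.stringify({ ...(als.getStore() ?? {}), msg });
}

// Bind a context for the duration of a request; it survives async hops.
async function handleRequest(requestId: string): Promise<string> {
  return als.run({ requestId }, async () => {
    await Promise.resolve();     // cross an async boundary
    return log('fetching user'); // requestId is still attached
  });
}

handleRequest('req-1').then((line) => console.log(line));
// {"requestId":"req-1","msg":"fetching user"}
```

In a real Winston setup the same idea applies: a format function reads the store and appends its fields to each entry, so no handler ever threads a logger through its arguments.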

Log Level Strategy for Production Operations

Choosing the right log level for each event is as important as choosing the right logger. Over-logging at info level creates noise that makes it harder to find meaningful events. Under-logging leaves debugging evidence missing when you need it most. A practical log level strategy for production Node.js services:

Reserve error for conditions that require immediate attention and may have user impact — database connection failures, external API timeouts after all retries, authentication service outages. Error-level logs should correlate closely with on-call alerts. Reserve warn for conditions that are noteworthy but not immediately urgent — rate limit warnings, cache miss rates above threshold, degraded external service responses. info should capture meaningful business events: user signups, payment completions, job completions, significant state transitions. Avoid logging every HTTP request at info level in production — use http level for request logs, which can be filtered or disabled independently.

debug level should be disabled in production by default but recoverable without deployment. Setting the LOG_LEVEL environment variable to debug for a specific instance should enable verbose logging for diagnosis, without requiring a code change. This operability — the ability to temporarily increase log verbosity without deployment — is the primary reason environment-variable-controlled log levels matter.
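Under the hood, level filtering is a numeric threshold comparison, which is why flipping LOG_LEVEL per instance is cheap. A sketch using Pino's actual numeric level values; `shouldLog` is an illustrative helper, not a library API:

```typescript
// Pino's numeric level values (Winston's npm levels invert the ordering,
// but the threshold idea is identical).
const LEVELS: Record<string, number> = {
  trace: 10, debug: 20, info: 30, warn: 40, error: 50, fatal: 60,
};

// Illustrative gate: a call is emitted only if its level meets the
// threshold configured via LOG_LEVEL.
function shouldLog(callLevel: string, configured: string): boolean {
  return LEVELS[callLevel] >= LEVELS[configured];
}

console.log(shouldLog('debug', 'info'));  // false (suppressed in production)
console.log(shouldLog('error', 'info'));  // true
console.log(shouldLog('debug', 'debug')); // true (after setting LOG_LEVEL=debug)
```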

Transports and Log Shipping

Application logs have no value sitting on disk on the server — they need to be shipped to a log aggregation system where they can be searched, aggregated, and alerted on. The transport layer handles this shipping, and the choice of transport determines the operational reliability of your logging infrastructure.

For Pino, the recommended production pattern is to write logs to stdout (the default) and let the process manager or container orchestrator handle log shipping. In Kubernetes, logs written to stdout are automatically collected by the cluster's log collection DaemonSet (Fluentd, Fluent Bit, Vector). This keeps your application code out of the log shipping business, which is correct — if your log transport fails, it should not crash your application. Pino's transport API (pino.transport) runs transports in worker threads and supports async shipping to external services (Datadog, Loki, Better Stack) without blocking the event loop.

Winston's multi-transport model allows logs to go to multiple destinations simultaneously — stdout, a file, and a cloud logging service — in a single configuration. This can be convenient but creates coupling between your application and its logging infrastructure. If one transport fails (HTTP transport to a logging service with a network outage), Winston's behavior depends on your error handling configuration for that transport. The cleaner architectural pattern is to write to stdout and handle log shipping externally, using Winston's multi-transport only when there is a specific reason to keep log shipping in the application layer.

Security Considerations: Avoiding Log Injection and PII Leakage

Application logs are attack surfaces in two directions. First, log injection: if user-controlled strings are included directly in log messages without sanitization, an attacker can craft input containing newline characters that create fake log entries or inject malicious content into log aggregation systems. Structured JSON logging largely prevents this — user input stored in a field value is JSON-escaped and cannot affect the log entry structure. However, if you interpolate user input into the log message string (not the structured fields), injection is possible.
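A minimal sketch of the difference using only JSON.stringify; the input string is a contrived attack payload:

```typescript
// Attacker-controlled input containing a newline plus a forged entry.
const userInput = 'alice\n{"level":"info","msg":"admin access granted"}';

// Unsafe: interpolated into the message string; the newline splits the
// output into what looks like two separate log entries.
const unsafe = `user logged in: ${userInput}`;
console.log(unsafe.split('\n').length); // 2

// Safe: carried as a JSON field value; the newline is escaped to \n and
// the entry remains one line, with the original value recoverable.
const safe = JSON.stringify({ msg: 'user logged in', user: userInput });
console.log(safe.split('\n').length);             // 1
console.log(JSON.parse(safe).user === userInput); // true
```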

Second, PII leakage: logs that include email addresses, user names, payment card numbers, or other personally identifiable information create compliance risks (GDPR, CCPA, HIPAA). Both Pino and Winston support serializer functions that transform objects before logging. A serializer for user objects that includes email and userId but excludes password, ssn, and paymentMethodId can be applied globally to prevent accidental PII logging. Sentry's beforeSend hook serves a similar purpose for error reporting. Applying field-level scrubbing in the logging infrastructure rather than at every call site is more reliable and easier to audit.
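A sketch of such a serializer as a plain function; wiring differs per library (Pino's serializers option, a Winston format), and `serializeUser` plus its field list are illustrative. An allow-list (keep only known-safe fields) is more robust than a deny-list, because new PII fields are dropped by default:

```typescript
type LoggedUser = Record<string, unknown>;

// Allow-list: only fields known to be safe survive into the log entry.
const SAFE_USER_FIELDS = ['userId', 'email', 'plan'] as const;

function serializeUser(user: LoggedUser): LoggedUser {
  const safe: LoggedUser = {};
  for (const field of SAFE_USER_FIELDS) {
    if (field in user) safe[field] = user[field];
  }
  return safe;
}

const logged = serializeUser({
  userId: 'u1',
  email: 'a@example.com',
  plan: 'pro',
  password: 'hunter2',       // dropped before logging
  paymentMethodId: 'pm_123', // dropped before logging
});

console.log(Object.keys(logged)); // [ 'userId', 'email', 'plan' ]
```

Pino also ships a built-in redact option for censoring values by path, which covers the deny-list side of the same problem.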

Compare Pino and Winston package health on PkgPulse.

See also: Bunyan vs Winston, Pino vs Winston in 2026: Node.js Logging Guide, and Best Node.js Logging Libraries 2026.
