
How to Set Up Logging in Node.js: Pino vs Winston

PkgPulse Team

TL;DR

Pino for production. Winston for flexibility. Pino is 8x faster than Winston, outputs JSON natively, and is what you want in a production API handling thousands of requests. Winston wins when you need multiple transports, console pretty-printing during dev, or complex log transformation. Most teams end up with Pino in production and a custom dev transport for readability.

Key Takeaways

  • Pino: 8x faster — async logging, minimal serialization overhead, JSON by default
  • Winston: multi-transport — log to files, HTTP, Slack, databases simultaneously
  • Log levels differ — Pino: fatal, error, warn, info, debug, trace; Winston (npm levels): error, warn, info, http, verbose, debug, silly
  • Pino child loggers — per-request context propagation (request ID, user ID)
  • Production must-have: structured JSON logs + centralized aggregation (Datadog, Loki, CloudWatch)

Pino Setup

npm install pino pino-pretty

# pino-pretty: dev-only pretty printer
# Production: raw JSON → sent to log aggregator

// logger.ts
import pino from 'pino';

const isDev = process.env.NODE_ENV !== 'production';

const logger = pino({
  level: process.env.LOG_LEVEL || 'info',  // fatal, error, warn, info, debug, trace
  ...(isDev && {
    transport: {
      target: 'pino-pretty',
      options: {
        colorize: true,
        translateTime: 'HH:MM:ss',
        ignore: 'pid,hostname',
      },
    },
  }),
});

export default logger;
// Basic usage
logger.info('Server started');
logger.info({ port: 3000 }, 'Server started on port');
logger.error({ err: error }, 'Database connection failed');
logger.debug({ userId: '123', action: 'login' }, 'User action');

// Output (JSON in production):
// {"level":30,"time":1700000000000,"msg":"Server started on port","port":3000}

// Output (pretty in dev):
// [10:30:00] INFO: Server started on port
//     port: 3000

Pino with Express — Request Logging

npm install pino-http

// app.ts
import express from 'express';
import pinoHttp from 'pino-http';
import logger from './logger';

const app = express();

// Attach request logger middleware
app.use(pinoHttp({
  logger,
  customLogLevel: (req, res, err) => {
    if (res.statusCode >= 500 || err) return 'error';
    if (res.statusCode >= 400) return 'warn';
    return 'info';
  },
  customSuccessMessage: (req, res) => {
    return `${req.method} ${req.url} ${res.statusCode}`;
  },
  serializers: {
    req: (req) => ({
      method: req.method,
      url: req.url,
      headers: { 'user-agent': req.headers['user-agent'] },
    }),
    res: (res) => ({
      statusCode: res.statusCode,
    }),
  },
}));

// Each request gets a child logger with req.log
app.get('/users/:id', (req, res) => {
  req.log.info({ userId: req.params.id }, 'Fetching user');
  // Logs include requestId automatically from pino-http
  res.json({ id: req.params.id });
});

Pino Child Loggers (Request Context)

// Child loggers propagate context to all child logs
const requestLogger = logger.child({
  requestId: '550e8400-e29b-41d4-a716-446655440000',
  userId: '123',
  environment: 'production',
});

requestLogger.info('Starting payment processing');
requestLogger.info({ amount: 49.99 }, 'Charge created');
requestLogger.error({ chargeId: 'ch_123' }, 'Payment failed');

// All three logs include: requestId, userId, environment
// Output:
// {"requestId":"550e...","userId":"123","environment":"production","msg":"Starting payment processing"}
// {"requestId":"550e...","userId":"123","amount":49.99,"msg":"Charge created"}

Winston Setup

npm install winston winston-daily-rotate-file

// logger.ts
import winston from 'winston';
import 'winston-daily-rotate-file';  // side-effect import registers winston.transports.DailyRotateFile

const { combine, timestamp, printf, colorize, json } = winston.format;

const devFormat = combine(
  colorize(),
  timestamp({ format: 'HH:mm:ss' }),  // fecha tokens: lowercase mm is minutes (MM is months)
  printf(({ level, message, timestamp, ...meta }) => {
    const metaStr = Object.keys(meta).length ? ` ${JSON.stringify(meta)}` : '';
    return `[${timestamp}] ${level}: ${message}${metaStr}`;
  })
);

const prodFormat = combine(
  timestamp(),
  json()  // Structured JSON for log aggregators
);

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: process.env.NODE_ENV === 'production' ? prodFormat : devFormat,
  transports: [
    new winston.transports.Console(),

    // File transport (production)
    ...(process.env.NODE_ENV === 'production' ? [
      new winston.transports.DailyRotateFile({
        filename: 'logs/error-%DATE%.log',
        datePattern: 'YYYY-MM-DD',
        level: 'error',
        maxSize: '20m',
        maxFiles: '14d',
      }),
      new winston.transports.DailyRotateFile({
        filename: 'logs/combined-%DATE%.log',
        datePattern: 'YYYY-MM-DD',
        maxSize: '20m',
        maxFiles: '7d',
      }),
    ] : []),
  ],
});

export default logger;

Winston Multi-Transport (Power Feature)

// Winston: log to multiple destinations simultaneously
import winston from 'winston';
import 'winston-daily-rotate-file';
import { Logtail } from '@logtail/node';
import { LogtailTransport } from '@logtail/winston';

const logtail = new Logtail(process.env.LOGTAIL_TOKEN!);

const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),

    // Logtail / Better Stack
    new LogtailTransport(logtail),

    // HTTP transport (any webhook)
    new winston.transports.Http({
      host: 'logs.mycompany.com',
      port: 443,
      path: '/ingest',
      ssl: true,
    }),

    // File rotation
    new winston.transports.DailyRotateFile({
      filename: 'logs/app-%DATE%.log',
      datePattern: 'YYYY-MM-DD',
    }),
  ],

  exceptionHandlers: [
    new winston.transports.File({ filename: 'logs/exceptions.log' }),
  ],
  rejectionHandlers: [
    new winston.transports.File({ filename: 'logs/rejections.log' }),
  ],
});

Log Levels in Practice

// Use the right level — over-logging is as bad as under-logging
// (examples below use Winston's message-first signature; Pino takes the object first)

logger.error('Database connection failed', { error: err.message, stack: err.stack });
// → Always logged, triggers alerts, wakes people up at 3am

logger.warn('Rate limit approaching', { current: 850, limit: 1000 });
// → Logged in production, doesn't alert, should be reviewed

logger.info('User signed up', { userId: '123', plan: 'pro' });
// → Business events you want in production (sign ups, purchases, key actions)

logger.http('GET /api/users 200 45ms');
// → HTTP request logs — an npm level Winston has but Pino doesn't (use pino-http instead)

logger.debug('Cache miss', { key: 'user:123', ttl: 3600 });
// → Dev/staging only — too noisy for production

logger.verbose('Processing step 3 of 7');
// → Detailed traces for debugging specific issues (Winston only; Pino's nearest level is trace)

// Rule of thumb for production:
// - error: things that require immediate action
// - warn: things that might become errors
// - info: important business events only (not every request)
// - debug: disabled in production

Log Aggregation Setup

// Production: ship logs to an aggregator
// Options: Datadog, Grafana Loki, AWS CloudWatch, Better Stack, Logtail

// Pino → Datadog
// In your start script or Docker CMD, pipe stdout through the pino-datadog CLI:
// node app.js | pino-datadog
// Or ship logs in-process with the community pino-datadog-transport package:

import pino from 'pino';

const logger = pino({
  level: 'info',
  transport: {
    targets: [
      // Dev: pretty print
      ...(process.env.NODE_ENV !== 'production' ? [{
        target: 'pino-pretty',
        options: { colorize: true },
      }] : []),
      // Production: Datadog
      ...(process.env.NODE_ENV === 'production' ? [{
        target: 'pino-datadog-transport',
        options: {
          ddClientConf: {
            authMethods: { apiKeyAuth: process.env.DD_API_KEY },
          },
          ddServerConf: { site: 'datadoghq.com' },
          service: 'my-api',
          env: process.env.NODE_ENV,
        },
      }] : []),
    ],
  },
});

Performance Comparison

# Benchmark: 1M log entries, Node.js 20

# Pino:
# ~680K logs/second (JSON mode)
# ~315K logs/second (with pino-pretty in dev)
# Memory: minimal — async write queue

# Winston:
# ~85K logs/second (console transport)
# ~120K logs/second (file transport)
# Memory: slightly higher from format transforms

# Why Pino is faster:
# 1. Async log writes (non-blocking)
# 2. JSON.stringify optimized per-schema
# 3. No transform pipeline overhead
# 4. Uses Worker threads for transports

# Winston's speed is fine for most apps (<100 req/s)
# At 1000+ req/s, Pino's async model prevents log I/O from
# becoming a bottleneck in your hot path

When to Choose

Scenario                          | Pick    | Reason
----------------------------------|---------|--------------------------------
Production API (high traffic)     | Pino    | 8x faster, async, zero overhead
Need multiple log destinations    | Winston | Built-in multi-transport
Simple app, quick setup           | Winston | More tutorials, more familiar
Per-request context (request ID)  | Pino    | Child loggers are first-class
Custom log transformation         | Winston | Format pipeline is powerful
Fastify                           | Pino    | Built-in, zero config
Express                           | Either  | Both have middleware
Already using Winston             | Keep it | Migration cost isn't worth it

Compare Pino and Winston package health on PkgPulse.
