
Prometheus vs VictoriaMetrics vs Grafana Mimir: Metrics Storage 2026

PkgPulse Team

TL;DR

Prometheus is the industry standard pull-based metrics system — it scrapes metrics from your services and stores them locally. But Prometheus has limits: 15-day default retention, single-node architecture, and no built-in long-term storage. VictoriaMetrics solves this — a Prometheus-compatible time series database that compresses data 10x better, scales to billions of data points, and runs as a single binary with much lower resource usage. Grafana Mimir takes the distributed path — a horizontally scalable, multi-tenant Prometheus that stores data in object storage (S3, GCS) for unlimited long-term retention. For single-server self-hosted: Prometheus + VictoriaMetrics as long-term storage. For high-scale multi-tenant monitoring: Grafana Mimir. For getting started with minimal complexity: Prometheus alone.

Key Takeaways

  • Prometheus default retention: 15 days — longer requires either remote write or more disk
  • VictoriaMetrics uses 10x less disk than Prometheus for the same data (better compression)
  • All three are PromQL compatible — same dashboards and alerting rules work unchanged
  • Grafana Mimir requires object storage — S3 or GCS, no local disk for long-term data
  • VictoriaMetrics single binary handles ingestion + storage + query — simpler than Mimir's microservices
  • prom-client for Node.js works with all three — it exposes metrics in Prometheus format
  • Grafana visualizes all three — the same Grafana dashboards work regardless of backend
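
The disk-efficiency claim above can be sanity-checked with back-of-envelope arithmetic. The bytes-per-sample figures below are illustrative assumptions roughly in line with published benchmarks, not measurements of your workload:

```shell
# Rough disk estimate: 100k active series scraped every 15s, kept 30 days
SERIES=100000
INTERVAL=15   # scrape interval, seconds
DAYS=30
SAMPLES=$((SERIES * 86400 / INTERVAL * DAYS))

# Assumed compressed sizes: ~1.3 bytes/sample (Prometheus TSDB), ~0.4 (VictoriaMetrics)
PROM_GB=$(awk -v s="$SAMPLES" 'BEGIN { printf "%.1f", s * 1.3 / 1024^3 }')
VM_GB=$(awk -v s="$SAMPLES" 'BEGIN { printf "%.1f", s * 0.4 / 1024^3 }')

echo "samples: $SAMPLES"
echo "Prometheus ~${PROM_GB} GB, VictoriaMetrics ~${VM_GB} GB"
```

At these assumed rates, 30 days of 100k series is roughly 21 GB on Prometheus versus about 6 GB on VictoriaMetrics — which is why year-long retention on local disk is realistic for VM.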

The Metrics Stack Architecture

Node.js App
  └── prom-client (exposes /metrics endpoint)
        │
        ▼
Metrics Scraper (pulls every 15 seconds)
  ├── Prometheus (stores locally, 15-day default retention)
  ├── VictoriaMetrics (stores locally, excellent compression, long-term)
  └── Grafana Agent → Grafana Mimir (stores in S3, unlimited, multi-tenant)
        │
        ▼
Grafana (visualization — queries PromQL from any backend)

Node.js Metrics with prom-client

Before comparing the backends, instrument your Node.js app — all three use the same format:

Installation

npm install prom-client

Express/Fastify Metrics Endpoint

import { Registry, Counter, Histogram, Gauge, collectDefaultMetrics } from "prom-client";
import express from "express";

// Create a registry
const register = new Registry();

// Collect default Node.js metrics (event loop, GC, memory, etc.)
collectDefaultMetrics({ register });

// Custom metrics
const httpRequestDuration = new Histogram({
  name: "http_request_duration_seconds",
  help: "Duration of HTTP requests in seconds",
  labelNames: ["method", "route", "status_code"],
  buckets: [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10],
  registers: [register],
});

const httpRequestTotal = new Counter({
  name: "http_requests_total",
  help: "Total number of HTTP requests",
  labelNames: ["method", "route", "status_code"],
  registers: [register],
});

const activeConnections = new Gauge({
  name: "active_connections",
  help: "Number of active WebSocket connections",
  registers: [register],
});

// Middleware to instrument all routes
const app = express();

app.use((req, res, next) => {
  // Start the timer without labels: req.route is only populated AFTER
  // Express matches a route, so labels are resolved in the "finish" handler
  const end = httpRequestDuration.startTimer();

  res.on("finish", () => {
    const labels = {
      method: req.method,
      route: req.route?.path ?? req.path,
      status_code: String(res.statusCode),
    };
    end(labels);
    httpRequestTotal.inc(labels);
  });

  next();
});

// Metrics endpoint — scraped by Prometheus/VM/Mimir agent
app.get("/metrics", async (req, res) => {
  res.set("Content-Type", register.contentType);
  res.end(await register.metrics());
});

// Example route with custom business metrics
const ordersCreated = new Counter({
  name: "orders_created_total",
  help: "Total orders created",
  labelNames: ["status", "payment_method"],
  registers: [register],
});

app.post("/orders", async (req, res) => {
  // ... create order
  ordersCreated.inc({ status: "success", payment_method: "stripe" });
  res.json({ orderId: "..." });
});
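
With the server running, you can confirm the exposition format before wiring up any backend. This assumes the app listens on port 3000, matching the scrape configs later in this post:

```shell
# Fetch the metrics endpoint and show a few relevant series
curl -s http://localhost:3000/metrics | grep -E "^(http_requests_total|process_resident_memory)"
# Lines follow the Prometheus text format, e.g.:
#   http_requests_total{method="GET",route="/orders",status_code="200"} 3
```

All three backends in this comparison scrape exactly this output — no per-backend instrumentation changes are needed.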

Prometheus: The Pull-Based Standard

Prometheus is a pull-based time series database. It scrapes /metrics endpoints on a schedule and stores the data locally.

Docker Compose Setup

# docker-compose.yml — Prometheus + Grafana
version: "3.8"

services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.retention.time=30d"  # 30 days retention
      - "--storage.tsdb.path=/prometheus"
      - "--web.enable-remote-write-receiver"  # Accept remote writes

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana
    environment:
      GF_SECURITY_ADMIN_PASSWORD: "admin"
      GF_USERS_ALLOW_SIGN_UP: "false"

  node-exporter:
    image: prom/node-exporter:latest
    ports:
      - "9100:9100"
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro

volumes:
  prometheus_data:
  grafana_data:
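
After `docker compose up -d`, Prometheus's built-in health endpoints make it easy to verify the stack before opening Grafana:

```shell
docker compose up -d

# Liveness / readiness probes built into Prometheus
curl -s http://localhost:9090/-/healthy
curl -s http://localhost:9090/-/ready

# Inspect scrape targets and their health via the HTTP API
curl -s "http://localhost:9090/api/v1/targets" | grep -o '"health":"[a-z]*"'
```

If a target shows `"health":"down"`, check that the container can reach your app's `/metrics` endpoint by service name.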

Prometheus Configuration

# prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    environment: production

# Alerting
alerting:
  alertmanagers:
    - static_configs:
        - targets: ["alertmanager:9093"]

# Load alerting rules
rule_files:
  - "alerts/*.yml"

# Scrape configs
scrape_configs:
  # Scrape Prometheus itself
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  # Scrape Node.js app
  - job_name: "my-app"
    static_configs:
      - targets: ["app:3000"]
    metrics_path: /metrics
    scheme: http

  # Scrape Node Exporter (system metrics)
  - job_name: "node-exporter"
    static_configs:
      - targets: ["node-exporter:9100"]

  # Kubernetes pod discovery (if using K8s)
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
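
Prometheus ships with promtool, which validates this file (and the rule files it loads) before a reload picks it up:

```shell
# Validate configuration and alerting rules before reloading
promtool check config prometheus.yml
promtool check rules alerts/nodejs.yml

# Live reload without a restart — requires starting Prometheus
# with the --web.enable-lifecycle flag (not set in the compose file above)
curl -X POST http://localhost:9090/-/reload
```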

Alert Rules

# alerts/nodejs.yml
groups:
  - name: nodejs
    rules:
      - alert: HighErrorRate
        expr: |
          rate(http_requests_total{status_code=~"5.."}[5m])
          /
          rate(http_requests_total[5m]) > 0.05
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "High error rate: {{ $value | humanizePercentage }}"

      - alert: SlowRequests
        expr: |
          histogram_quantile(0.99, rate(http_request_duration_seconds_bucket[5m])) > 2
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "P99 latency above 2s: {{ $value }}s"
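
The `HighErrorRate` expression is just a ratio of per-second rates. As an illustration of the arithmetic `rate()` performs — with made-up counter samples observed 5 minutes apart:

```shell
# Counter values 300s apart (made-up numbers for illustration)
T=300
ERR_PREV=120; ERR_NOW=150      # http_requests_total{status_code=~"5.."}
ALL_PREV=9000; ALL_NOW=9450    # http_requests_total (all requests)

# rate() ≈ (now - prev) / window; the ratio of two rates cancels the window
RATIO=$(awk -v e=$((ERR_NOW - ERR_PREV)) -v a=$((ALL_NOW - ALL_PREV)) -v t="$T" \
  'BEGIN { printf "%.3f", (e/t) / (a/t) }')
echo "error ratio: $RATIO"
```

Here the ratio is 30/450 ≈ 0.067, above the 0.05 threshold, so after the 5-minute `for:` window the alert would fire.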

VictoriaMetrics: High-Performance Drop-In

VictoriaMetrics is a Prometheus-compatible time series database. Its storage format typically achieves around 10x better compression than Prometheus's TSDB, so the same hardware handles significantly more data.

Single-Node Setup

# docker-compose.yml — VictoriaMetrics replaces Prometheus storage
version: "3.8"

services:
  victoriametrics:
    image: victoriametrics/victoria-metrics:latest
    ports:
      - "8428:8428"
    volumes:
      - vm_data:/victoria-metrics-data
    command:
      - "--storageDataPath=/victoria-metrics-data"
      - "--retentionPeriod=1y"  # 1 year retention; bare numbers mean months (Prometheus defaults to 15 days)
      - "--httpListenAddr=:8428"

  vmagent:
    image: victoriametrics/vmagent:latest
    ports:
      - "8429:8429"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml  # Same config!
      - vmagent_data:/tmp/vmagent-remotewrite-data
    command:
      - "--promscrape.config=/etc/prometheus/prometheus.yml"
      - "--remoteWrite.url=http://victoriametrics:8428/api/v1/write"

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana

volumes:
  vm_data:
  vmagent_data:
  grafana_data:
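
Alternatively, keep Prometheus as the scraper and forward samples to VictoriaMetrics for long-term storage — the hybrid the TL;DR recommends. A minimal sketch of the `prometheus.yml` addition; the `victoriametrics` hostname assumes the compose file above:

```yaml
# prometheus.yml — ship scraped samples to VictoriaMetrics via remote write
remote_write:
  - url: http://victoriametrics:8428/api/v1/write
    queue_config:
      max_samples_per_send: 10000  # batch size; tune to your ingest volume
```

Prometheus keeps serving recent queries from local disk while VictoriaMetrics accumulates history.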

PromQL Compatibility

# These PromQL queries work identically on Prometheus and VictoriaMetrics

# P99 request latency
histogram_quantile(
  0.99,
  rate(http_request_duration_seconds_bucket[5m])
)

# Request rate by route
sum(rate(http_requests_total[5m])) by (route)

# Error rate
sum(rate(http_requests_total{status_code=~"5.."}[5m]))
/
sum(rate(http_requests_total[5m]))

# Active connections
active_connections

# Memory usage
process_resident_memory_bytes / 1024 / 1024  # MB
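
Because VictoriaMetrics implements the Prometheus querying API, the same expressions can be run over HTTP against port 8428 from the compose file above (`date -d` here is GNU date):

```shell
# Instant query — same API shape as Prometheus on :9090
curl -s "http://localhost:8428/api/v1/query" \
  --data-urlencode 'query=sum(rate(http_requests_total[5m])) by (route)'

# Range query over the last hour at 1-minute resolution
curl -s "http://localhost:8428/api/v1/query_range" \
  --data-urlencode 'query=active_connections' \
  --data-urlencode "start=$(date -d '1 hour ago' +%s)" \
  --data-urlencode "end=$(date +%s)" \
  --data-urlencode 'step=60'
```

This API compatibility is also why Grafana can point a standard Prometheus data source at VictoriaMetrics unchanged.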

MetricsQL (VictoriaMetrics Extensions)

# VictoriaMetrics adds MetricsQL extensions to PromQL

# Keep last value for sparse metrics
keep_last_value(some_gauge)

# Share of each time series in the total
sum(rate(http_requests_total[5m])) by (route)
/
ignoring(route) group_left sum(rate(http_requests_total[5m]))

# Running average
running_avg(rate(http_requests_total[5m]))

Grafana Mimir: Distributed Long-Term Metrics

Grafana Mimir stores metrics in object storage (S3, GCS, Azure Blob) — enabling unlimited retention at low cost and horizontal scaling for high-ingestion environments.

Docker Compose (Single-Binary Mode)

# docker-compose.yml — Mimir in single-binary mode (production uses microservices)
version: "3.8"

services:
  mimir:
    image: grafana/mimir:latest
    command:
      - "--config.file=/etc/mimir/mimir.yaml"
    ports:
      - "9009:9009"
    volumes:
      - ./mimir.yaml:/etc/mimir/mimir.yaml
      - mimir_data:/data

  grafana-agent:
    image: grafana/agent:latest
    ports:
      - "12345:12345"
    volumes:
      - ./agent.yaml:/etc/agent/config.yaml
    command:
      - "--config.file=/etc/agent/config.yaml"

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana

volumes:
  mimir_data:
  grafana_data:

# mimir.yaml
target: all  # Single binary mode

common:
  storage:
    backend: filesystem  # For dev; use s3 in production
    filesystem:
      dir: /data

blocks_storage:
  filesystem:
    dir: /data/blocks

alertmanager_storage:
  filesystem:
    dir: /data/alertmanager

ruler_storage:
  filesystem:
    dir: /data/rules

memberlist:
  join_members:
    - "mimir:7946"

limits:
  max_global_series_per_user: 1500000

# agent.yaml — Grafana Agent to scrape and send to Mimir
metrics:
  global:
    scrape_interval: 15s
    remote_write:
      - url: http://mimir:9009/api/v1/push
        headers:
          X-Scope-OrgID: demo-tenant  # Required for multi-tenancy

  configs:
    - name: default
      scrape_configs:
        - job_name: my-app
          static_configs:
            - targets: ["app:3000"]

Feature Comparison

| Feature | Prometheus | VictoriaMetrics | Grafana Mimir |
|---|---|---|---|
| Architecture | Single-node | Single-node (+ cluster) | Distributed microservices |
| Storage | Local disk | Local disk | Object storage (S3/GCS) |
| Default retention | 15 days | 1 month (configurable up to unlimited) | Unlimited |
| Disk efficiency | Baseline | ✅ 10x better | Good (object storage) |
| PromQL compatible | ✅ Native | ✅ + MetricsQL | ✅ |
| Multi-tenancy | ❌ | Partial | ✅ Native |
| High availability | Manual | ✅ Cluster mode | ✅ Built-in |
| Setup complexity | Low | Low | High |
| RAM at 1M series | ~8 GB | ~1 GB | Distributed |
| Alerting | Alertmanager | ✅ + vmalert | ✅ Compatible |
| Remote write | ✅ (send) | ✅ (send/receive) | ✅ (receive) |
| GitHub stars | ~55k | ~13k | ~4k |

When to Use Each

Choose Prometheus if:

  • You're getting started with metrics and want the most documentation and community resources
  • Short-term retention (15-30 days) is sufficient for your use case
  • Single server, small-to-medium scale (under 1M active series)
  • Kubernetes Service Monitor and PodMonitor resources are in your workflow

Choose VictoriaMetrics if:

  • You want Prometheus compatibility with dramatically better resource efficiency
  • Long-term retention (months or years) without object storage costs
  • You're migrating from Prometheus and want a drop-in upgrade
  • Single-binary simplicity vs Mimir's microservice complexity is preferred

Choose Grafana Mimir if:

  • You need unlimited retention stored in cheap object storage (S3 pricing)
  • Multi-tenant metrics isolation is required (SaaS platforms monitoring multiple customers)
  • Horizontal scaling to billions of series across multiple ingest nodes is needed
  • You're already in the Grafana ecosystem (Loki, Tempo, Grafana Cloud)

Methodology

Data sourced from official Prometheus, VictoriaMetrics, and Grafana Mimir documentation, VictoriaMetrics benchmark blog posts and comparison articles (victoriametrics.com/blog), and community benchmarks from r/devops and the CNCF observability working group. Resource usage figures from VictoriaMetrics' published benchmarks comparing against Prometheus with equivalent data. GitHub star counts as of February 2026.


Related: OpenTelemetry vs Sentry vs Datadog for distributed tracing and error monitoring, or Langfuse vs LangSmith vs Helicone for AI-specific observability.
