
PgBouncer vs pgcat vs Supavisor: PostgreSQL Connection Pooling 2026

PkgPulse Team


TL;DR

PostgreSQL's connection model is expensive — each connection creates a new OS process consuming ~5-10MB RAM. At 1,000 concurrent connections, that's 5-10GB just for overhead. Connection poolers sit in front of PostgreSQL and multiplex many application connections onto a small pool of actual DB connections. PgBouncer is the battle-tested standard — single binary, ultralight (~2MB RAM per 1,000 clients), transaction-mode pooling, and used by virtually every major PostgreSQL deployment. pgcat is the modern Rust rewrite — adds query routing (read replicas), multi-tenant sharding, and a configuration-as-code approach while maintaining PgBouncer compatibility. Supavisor is Supabase's Elixir-based pooler — built for serverless and edge workloads, handles hundreds of thousands of connections and multiple tenants by design. For a traditional server app: PgBouncer. For read/write splitting and sharding: pgcat. For serverless workloads and multi-tenant SaaS: Supavisor.

Key Takeaways

  • PostgreSQL max_connections default: 100 — each connection is an OS process; pooling is essential
  • PgBouncer uses ~2MB RAM for 1,000 clients — the lightest pooler available
  • pgcat routes queries to read replicas: SELECT → replica, writes → primary, automatically
  • Supavisor supports 1M+ connections — purpose-built for Supabase's multi-tenant architecture
  • Transaction pooling breaks prepared statements — all three have trade-offs vs session pooling
  • pgcat is written in Rust — same connection protocol as PgBouncer, drop-in compatible
  • Supavisor exposes standard PostgreSQL protocol — your app connects as if it's connecting to Postgres directly

Why PostgreSQL Needs Connection Pooling

Without pooling — each request opens a DB connection:
  1,000 concurrent users
    → 1,000 PostgreSQL connections
    → 1,000 OS processes × 5-10MB = 5-10GB RAM just for connections
    → PostgreSQL max_connections exhausted
    → New requests: "FATAL: sorry, too many clients already"

With pooling:
  1,000 concurrent app connections
    → 20 actual PostgreSQL connections
    → Pool reuses connections for transactions
    → 99% RAM reduction for connection overhead
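The reduction above is simple arithmetic; a quick sanity check, using the ~5-10MB-per-backend figure from the text (midpoint assumed, so it lands at ~98% rather than a round 99%):

```typescript
// Connection overhead estimate, using the article's ~5-10MB-per-backend figure
const MB_PER_CONNECTION = 7.5; // midpoint of the 5-10MB range

function connectionOverheadMB(connections: number): number {
  return connections * MB_PER_CONNECTION;
}

const withoutPool = connectionOverheadMB(1000); // 7500 MB
const withPool = connectionOverheadMB(20);      // 150 MB
const savings = 1 - withPool / withoutPool;     // 0.98, i.e. ~98% reduction

console.log(withoutPool, withPool, savings.toFixed(2));
```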

Pooling Modes

Session mode:    1 app connection → 1 DB connection (held entire session)
                 Good compatibility, poor multiplexing

Transaction mode: 1 app connection per transaction → multiple apps share 1 DB connection
                  Best for high concurrency, breaks some session features

Statement mode:  1 DB connection per statement
                 Fastest, breaks transactions and prepared statements

Transaction mode is the standard choice for most applications. PgBouncer, pgcat, and Supavisor all support it.
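A toy simulation makes the transaction-mode caveat concrete: statements issued outside a transaction can land on different backend connections, so session-level `SET` state silently disappears. This is an illustrative sketch, not real pooler code:

```typescript
// Toy model of transaction-mode pooling: each statement outside a
// transaction may run on a different backend, so SET state is lost.
type Backend = { id: number; settings: Map<string, string> };

class ToyPool {
  private backends: Backend[] = [
    { id: 0, settings: new Map() },
    { id: 1, settings: new Map() },
  ];
  private next = 0;

  // Outside a transaction, every statement grabs whichever backend is free next.
  run(stmt: string): Backend {
    const b = this.backends[this.next];
    this.next = (this.next + 1) % this.backends.length;
    if (stmt.startsWith("SET ")) {
      const [, key, value] = stmt.match(/^SET (\w+) = '(.+)'$/)!;
      b.settings.set(key, value);
    }
    return b;
  }
}

const pool = new ToyPool();
const a = pool.run("SET statement_timeout = '5s'"); // lands on backend 0
const b = pool.run("SELECT pg_sleep(10)");          // lands on backend 1
console.log(a.id !== b.id);                          // true: the SET is on a different backend
console.log(b.settings.get("statement_timeout"));    // undefined: setting silently lost
```

The real-world fix is the same in all three poolers: keep session state inside one transaction (e.g. `SET LOCAL`), because the pooler pins a single backend for the duration of a transaction.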


PgBouncer: The Battle-Tested Standard

PgBouncer has been the default PostgreSQL pooler since 2007. It's single-threaded by design, ultralight, and built into managed offerings such as Heroku Postgres and Azure Database for PostgreSQL, as well as countless self-hosted deployments.

Installation

# Ubuntu/Debian
apt install pgbouncer

# macOS
brew install pgbouncer

# Docker
docker run -d \
  --name pgbouncer \
  -e DATABASE_URL="postgres://user:pass@db:5432/myapp" \
  -e POOL_MODE=transaction \
  -e MAX_CLIENT_CONN=1000 \
  -e DEFAULT_POOL_SIZE=20 \
  -p 5432:5432 \
  edoburu/pgbouncer

Configuration

# pgbouncer.ini

[databases]
# Format: alias = host=... port=... dbname=... user=...
myapp = host=postgres-primary port=5432 dbname=myapp

# Read replica for read-only connections (manual setup)
myapp_ro = host=postgres-replica port=5432 dbname=myapp

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 5432

# Authentication
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt

# Pool mode
pool_mode = transaction          # transaction | session | statement

# Limits
max_client_conn = 1000           # Max connections from apps
default_pool_size = 20           # Connections to PostgreSQL per database+user pair
reserve_pool_size = 5            # Reserve for when pool is full

# Timeouts
server_idle_timeout = 600        # Close idle DB connections after 10 minutes
client_idle_timeout = 0          # Never close idle client connections

# Logging
log_connections = 0
log_disconnections = 0
log_pooler_errors = 1

# Admin
admin_users = pgbouncer_admin
stats_users = pgbouncer_stats

# TLS
server_tls_sslmode = require

# /etc/pgbouncer/userlist.txt
# "username" "md5hash_of_password"
"app_user" "md5a1b2c3d4..."
"pgbouncer_admin" "md5..."
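With `auth_type = md5`, each `userlist.txt` entry is the literal prefix `md5` followed by the MD5 hex digest of the password concatenated with the username. A sketch for generating an entry (hypothetical credentials; assumes `md5sum` from coreutils — on macOS substitute `md5 -q`):

```shell
# Hypothetical credentials for illustration
USERNAME="app_user"
PASSWORD="secret"

# userlist.txt format: "md5" + md5(password + username)
HASH="md5$(printf '%s%s' "$PASSWORD" "$USERNAME" | md5sum | cut -d' ' -f1)"

echo "\"$USERNAME\" \"$HASH\""   # paste this line into userlist.txt
```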

Connecting to PgBouncer from Node.js

// PgBouncer is transparent — connect exactly like you would to PostgreSQL
import postgres from "postgres";

const sql = postgres({
  host: "pgbouncer",    // PgBouncer hostname
  port: 5432,           // PgBouncer port
  database: "myapp",
  username: "app_user",
  password: process.env.DB_PASSWORD,
  max: 10,              // Keep this LOW — PgBouncer does the real pooling
  // Transaction mode: disable prepared statements
  prepare: false,       // IMPORTANT for transaction mode
});

// Same with pg (node-postgres)
import { Pool } from "pg";

const pool = new Pool({
  host: "pgbouncer",
  port: 5432,
  database: "myapp",
  user: "app_user",
  password: process.env.DB_PASSWORD,
  max: 10,
});

Monitoring PgBouncer

# Connect to PgBouncer admin console
psql -h pgbouncer -p 5432 -U pgbouncer_admin pgbouncer

# Show pool stats
SHOW POOLS;

# Show active connections
SHOW CLIENTS;

# Show server connections
SHOW SERVERS;

# Show overall stats
SHOW STATS;

# Reload config without restart
RELOAD;

pgcat: Modern Rust Pooler with Query Routing

pgcat is a PostgreSQL pooler and proxy written in Rust. It's fully compatible with PgBouncer's protocol but adds query routing to read replicas, sharding, and a more modern configuration model.

Installation

# Docker (recommended)
docker run -d \
  --name pgcat \
  -p 5432:5432 \
  -v $(pwd)/pgcat.toml:/etc/pgcat/pgcat.toml \
  ghcr.io/postgresml/pgcat:latest

# Binary
# Download from https://github.com/postgresml/pgcat/releases

Configuration

# pgcat.toml

[general]
host = "0.0.0.0"
port = 5432
enable_prometheus_exporter = true
prometheus_exporter_port = 9930
log_level = "info"

[pools.myapp]
  pool_mode = "transaction"
  default_role = "any"          # "primary" | "replica" | "any" (load balance)
  query_parser_enabled = true   # Enable read/write splitting
  primary_reads_enabled = false # Don't send reads to primary when replicas available

  # pgcat indexes users numerically, with the name in a username field
  [pools.myapp.users.0]
  username = "app_user"
  password = "secretpassword"
  pool_size = 20
  statement_timeout = 30000   # 30 second query timeout

  [pools.myapp.shards]
    # Single server (add more shards for sharding)
    [pools.myapp.shards.0]
    servers = [
      ["postgres-primary", 5432, "primary"],
      ["postgres-replica-1", 5432, "replica"],
      ["postgres-replica-2", 5432, "replica"],
    ]
    database = "myapp"

Query Routing in Action

// pgcat automatically routes based on query type
// No changes needed in your application code

import postgres from "postgres";

const sql = postgres({
  host: "pgcat",
  port: 5432,
  database: "myapp",
  username: "app_user",
  password: process.env.DB_PASSWORD,
  prepare: false,  // Still needed for transaction mode
});

// These automatically go to READ REPLICAS:
const users = await sql`SELECT * FROM users WHERE active = true`;
const post = await sql`SELECT * FROM posts WHERE id = ${postId}`;

// These automatically go to PRIMARY:
await sql`INSERT INTO users (email, name) VALUES (${email}, ${name})`;
await sql`UPDATE posts SET view_count = view_count + 1 WHERE id = ${postId}`;
await sql`DELETE FROM sessions WHERE expires_at < NOW()`;

// Explicit routing: pgcat's SET SERVER ROLE command pins subsequent
// queries in this session to the primary (or 'replica' / 'auto')
await sql.unsafe("SET SERVER ROLE TO 'primary'");
const result = await sql`SELECT * FROM users WHERE id = ${userId}`;

Multi-Tenant Sharding

# pgcat.toml — sharding configuration
[pools.myapp_sharded]
  pool_mode = "transaction"
  sharding_function = "pg_bigint_hash"  # Hash-based sharding
  shards = 2

  [pools.myapp_sharded.shards]
    [pools.myapp_sharded.shards.0]
    servers = [
      ["shard-0-primary", 5432, "primary"],
      ["shard-0-replica", 5432, "replica"],
    ]
    database = "myapp"

    [pools.myapp_sharded.shards.1]
    servers = [
      ["shard-1-primary", 5432, "primary"],
      ["shard-1-replica", 5432, "replica"],
    ]
    database = "myapp"

/* Route query to specific shard via comment */
/* pgcat shard: 0 */ SELECT * FROM users WHERE tenant_id = 'abc';
/* pgcat shard: 1 */ SELECT * FROM users WHERE tenant_id = 'xyz';
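On the application side, the shard comment can be built per tenant. A sketch assuming the two-shard layout above and a hypothetical tenant-to-shard map (pgcat's `pg_bigint_hash` function is the automatic alternative; this shows only the explicit comment route):

```typescript
// Hypothetical tenant-to-shard assignment for illustration
const TENANT_SHARD: Record<string, number> = {
  abc: 0,
  xyz: 1,
};

// Build the pgcat shard-routing comment for a tenant's queries
function shardComment(tenantId: string): string {
  const shard = TENANT_SHARD[tenantId];
  if (shard === undefined) throw new Error(`unknown tenant: ${tenantId}`);
  return `/* pgcat shard: ${shard} */`;
}

console.log(shardComment("abc")); // "/* pgcat shard: 0 */"
```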

Supavisor: Serverless-First Pooler

Supavisor is Supabase's connection pooler written in Elixir/OTP. It's designed for serverless workloads where thousands of short-lived connections (Lambda functions, Vercel functions) overwhelm traditional poolers.

Why Serverless Needs Different Pooling

Traditional serverless problem:
  Lambda function starts → opens DB connection → runs query → closes connection
  × 10,000 concurrent invocations = 10,000 connection open/close cycles

PgBouncer limitation:
  - Single-threaded: max ~10,000 clients (CPU-bound)
  - Each connection attempt = PgBouncer overhead

Supavisor advantage:
  - Elixir/OTP: designed for millions of lightweight processes
  - Handles 1M+ client connections efficiently
  - Persistent server-side pool to PostgreSQL (even as clients come/go)
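On the client side, the usual serverless pattern still applies: create the client at module scope so warm invocations reuse it, and keep `max` low since the pooler holds the real server-side pool. A sketch with hypothetical env vars and event shape:

```typescript
import postgres from "postgres";

// Module scope: reused across warm invocations of this function instance.
// Supavisor absorbs the connection churn from cold starts across instances.
const sql = postgres(process.env.SUPABASE_POOLED_URL!, {
  prepare: false, // required in transaction mode
  max: 1,         // one connection per instance; real pooling happens in Supavisor
});

// Hypothetical handler shape for illustration
export async function handler(event: { userId: string }) {
  const [user] = await sql`SELECT * FROM users WHERE id = ${event.userId}`;
  return { statusCode: 200, body: JSON.stringify(user) };
}
```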

Using Supavisor (Supabase Hosted)

// Supabase automatically provides Supavisor
// Connection string from Supabase dashboard includes pooler port (6543)

// Direct (bypasses pooler — use for migrations, long transactions)
const directUrl = "postgres://user:pass@db.project.supabase.co:5432/postgres";

// Pooled (via Supavisor — use for app queries)
const pooledUrl = "postgres://user:pass@db.project.supabase.co:6543/postgres";

// With Drizzle ORM
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";

// Application queries — pooled connection
const pooledSql = postgres(process.env.SUPABASE_POOLED_URL!, {
  prepare: false,  // Required for Supavisor transaction mode
});
export const db = drizzle(pooledSql);

// Migrations — direct connection (avoids pooler timeout issues)
const migrationSql = postgres(process.env.SUPABASE_DIRECT_URL!);
export const migrationDb = drizzle(migrationSql);

Self-Hosted Supavisor

# docker-compose.yml — self-hosted Supavisor
version: "3.8"

services:
  supavisor:
    image: supabase/supavisor:latest
    ports:
      - "5432:5432"   # PostgreSQL protocol
      - "4000:4000"   # HTTP API
    environment:
      PORT: "4000"
      POSTGRES_PORT: "5432"
      POSTGRES_DB: "postgres"
      DATABASE_URL: "ecto://supavisor:password@postgres:5432/supavisor"
      CLUSTER_POSTGRES: "true"
      SECRET_KEY_BASE: "your-secret-key-base-min-64-chars"
      VAULT_ENC_KEY: "your-32-char-encryption-key"
      API_JWT_SECRET: "your-api-jwt-secret"
      METRICS_JWT_SECRET: "your-metrics-jwt-secret"
      REGION: "us-east-1"
      ERL_AFLAGS: "-proto_dist inet_tcp"

Supavisor API — Add Tenant

// Supavisor manages tenants via HTTP API
const response = await fetch("http://supavisor:4000/api/tenants", {
  method: "PUT",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${API_JWT_TOKEN}`,
  },
  body: JSON.stringify({
    tenant: {
      db_host: "postgres",
      db_port: 5432,
      db_name: "myapp",
      db_user: "app_user",
      db_password: "password",
      pool_size: 20,
      pool_mode: "transaction",
      upstream_ssl: false,
    },
  }),
});

Feature Comparison

| Feature | PgBouncer | pgcat | Supavisor |
|---|---|---|---|
| Language | C | Rust | Elixir/OTP |
| RAM per 1,000 clients | ~2MB | ~5MB | ~10MB |
| Max connections | ~10,000 | ~50,000 | 1,000,000+ |
| Read/write splitting | ❌ | ✅ Auto | ❌ |
| Sharding | ❌ | ✅ | ❌ |
| Serverless optimized | ❌ | ❌ | ✅ |
| Multi-tenant | Manual | Partial | ✅ Native |
| Pool modes | Session/Tx/Stmt | Session/Tx | Session/Tx |
| Prepared statements | ❌ (tx mode) | ❌ (tx mode) | ❌ (tx mode) |
| Prometheus metrics | ❌ | ✅ | ✅ |
| Config format | INI | TOML | HTTP API |
| Production maturity | ✅ 15+ years | Growing | Growing |
| GitHub stars | 2.6k | 2.8k | 1.6k |

When to Use Each

Choose PgBouncer if:

  • Traditional server app (not serverless) with stable connection counts
  • Simplicity and proven reliability over a decade of production use
  • Minimum resource footprint — 2MB RAM for 1,000 clients is still unbeatable
  • You're on a managed PostgreSQL service (AWS RDS, Heroku) that already includes PgBouncer

Choose pgcat if:

  • You have PostgreSQL read replicas and want automatic query routing without application changes
  • Multi-tenant sharding across multiple PostgreSQL instances is required
  • You want Rust's performance characteristics and PgBouncer protocol compatibility
  • Better observability (per-pool metrics, query statistics) is needed

Choose Supavisor if:

  • Your application runs in serverless functions (Vercel, Lambda, Cloudflare Workers)
  • You're already on Supabase (it's included and configured automatically)
  • You need to handle connection counts in the hundreds of thousands
  • Multi-tenant SaaS where each tenant has an isolated connection pool

Methodology

Data sourced from official PgBouncer documentation (pgbouncer.org), pgcat documentation and GitHub repository (github.com/postgresml/pgcat), Supavisor documentation (supabase.com/docs/guides/database/connecting-to-postgres), benchmarks from pgcat's README, GitHub star counts as of February 2026, and community discussions on the Supabase Discord and r/PostgreSQL. Connection overhead figures from PostgreSQL documentation on connection handling.


Related: Neon vs Supabase Postgres vs Tembo for the PostgreSQL hosting platforms that benefit most from connection pooling, or Drizzle ORM vs Prisma vs TypeORM for the ORM layer that connects through these poolers.
