
Best Vector Database Clients for JavaScript 2026

PkgPulse Team

TL;DR

pgvector (via Prisma/Drizzle) wins if you already have Postgres. Pinecone wins if you want zero infrastructure and fast time-to-production. Qdrant wins for self-hosted performance at scale. In 2026, vector search has become infrastructure as common as full-text search — your choice comes down to whether you want a managed service (Pinecone, Weaviate Cloud) or self-hosted (Qdrant, pgvector). The JavaScript SDKs are all solid; the differentiator is the database itself.

Key Takeaways

  • pgvector: free, runs in your existing Postgres, ~500K npm installs (via pg/drizzle), HNSW indexing
  • Pinecone: managed, zero-ops, $0 free tier (1 index, 100K vectors), best managed DX
  • Qdrant: best performance/price ratio for self-hosted, strong filtering, @qdrant/js-client-rest ~50K downloads
  • Weaviate: GraphQL-first, built-in vectorization, weaviate-ts-client ~80K downloads
  • For RAG: pgvector via Prisma is sufficient up to ~10M vectors; Qdrant/Pinecone beyond that

pgvector (Postgres Extension)

Best for: teams already on Postgres, modest scale (<10M vectors), cost optimization

-- Enable the extension in Postgres (already available on Supabase/Neon):
CREATE EXTENSION IF NOT EXISTS vector;

// With Drizzle ORM (recommended):
// npm install drizzle-orm pg @types/pg
import { pgTable, text, jsonb, index, vector } from 'drizzle-orm/pg-core';
import { sql } from 'drizzle-orm';

export const documents = pgTable('documents', {
  id: text('id').primaryKey().default(sql`gen_random_uuid()`),
  content: text('content').notNull(),
  embedding: vector('embedding', { dimensions: 1536 }),  // OpenAI text-embedding-3-small
  metadata: jsonb('metadata'),
}, (table) => ({
  // HNSW index for fast approximate nearest-neighbor search:
  embeddingIndex: index('embedding_hnsw_idx')
    .using('hnsw', table.embedding.op('vector_cosine_ops')),
}));
// Insert with embedding:
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';

async function insertDocument(content: string, metadata: Record<string, unknown>) {
  const { embedding } = await embed({
    model: openai.embedding('text-embedding-3-small'),
    value: content,
  });

  // db: your Drizzle database instance (e.g. drizzle(pool))
  return db.insert(documents).values({
    content,
    embedding,
    metadata,
  });
}

// Similarity search with Drizzle:
async function semanticSearch(query: string, limit = 10) {
  const { embedding: queryEmbedding } = await embed({
    model: openai.embedding('text-embedding-3-small'),
    value: query,
  });

  // cosine distance (lower = more similar):
  return db
    .select({
      id: documents.id,
      content: documents.content,
      similarity: sql<number>`1 - (embedding <=> ${JSON.stringify(queryEmbedding)}::vector)`,
    })
    .from(documents)
    .orderBy(sql`embedding <=> ${JSON.stringify(queryEmbedding)}::vector`)
    .limit(limit);
}
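The `<=>` operator returns cosine *distance*, so `1 - distance` yields cosine similarity. The relationship can be sanity-checked in plain TypeScript (a standalone sketch, separate from the Drizzle query above):

```typescript
// Cosine similarity between two vectors: dot(a, b) / (|a| * |b|).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// pgvector's <=> operator returns 1 - cosineSimilarity(a, b):
const similarity = cosineSimilarity([1, 0], [1, 0]); // identical vectors → 1
const distance = 1 - similarity;                     // → 0 (closest possible)
```

This is why the query orders by raw `<=>` distance ascending but reports `1 - distance` as the similarity score.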

Pinecone

Best for: managed zero-ops vector search, rapid prototyping, <100M vectors

npm install @pinecone-database/pinecone
import { Pinecone } from '@pinecone-database/pinecone';

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

// Target an existing index (create it first via the console or pc.createIndex):
const index = pc.index('documents');

// Upsert vectors:
async function upsertDocuments(docs: Array<{ id: string; content: string; metadata: object }>) {
  // generateEmbeddings: your batch embedding helper (e.g. OpenAI's embeddings API)
  const embeddings = await generateEmbeddings(docs.map(d => d.content));

  await index.upsert(
    docs.map((doc, i) => ({
      id: doc.id,
      values: embeddings[i],
      metadata: { content: doc.content, ...doc.metadata },
    }))
  );
}

// Query:
async function search(query: string, topK = 10, filter?: object) {
  const queryEmbedding = await generateEmbedding(query);

  const result = await index.query({
    vector: queryEmbedding,
    topK,
    includeMetadata: true,
    filter,  // Metadata filtering: { category: { $eq: 'docs' } }
  });

  return result.matches.map(m => ({
    id: m.id,
    score: m.score,
    content: m.metadata?.content as string,
  }));
}
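Pinecone caps each upsert request (roughly 1,000 records or a few MB per call), so large ingests should be sent in batches. A sketch of a generic batching helper (`chunk` is our own name, not part of the SDK):

```typescript
// Split an array into batches of at most `size` items.
function chunk<T>(items: T[], size = 100): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Usage with the upsert example above:
// for (const batch of chunk(vectors, 100)) {
//   await index.upsert(batch);
// }
```

Batches of around 100 keep each request comfortably under the limits even with metadata attached; exact limits are worth checking against the current Pinecone docs.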

Qdrant

Best for: self-hosted, high performance, complex filtering, >10M vectors

docker run -p 6333:6333 qdrant/qdrant
npm install @qdrant/js-client-rest
import { QdrantClient } from '@qdrant/js-client-rest';

const client = new QdrantClient({ url: 'http://localhost:6333' });

// Create collection:
await client.createCollection('documents', {
  vectors: {
    size: 1536,
    distance: 'Cosine',
    on_disk: true,  // Offload to disk for large collections
  },
  optimizers_config: {
    memmap_threshold: 100000,  // Mmap for large collections
  },
  hnsw_config: { m: 16, ef_construct: 100 },
});
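The `on_disk` and `memmap_threshold` options matter because raw float32 vectors alone dominate RAM at scale. A back-of-envelope estimate (our own helper, not a Qdrant API; it counts vector data plus rough HNSW link overhead of `m * 2 * 4` bytes per point and ignores payloads):

```typescript
// Rough RAM estimate for a fully in-memory collection:
// float32 vectors (dims * 4 bytes) plus approximate HNSW graph links.
function estimateMemoryGB(vectors: number, dims: number, m = 16): number {
  const vectorBytes = vectors * dims * 4;
  const hnswBytes = vectors * m * 2 * 4;
  return (vectorBytes + hnswBytes) / 1024 ** 3;
}

// 10M OpenAI embeddings (1536 dims) need ~58 GB before on_disk offloading:
estimateMemoryGB(10_000_000, 1536); // ≈ 58.4
```

Numbers like this are why `on_disk: true` is worth enabling early for collections beyond a few million vectors.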

// Upsert with payload (metadata):
await client.upsert('documents', {
  wait: true,
  points: docs.map((doc, i) => ({
    id: doc.id,
    vector: embeddings[i],
    payload: { content: doc.content, category: doc.category, date: doc.date },
  })),
});

// Search with complex filter:
const result = await client.search('documents', {
  vector: queryEmbedding,
  limit: 10,
  score_threshold: 0.7,
  filter: {
    must: [
      { key: 'category', match: { value: 'technical' } },
      { key: 'date', range: { gte: '2025-01-01' } },
    ],
  },
  with_payload: true,
});
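Qdrant returns points as `{ id, score, payload }`. Mapping them into the same shape the Pinecone `search()` helper returns keeps the rest of the app backend-agnostic; a sketch, assuming the `content` payload field from the upsert example above:

```typescript
interface SearchHit {
  id: string | number;
  score: number;
  content: string;
}

// Map raw Qdrant points into a backend-agnostic result shape.
function toHits(
  points: Array<{ id: string | number; score: number; payload?: Record<string, unknown> | null }>
): SearchHit[] {
  return points.map(p => ({
    id: p.id,
    score: p.score, // cosine similarity: higher = more similar
    content: String(p.payload?.content ?? ''),
  }));
}
```

A thin adapter layer like this makes it cheap to start on pgvector and move to Qdrant or Pinecone later.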

Comparison Table

| | pgvector | Pinecone | Qdrant | Weaviate |
| --- | --- | --- | --- | --- |
| Hosting | Self / Supabase / Neon | Managed only | Self or Cloud | Self or Cloud |
| Free tier | ∞ (Postgres) | 1 index, 100K vectors | Self-hosted free | Self-hosted free |
| Scale limit | ~100M (practical) | Billions | Billions | Billions |
| Filtering | SQL (full Postgres) | Metadata filters | Complex nested | GraphQL |
| Hybrid search | ❌ (manual) | ✅ (sparse-dense) | ✅ | ✅ |
| SDK quality | Excellent (via Drizzle) | Good | Good | Complex |
| Best fit | Existing Postgres | Fast time-to-prod | Self-hosted scale | Built-in vectorize |

Recommendation

Use pgvector if:
  → Already on Postgres (Supabase, Neon, Railway)
  → <10M vectors
  → Want SQL filtering power
  → Cost is a concern

Use Pinecone if:
  → Want zero infrastructure management
  → Rapid prototyping / MVP
  → Small to medium scale (up to 100M vectors)

Use Qdrant if:
  → Self-hosting for cost/performance
  → Need advanced filtering
  → Scale >10M vectors with complex queries

Explore vector database package health scores on PkgPulse.
