TL;DR
archiver is the best choice for server-side archive creation — it uses Node.js streams, supports ZIP and TAR with configurable compression, and handles large archives efficiently without loading everything into memory. adm-zip has a synchronous, object-oriented API — easy to use for small archives and extraction, but it loads everything into memory (bad for large files). JSZip is the browser-compatible option — it works in browsers and Node.js, has a promise-based API, and is best when you need client-side ZIP generation (download a ZIP from the browser). In 2026: archiver for server-side creation, JSZip for browser-side, adm-zip for simple read/write of small ZIPs.
Key Takeaways
- archiver: ~4M weekly downloads — streaming Node.js API, ZIP + TAR, handles large archives
- adm-zip: ~3M weekly downloads — synchronous in-memory API, simple extract/create for small files
- jszip: ~8M weekly downloads — browser + Node.js, promise-based, client-side ZIP generation
- archiver uses Node.js streams — no memory issues with large files
- adm-zip loads entire archive into memory — avoid for files >50MB
- JSZip's browser support makes it unique — generate ZIP files in the browser for user download
archiver
archiver — streaming archive creation:
Create a ZIP
import archiver from "archiver"
import fs from "fs"
// Create an output stream:
const output = fs.createWriteStream("archive.zip")
const archive = archiver("zip", {
zlib: { level: 9 }, // 0=no compression, 9=maximum compression
})
// Pipe archive to the file:
archive.pipe(output)
// Add a single file:
archive.file("src/index.ts", { name: "index.ts" })
// Add a file with a different name in the archive:
archive.file("dist/app.js", { name: "app/bundle.js" })
// Add a buffer as a file:
const content = Buffer.from("# README\nThis is a readme file.\n")
archive.append(content, { name: "README.md" })
// Add an entire directory:
archive.directory("src/", "source") // src/ → source/ in archive
archive.directory("public/", false) // public/ → root of archive
// Add files matching a glob:
archive.glob("**/*.ts", { cwd: "src" })
// Handle events:
output.on("close", () => {
console.log(`Archive created: ${archive.pointer()} bytes`)
})
archive.on("warning", (err) => {
if (err.code === "ENOENT") console.warn(err)
else throw err
})
archive.on("error", (err) => {
throw err
})
// Finalize:
await archive.finalize()
Stream ZIP as Express response (download)
import express from "express"
import archiver from "archiver"
const app = express()
app.get("/download/project/:id", async (req, res) => {
const projectId = req.params.id
// Set response headers for file download:
res.setHeader("Content-Type", "application/zip")
res.setHeader("Content-Disposition", `attachment; filename="project-${projectId}.zip"`)
// Create archive and pipe directly to response:
const archive = archiver("zip", { zlib: { level: 6 } })
archive.pipe(res)
// Add project files:
archive.directory(`/projects/${projectId}/src`, "src")
archive.directory(`/projects/${projectId}/public`, "public")
archive.file(`/projects/${projectId}/package.json`, { name: "package.json" })
archive.file(`/projects/${projectId}/README.md`, { name: "README.md" })
await archive.finalize()
// Streams directly to client — no temp file needed!
})
TAR with gzip compression
import archiver from "archiver"
import fs from "fs"
const output = fs.createWriteStream("backup.tar.gz")
const archive = archiver("tar", {
gzip: true,
gzipOptions: { level: 6 },
})
archive.pipe(output)
archive.directory("data/", "data")
archive.glob("logs/**/*.log", { cwd: "/var/app" })
await archive.finalize()
Progress tracking
import archiver from "archiver"
const archive = archiver("zip")
archive.on("progress", (progress) => {
const { entries, fs: fsInfo } = progress
console.log(`Entries: ${entries.processed}/${entries.total}`)
console.log(`Bytes processed: ${fsInfo.processedBytes}`)
})
archive.on("entry", (entry) => {
console.log(`Added: ${entry.name}`)
})
adm-zip
adm-zip — synchronous in-memory ZIP:
Create a ZIP
import AdmZip from "adm-zip"
const zip = new AdmZip()
// Add files from disk:
zip.addLocalFile("src/index.ts")
zip.addLocalFile("package.json")
// Add a directory:
zip.addLocalFolder("src", "src") // src/ → src/ in archive
// Add from buffer:
zip.addFile("config.json", Buffer.from(JSON.stringify({ version: "1.0.0" })))
// Write to file:
zip.writeZip("archive.zip")
// Or get as buffer:
const buffer = zip.toBuffer()
// Send as response:
res.setHeader("Content-Type", "application/zip")
res.send(buffer)
Extract a ZIP
import AdmZip from "adm-zip"
// Extract all to a directory:
const zip = new AdmZip("archive.zip")
zip.extractAllTo("/output/", true) // true = overwrite existing
// Extract a specific file:
zip.extractEntryTo("src/index.ts", "/output/src/", false, true)
// Read a file from ZIP without extracting:
const entry = zip.getEntry("package.json")
if (entry) {
const content = entry.getData().toString("utf8")
const pkg = JSON.parse(content)
console.log(pkg.name, pkg.version)
}
// List all entries:
zip.getEntries().forEach((entry) => {
if (!entry.isDirectory) {
console.log(entry.entryName, entry.header.size)
}
})
Read/modify existing ZIP
import AdmZip from "adm-zip"
// Open existing ZIP:
const zip = new AdmZip("existing.zip")
// Update a file inside the ZIP:
zip.updateFile("README.md", Buffer.from("# Updated README\n"))
// Add a new file to existing ZIP:
zip.addFile("CHANGELOG.md", Buffer.from("# Changelog\n\n## v2.0.0\n"))
// Delete a file:
zip.deleteFile("old-file.txt")
// Save changes:
zip.writeZip("existing.zip") // Overwrite
// Or: zip.writeZip("updated.zip") // New file
JSZip
JSZip — browser + Node.js ZIP:
Create a ZIP (browser-compatible)
import JSZip from "jszip"
import { saveAs } from "file-saver" // Browser: triggers download
// Create ZIP in browser:
const zip = new JSZip()
// Add files:
zip.file("README.md", "# Project\nGenerated on " + new Date().toISOString())
zip.file("data.json", JSON.stringify({ version: "1.0.0" }, null, 2))
// Add a folder:
const srcFolder = zip.folder("src")
srcFolder?.file("index.ts", "export const hello = 'world'")
srcFolder?.file("utils.ts", "export function add(a: number, b: number) { return a + b }")
// Generate ZIP:
const blob = await zip.generateAsync({ type: "blob" })
// Browser — trigger download:
saveAs(blob, "project.zip")
React file download example
import JSZip from "jszip"
import { useState } from "react"
interface FileData {
name: string
content: string
}
function ExportButton({ files }: { files: FileData[] }) {
const [loading, setLoading] = useState(false)
async function downloadZip() {
setLoading(true)
try {
const zip = new JSZip()
for (const file of files) {
zip.file(file.name, file.content)
}
const content = await zip.generateAsync({
type: "blob",
compression: "DEFLATE",
compressionOptions: { level: 6 },
})
// Create download link:
const url = URL.createObjectURL(content)
const link = document.createElement("a")
link.href = url
link.download = "export.zip"
link.click()
URL.revokeObjectURL(url)
} finally {
setLoading(false)
}
}
return (
<button onClick={downloadZip} disabled={loading}>
{loading ? "Generating..." : "Download ZIP"}
</button>
)
}
Read ZIP (browser or Node.js)
import JSZip from "jszip"
import fs from "fs"
// Node.js — read from file:
const zipBuffer = fs.readFileSync("archive.zip")
const zip = await JSZip.loadAsync(zipBuffer)
// Browser — from File input:
// const zip = await JSZip.loadAsync(event.target.files[0])
// Iterate and read files:
const files: Record<string, string> = {}
for (const [filename, zipEntry] of Object.entries(zip.files)) {
if (!zipEntry.dir) {
const content = await zipEntry.async("string")
files[filename] = content
console.log(`${filename}: ${content.length} chars`)
}
}
Generate as different types
// JSZip supports multiple output types:
const zip = new JSZip()
zip.file("test.txt", "Hello")
// For browsers:
const blob = await zip.generateAsync({ type: "blob" })
const uint8array = await zip.generateAsync({ type: "uint8array" })
const base64 = await zip.generateAsync({ type: "base64" })
// For Node.js:
const buffer = await zip.generateAsync({ type: "nodebuffer" })
const stream = zip.generateNodeStream({ type: "nodebuffer", streamFiles: true })
stream.pipe(fs.createWriteStream("output.zip"))
Feature Comparison
| Feature | archiver | adm-zip | JSZip |
|---|---|---|---|
| Streaming (no full memory) | ✅ | ❌ | ❌ |
| Browser support | ❌ | ❌ | ✅ |
| Sync API | ❌ | ✅ | ❌ |
| TAR support | ✅ | ❌ | ❌ |
| Extraction | ❌ | ✅ | ✅ |
| Modify existing ZIP | ❌ | ✅ | ✅ |
| Large file support | ✅ | ⚠️ | ⚠️ |
| Password protection | ❌ | ⚠️ (reads legacy ZipCrypto) | ❌ |
| TypeScript | ✅ | ✅ | ✅ |
| Weekly downloads | ~4M | ~3M | ~8M |
When to Use Each
Choose archiver if:
- Creating large archives on the server (logs, backups, user exports)
- Need streaming output — pipe directly to HTTP response or S3 upload
- Building TAR/TAR.GZ archives in addition to ZIP
- Performance matters: don't load entire archive into memory
Choose adm-zip if:
- Reading and modifying existing ZIP files
- Synchronous operations in simple scripts or CLIs
- Small archives where memory usage isn't a concern
- Need to read legacy password-protected (ZipCrypto) archives
Choose JSZip if:
- Generating ZIP files in the browser (client-side export)
- Cross-platform code that runs in both browser and Node.js
- Need a promise-based async API
- Building a download ZIP button in a React/Vue app
Streaming Archives to S3 and Cloud Storage
The streaming architecture of archiver enables a pattern that adm-zip and JSZip cannot support: piping an archive directly to an S3 upload without first writing to disk or accumulating a complete buffer in memory. The AWS SDK v3's Upload class accepts a readable stream, so you can pipe an archiver output directly into an S3 multipart upload. This means you can create a 2GB archive of user export data and upload it to S3 using only the memory required for the streaming buffer — typically a few megabytes — regardless of the total archive size.
The pattern requires connecting archiver's output stream to the S3 upload input. You create a PassThrough stream as an intermediary, start the S3 Upload with the PassThrough as the body, pipe archiver into the PassThrough, then finalize archiver. The upload and archiving proceed concurrently: as archiver produces compressed chunks, they flow directly into the S3 multipart upload. This pattern is common in data export APIs, backup services, and any workflow where you're aggregating files from multiple sources — local disk, database blobs, or external API responses — into a single downloadable archive without a temporary file step.
adm-zip's toBuffer() call and JSZip's generateAsync({ type: "nodebuffer" }) both require the complete archive to exist in memory before you can start an upload. For archives under 50MB this is acceptable, but for larger exports the memory spike can cause Lambda function OOM errors or degrade server performance under concurrent load.
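The PassThrough wiring described above can be sketched as follows (a sketch assuming the @aws-sdk/client-s3 and @aws-sdk/lib-storage packages; the bucket name, key, and source directory are placeholders):

```typescript
import { PassThrough } from "node:stream"
import { S3Client } from "@aws-sdk/client-s3"
import { Upload } from "@aws-sdk/lib-storage"
import archiver from "archiver"

const s3 = new S3Client({})
const passThrough = new PassThrough()

// Start the multipart upload with the PassThrough as the body:
const upload = new Upload({
  client: s3,
  params: {
    Bucket: "my-exports", // placeholder
    Key: "exports/user-123.zip", // placeholder
    Body: passThrough,
    ContentType: "application/zip",
  },
})

// Pipe archiver into the PassThrough; compressed chunks flow into the
// multipart upload as they are produced, so memory use stays flat:
const archive = archiver("zip", { zlib: { level: 6 } })
archive.on("error", (err) => passThrough.destroy(err))
archive.pipe(passThrough)

archive.directory("/data/user-123/", false) // placeholder source
await archive.finalize() // all entries written, stream ends

await upload.done() // resolves when the multipart upload completes
```

Note that archiving and uploading proceed concurrently: finalize() ends the archive stream, and upload.done() resolves once S3 has received the final part.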
Extracting and Reading Archives: adm-zip vs JSZip
archiver is write-only — it cannot read or extract archives at all. For reading existing ZIP files, adm-zip and JSZip have distinct advantages depending on context.
adm-zip is faster for synchronous extraction of small archives because it reads the ZIP's central directory index directly, allowing random access to individual files without streaming through the entire archive. zip.getEntry("config.json") retrieves a specific file without reading surrounding entries. This makes adm-zip practical for scenarios like: reading a VS Code extension's package.json from inside a .vsix file, extracting a specific JSON config from an uploaded archive, or inspecting an npm package tarball's package.json during a registry operation.
JSZip's extraction model is promise-based and better suited to browser environments where you're dealing with File objects from drag-and-drop uploads. JSZip.loadAsync(file) accepts a File, Blob, ArrayBuffer, or Uint8Array, making it the only practical option when users upload ZIP files in a web application. The async API also handles large archives in the browser more gracefully because it processes chunks without blocking the main thread.
For server-side extraction of large archives, neither library is optimal — the unzipper or yauzl packages are purpose-built for streaming extraction and handle large ZIPs without loading the complete file into memory.
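For reference, streaming extraction with unzipper is essentially one pipe (a minimal sketch; paths are placeholders, and you should still validate entry paths yourself):

```typescript
import fs from "node:fs"
import unzipper from "unzipper"

// Reads the ZIP as a stream and writes entries as they are decompressed,
// keeping memory use flat regardless of archive size:
fs.createReadStream("large-archive.zip")
  .pipe(unzipper.Extract({ path: "/output" }))
  .on("close", () => console.log("Extraction complete"))
```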
Compression Levels and File Size Trade-offs
All three libraries support ZIP deflate compression, but they surface the controls differently. archiver exposes zlib: { level: 0-9 } where level 0 is store-only (no compression, fastest) and level 9 is maximum compression (slowest). The default is level 6 — a good balance for most content. For log files and text-heavy data, level 9 might reduce size by 15-20% compared to level 6 while taking 2-3x longer. For already-compressed content like JPEG images or MP4 videos, any deflate compression level adds CPU time without meaningfully reducing file size; using archiver("zip", { zlib: { level: 0 } }) avoids this wasted work.
JSZip exposes the same compression: "DEFLATE" and compressionOptions: { level: 1-9 } interface. adm-zip uses compression level 8 by default and doesn't expose fine-grained control. For browser-side generation where CPU time translates directly to UI responsiveness, JSZip with level 3-4 typically produces acceptable compression ratios while completing fast enough to avoid blocking the download trigger.
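One way to act on the "don't deflate already-compressed content" advice is a small extension check before adding each entry. This helper is our own sketch (shouldStore and the extension list are illustrative, not part of any library):

```typescript
// Hypothetical helper: decide per entry whether to store (no compression)
// rather than deflate, based on the file extension. The list is illustrative.
const ALREADY_COMPRESSED = new Set([
  ".jpg", ".jpeg", ".png", ".webp", ".gif",
  ".mp3", ".mp4", ".zip", ".gz", ".br",
])

function shouldStore(filename: string): boolean {
  const dot = filename.lastIndexOf(".")
  return dot !== -1 && ALREADY_COMPRESSED.has(filename.slice(dot).toLowerCase())
}
```

With archiver, this could drive a per-entry store flag (e.g. archive.file(p, { name, store: shouldStore(p) })); verify that your archiver version supports per-entry store, or fall back to zlib: { level: 0 } for media-only archives.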
Handling Encoding and Unicode Filenames
ZIP archives have a complicated history with character encoding that can cause subtle bugs when filenames contain non-ASCII characters. The original ZIP specification used IBM Code Page 437 for filenames; modern archives use UTF-8 with the language encoding flag (EFS) set to indicate UTF-8. Most tools and libraries now correctly generate and read UTF-8 filenames, but legacy archives from older Windows tools may use the system's local code page. archiver always writes UTF-8 filenames. adm-zip reads the EFS flag and decodes accordingly, but archives without the flag that contain non-ASCII characters may decode incorrectly on non-matching systems. JSZip behaves the same way — it defaults to UTF-8 and provides no automatic fallback for legacy encodings.
When processing user-uploaded archives that may have been created on Windows with non-Latin filenames (Chinese, Japanese, Russian characters), validate the encoding by checking whether decoded filenames are valid Unicode, falling back to a detected encoding using a library like chardet if necessary. This edge case is rare enough that most applications never encounter it, but it's worth knowing about when building tools that process archives from diverse sources.
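A lightweight version of that validation can be sketched with built-ins (the helper names are ours): a lossy UTF-8 decode substitutes U+FFFD replacement characters for invalid bytes, so their presence in a decoded filename is a strong hint the wrong code page was used.

```typescript
// Flags entry names that were probably decoded with the wrong encoding:
// a lossy UTF-8 decode inserts U+FFFD for invalid byte sequences, and
// U+FFFD never appears in legitimately named files.
function looksMisdecoded(name: string): boolean {
  return name.includes("\uFFFD")
}

// If you have the raw name bytes (e.g. from a low-level ZIP parser),
// a strict decode tells you whether they are valid UTF-8 at all:
function isValidUtf8(bytes: Uint8Array): boolean {
  try {
    new TextDecoder("utf-8", { fatal: true }).decode(bytes)
    return true
  } catch {
    return false
  }
}
```

When isValidUtf8 fails, that is the point to hand the raw bytes to a detection library like chardet.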
Security Considerations for Archive Handling
Archive libraries introduce security risks that deserve explicit attention, particularly when processing user-uploaded ZIP files. The most dangerous vulnerability is the zip slip attack: a maliciously crafted ZIP file can contain entries with path traversal sequences like ../../etc/cron.d/malicious, and if your extraction code writes entries to disk without sanitizing the path, the malicious file lands outside the intended extraction directory. adm-zip's extractAllTo(destination, true) is vulnerable to zip slip unless you validate each entry path before extraction. The correct mitigation is to resolve each entry name against the destination directory and confirm the result starts with the destination path plus a trailing separator (the separator matters: without it, /output would also match a sibling like /output-evil). JSZip requires the same check when iterating zip.files and writing to disk. Neither library validates paths for you by default.
Additionally, ZIP bombs — highly compressed archives that decompress to gigabytes — can exhaust disk space or memory. Implement a decompressed size limit: in adm-zip, read each entry's header.size before extracting; JSZip doesn't expose uncompressed sizes in its public API, so track bytes as you read each entry and abort once the running total exceeds your threshold.
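The zip slip guard described above can be sketched as a small helper (the name is ours; apply it to every entry before writing, whether iterating adm-zip's getEntries() or JSZip's zip.files):

```typescript
import path from "node:path"

// Returns true only if the entry would land inside the destination
// directory. The trailing separator prevents "/out" from matching a
// sibling directory like "/out-evil"; absolute entry names and
// ../ traversal both resolve outside destRoot and are rejected.
function isSafeEntryPath(destination: string, entryName: string): boolean {
  const destRoot = path.resolve(destination)
  const target = path.resolve(destRoot, entryName)
  return target === destRoot || target.startsWith(destRoot + path.sep)
}
```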
Methodology
Download data from npm registry (weekly average, February 2026). Feature comparison based on archiver v5.x, adm-zip v0.5.x, and jszip v3.x.
Compare file processing and utility packages on PkgPulse →
See also: cac vs meow vs arg 2026 and cosmiconfig vs lilconfig vs conf, chalk vs kleur vs colorette (2026).