
File Uploads in Node.js 2026: Multer, S3, Busboy

PkgPulse Team

TL;DR

Multer for Express apps, Busboy for streaming to S3 (no disk writes), Formidable for framework-agnostic parsing. Multer is the most popular (10M+ weekly downloads) and the easiest for basic Express file upload with disk storage. For production, stream directly to S3 with Busboy — never write files to disk on your servers. Both handle multipart/form-data; the difference is where files go and how much memory you use.

Key Takeaways

  • Multer: Express middleware, disk or memory storage, simple API, 10M+ downloads
  • Busboy: Streaming parser, no disk writes, pipe directly to S3 — fastest for large files
  • Formidable: Framework-agnostic, works with any Node.js HTTP server or framework
  • Size limits are critical — always set limits.fileSize to prevent DoS attacks
  • Type validation — check both the MIME type and the file extension; either one alone can be spoofed

Option 1: Multer (Express)

npm install multer
npm install -D @types/multer

Disk Storage

// upload.ts
import multer from 'multer';
import path from 'path';
import crypto from 'crypto';

const storage = multer.diskStorage({
  destination: (req, file, cb) => {
    cb(null, 'uploads/');  // Create this directory
  },
  filename: (req, file, cb) => {
    // Random filename to prevent collisions and path traversal
    const uniqueName = crypto.randomBytes(16).toString('hex');
    const ext = path.extname(file.originalname).toLowerCase();
    cb(null, `${uniqueName}${ext}`);
  },
});

const fileFilter = (req: Express.Request, file: Express.Multer.File, cb: multer.FileFilterCallback) => {
  const allowedMimes = ['image/jpeg', 'image/png', 'image/webp', 'application/pdf'];
  if (allowedMimes.includes(file.mimetype)) {
    cb(null, true);
  } else {
    cb(new Error(`File type not allowed: ${file.mimetype}`));
  }
};

export const upload = multer({
  storage,
  fileFilter,
  limits: {
    fileSize: 10 * 1024 * 1024,  // 10MB max
    files: 5,                     // Max 5 files per request
  },
});
// routes.ts
import express from 'express';
import { upload } from './upload';

const router = express.Router();

// Single file
router.post('/avatar', upload.single('avatar'), (req, res) => {
  if (!req.file) return res.status(400).json({ error: 'No file uploaded' });

  res.json({
    filename: req.file.filename,
    originalName: req.file.originalname,
    size: req.file.size,
    mimeType: req.file.mimetype,
    path: `/uploads/${req.file.filename}`,
  });
});

// Multiple files (same field)
router.post('/photos', upload.array('photos', 5), (req, res) => {
  const files = req.files as Express.Multer.File[];
  res.json({ uploaded: files.map(f => f.filename) });
});

// Multiple fields
router.post('/profile', upload.fields([
  { name: 'avatar', maxCount: 1 },
  { name: 'documents', maxCount: 3 },
]), (req, res) => {
  const files = req.files as { [fieldname: string]: Express.Multer.File[] };
  res.json({
    avatar: files.avatar?.[0]?.filename,
    documents: files.documents?.map(f => f.filename),
  });
});

Memory Storage (for processing before saving)

import sharp from 'sharp';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: process.env.AWS_REGION });

const memoryUpload = multer({
  storage: multer.memoryStorage(),
  limits: { fileSize: 5 * 1024 * 1024 },  // 5MB — careful with memory
});

router.post('/image-resize', memoryUpload.single('image'), async (req, res) => {
  if (!req.file) return res.status(400).json({ error: 'No file' });

  // Process in memory with Sharp
  const resized = await sharp(req.file.buffer)
    .resize(800, 600, { fit: 'inside' })
    .webp({ quality: 80 })
    .toBuffer();

  // Then upload to S3 (AWS SDK v3)
  await s3.send(new PutObjectCommand({
    Bucket: 'my-bucket',
    Key: `images/${Date.now()}.webp`,
    Body: resized,
    ContentType: 'image/webp',
  }));

  res.json({ success: true });
});

Option 2: Busboy (Stream Directly to S3)

npm install busboy @aws-sdk/client-s3 @aws-sdk/lib-storage
npm install -D @types/busboy
// Stream upload directly to S3 — never touches disk
import Busboy from 'busboy';
import { S3Client } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';
import { IncomingMessage } from 'http';
import { PassThrough } from 'stream';

const s3 = new S3Client({ region: 'us-east-1' });

export async function handleUpload(req: IncomingMessage): Promise<{ key: string; size: number }> {
  return new Promise((resolve, reject) => {
    const busboy = Busboy({
      headers: req.headers,  // IncomingHttpHeaders — exactly what Busboy expects
      limits: { fileSize: 50 * 1024 * 1024 },  // 50MB per file
    });

    busboy.on('file', (fieldname, fileStream, info) => {
      const { filename, mimeType } = info;

      // Validate MIME type
      const allowed = ['image/jpeg', 'image/png', 'image/webp'];
      if (!allowed.includes(mimeType)) {
        fileStream.resume();  // Drain the stream
        return reject(new Error(`Type not allowed: ${mimeType}`));
      }

      // Random key — never trust originalname (crypto is global in Node 19+)
      const key = `uploads/${Date.now()}-${crypto.randomUUID()}`;
      const pass = new PassThrough();

      // Stream: client → busboy → passthrough → S3
      const upload = new Upload({
        client: s3,
        params: {
          Bucket: process.env.S3_BUCKET!,
          Key: key,
          Body: pass,
          ContentType: mimeType,
        },
      });

      let size = 0;
      fileStream.on('data', (chunk: Buffer) => { size += chunk.length; });
      // Busboy truncates the stream at limits.fileSize and emits 'limit'
      fileStream.on('limit', () => reject(new Error('File exceeds 50MB limit')));
      fileStream.pipe(pass);

      upload.done()
        .then(() => resolve({ key, size }))
        .catch(reject);
    });

    busboy.on('error', reject);
    req.pipe(busboy);
  });
}
// Express route using busboy
router.post('/upload', (req, res) => {
  handleUpload(req)
    .then(result => res.json({
      url: `https://${process.env.S3_BUCKET}.s3.amazonaws.com/${result.key}`,
      size: result.size,
    }))
    .catch(err => res.status(400).json({ error: err.message }));
});

Option 3: Formidable (Framework-Agnostic)

npm install formidable
npm install -D @types/formidable
// Works with any Node.js HTTP server
import formidable from 'formidable';
import { IncomingMessage } from 'http';

async function parseUpload(req: IncomingMessage) {
  const form = formidable({
    maxFileSize: 10 * 1024 * 1024,  // 10MB
    maxFiles: 3,
    uploadDir: '/tmp/uploads',
    keepExtensions: true,
    filter: ({ mimetype }) => {
      return !!mimetype && mimetype.includes('image');
    },
  });

  const [fields, files] = await form.parse(req);
  return { fields, files };
}

// Express
router.post('/upload', async (req, res) => {
  try {
    const { files } = await parseUpload(req);
    const uploaded = files.file?.map(f => ({
      name: f.originalFilename,
      size: f.size,
      path: f.filepath,
    }));
    res.json({ files: uploaded });
  } catch (err) {
    res.status(400).json({ error: (err as Error).message });
  }
});

Frontend: Uploading Files

// React file upload with progress
import { useState } from 'react';

function FileUpload() {
  const [progress, setProgress] = useState(0);

  const handleUpload = async (file: File) => {
    const formData = new FormData();
    formData.append('avatar', file);

    // Use XMLHttpRequest for progress tracking
    const xhr = new XMLHttpRequest();
    xhr.open('POST', '/api/upload');

    xhr.upload.addEventListener('progress', (event) => {
      if (event.lengthComputable) {
        setProgress(Math.round((event.loaded / event.total) * 100));
      }
    });

    xhr.onload = () => {
      if (xhr.status === 200) {
        const result = JSON.parse(xhr.responseText);
        console.log('Uploaded:', result);
      }
    };

    xhr.send(formData);
  };

  return (
    <div>
      <input
        type="file"
        accept="image/*"
        onChange={(e) => {
          const file = e.target.files?.[0];
          if (file) handleUpload(file);
        }}
      />
      {progress > 0 && <progress value={progress} max={100} />}
    </div>
  );
}

Security Checklist

// Security best practices for file uploads

// 1. Size limits (prevent DoS)
limits: { fileSize: 10 * 1024 * 1024 }  // Always set this

// 2. Type validation — check BOTH mimetype AND extension
const allowedTypes = ['image/jpeg', 'image/png'];
const allowedExts = ['.jpg', '.jpeg', '.png'];
const ext = path.extname(file.originalname).toLowerCase();
if (!allowedTypes.includes(file.mimetype) || !allowedExts.includes(ext)) {
  throw new Error('Invalid file type');
}

// 3. Rename files — never use originalname as filename
// ❌ Bad: uploads/user-photo.jpg (predictable, path traversal risk)
// ✅ Good: uploads/a3f8b2c1d4e5.jpg (random hex)

// 4. Store outside web root (or use S3)
// ❌ /public/uploads/ (directly accessible via URL)
// ✅ /private/uploads/ (serve via API with auth check)
// ✅ S3 with pre-signed URLs (best for production)

// 5. Virus scanning for production (ClamAV or VirusTotal API)
// 6. Image re-encoding strips EXIF and hidden payloads
//    sharp(buffer).jpeg({ quality: 90 }).toBuffer()  ← strips metadata
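Point 4's "serve via API with auth check" has a subtle requirement of its own: the route that reads from /private/uploads/ must refuse names that escape the directory. A minimal sketch of that guard — the `requireAuth` middleware and the route shown in comments are hypothetical placeholders, not part of any library above:

```typescript
import path from 'path';

// Resolve a requested filename inside the uploads directory.
// Returns null if the name would escape it (e.g. '../etc/passwd').
export function resolveUploadPath(baseDir: string, name: string): string | null {
  const resolved = path.resolve(baseDir, name);
  return resolved.startsWith(path.resolve(baseDir) + path.sep) ? resolved : null;
}

// Hypothetical Express usage:
// router.get('/files/:name', requireAuth, (req, res) => {
//   const filePath = resolveUploadPath('/private/uploads', req.params.name);
//   if (!filePath) return res.status(400).json({ error: 'Invalid filename' });
//   res.sendFile(filePath);
// });
```

Comparing resolved absolute paths, rather than filtering `..` substrings, also catches encoded and platform-specific traversal variants.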

When to Choose

| Scenario | Pick | Reason |
|----------|------|--------|
| Express app, files to disk | Multer | Simple, well-documented |
| Streaming to S3, no disk | Busboy | Lowest memory, fastest |
| Non-Express framework | Formidable | Framework-agnostic |
| Memory processing (resize/crop) | Multer memoryStorage | Buffer in req.file.buffer |
| Large files (>50MB) | Busboy | Streaming prevents memory spikes |
| Multiple fields + files | Multer | upload.fields() is clean |

Direct-to-S3 Uploads with Presigned URLs

For production file upload systems handling files larger than a few megabytes, the standard pattern is to bypass your server entirely and upload directly from the browser to S3 using a presigned URL. Your server's job shrinks to generating the presigned URL and optionally verifying the upload completed — it never touches the file bytes themselves.

The flow: the browser requests a presigned URL from your API endpoint, the API generates a time-limited URL using the AWS SDK, the browser then PUTs the file directly to S3 using that URL, and finally the browser notifies your API that the upload is complete (or S3 sends an event notification). This eliminates the two main production problems with server-proxied uploads: memory pressure (large files no longer pass through Node.js memory) and platform limits (Vercel's 4.5MB request body cap and serverless execution timeouts simply don't apply to a browser-to-S3 transfer).

Server-side presigned URL generation with the AWS SDK v3:

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3 = new S3Client({ region: process.env.AWS_REGION });

export async function generatePresignedUrl(filename: string, contentType: string) {
  const key = `uploads/${Date.now()}-${crypto.randomUUID()}`;

  const command = new PutObjectCommand({
    Bucket: process.env.S3_BUCKET!,
    Key: key,
    ContentType: contentType,
    Metadata: { 'original-filename': filename },
    // Note: a presigned PUT cannot enforce a size limit by itself.
    // Use a presigned POST with a content-length-range condition,
    // or validate the object size after upload and delete oversized files.
  });

  const url = await getSignedUrl(s3, command, {
    expiresIn: 900, // 15 minutes
  });

  return { url, key };
}

Browser-side upload with fetch:

const { url, key } = await fetch('/api/upload-url', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ filename: file.name, contentType: file.type }),
}).then(r => r.json());

await fetch(url, {
  method: 'PUT',
  body: file,
  headers: { 'Content-Type': file.type },
});

One required setup step: S3 CORS must be configured to allow browser-origin requests. Minimal CORS config for the bucket:

[{
  "AllowedHeaders": ["*"],
  "AllowedMethods": ["PUT", "POST"],
  "AllowedOrigins": ["https://yourdomain.com"],
  "ExposeHeaders": ["ETag"]
}]

If configuring AWS feels like overkill, UploadThing and Cloudflare R2 both offer presigned-URL upload APIs with simpler developer experience and competitive pricing. UploadThing in particular has a TypeScript SDK that handles the presigned URL lifecycle, file type validation, and size enforcement in one package.


Progress Tracking and Resumable Uploads

The native fetch API does not expose upload progress events — this is a well-known limitation and remains true in 2026. For a simple file upload where you do not need to show progress, fetch is fine. For anything where the user needs feedback on a slow upload, you need a different approach.

The most direct solution is axios, which wraps XMLHttpRequest and exposes onUploadProgress:

import axios from 'axios';

await axios.post('/api/upload', formData, {
  onUploadProgress: (event) => {
    const percent = Math.round((event.loaded / (event.total ?? 1)) * 100);
    setProgress(percent);
  },
});

For files above roughly 10MB, showing a progress bar is table stakes for good UX. For files above 100MB — video files, large datasets, disk images — simple multipart upload breaks down because any network interruption forces the user to restart. The solution is the tus resumable upload protocol and its tus-js-client library, which breaks the upload into chunks and tracks which chunks have been confirmed by the server. If the connection drops, the next attempt resumes from the last confirmed chunk rather than from zero.

import { Upload } from 'tus-js-client';

const upload = new Upload(file, {
  endpoint: 'https://tusd.tusdemo.net/files/',
  retryDelays: [0, 1000, 3000, 5000],
  metadata: { filename: file.name, filetype: file.type },
  onProgress: (loaded, total) => {
    setProgress(Math.round((loaded / total) * 100));
  },
  onSuccess: () => console.log('Upload complete'),
});

upload.start();

The practical thresholds: files under 10MB — use fetch or axios with no progress UI needed. Files 10-100MB — show a progress bar using axios. Files over 100MB — use tus-js-client or UploadThing's resumable upload feature.

For all upload scenarios, always attach an AbortController or upload.abort() call to a cancel button. Users who accidentally select the wrong file should not be forced to wait for a 50MB transfer to complete before trying again.
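A minimal sketch of that cancel wiring, assuming a hypothetical /api/upload endpoint — both fetch and axios (via its `signal` option) accept an AbortSignal:

```typescript
// Create an upload handle whose cancel() aborts the in-flight request.
export function makeCancellableUpload(url: string, file: Blob) {
  const controller = new AbortController();
  const start = () => {
    const formData = new FormData();
    formData.append('file', file);
    // An aborted fetch rejects with an AbortError — catch it in the caller.
    return fetch(url, { method: 'POST', body: formData, signal: controller.signal });
  };
  return { start, cancel: () => controller.abort(), signal: controller.signal };
}
```

Wire `cancel` to the button's onClick and treat the resulting AbortError as a user action, not a failure.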


Production File Storage Architecture

The storage choice — disk, memory, or object storage — is the most consequential decision in file upload system design. Each has a different risk profile that becomes apparent only at scale.

Local disk storage is the simplest to implement but the worst choice for any production deployment. The problems are compounding: if your server scales horizontally, uploads to one instance are invisible to others (the uploaded file lives on server 1, but the user's next request might hit server 2). Disk fills up over time, requiring monitoring and cleanup jobs. Cloud platforms like Vercel, Railway, and Fly.io run your code in ephemeral containers where writes to disk do not persist across restarts. And backups become complicated — you need to backup both your database and a separate filesystem. For local development and single-server hobby projects, disk is fine. Production environments with any expectations of scale should use object storage from day one.

Memory storage is appropriate only for processing-then-forward patterns: accept the file, run a transformation (resize an image, extract metadata, convert format), and either forward to S3 or return a processed result. The risk is memory exhaustion — Node.js heaps are not designed for large binary buffers. A server handling 20 concurrent 10MB uploads with memory storage holds 200MB of file data in heap simultaneously. Set memory limits carefully and pair memory storage with a strict per-file size cap.

S3-compatible object storage is the correct default for production file uploads. AWS S3, Cloudflare R2, and Backblaze B2 all expose the same API. The cost structure has converged: most providers charge per-GB storage per month (~$0.02-0.023) and per-operation. Cloudflare R2's zero egress fee makes it meaningfully cheaper than S3 for read-heavy workloads. For most applications, the cost difference between providers is negligible at early scale, so optimizing for provider compatibility (S3 API compatibility) over cost is the right initial priority.

CDN integration is the complementary layer to object storage. Serving files directly from an S3 bucket URL is functional but slow for geographically distributed users and costly for high-traffic assets. A CDN like Cloudflare (free tier is generous), AWS CloudFront, or Bunny CDN caches files at edge locations globally. For images specifically, services like Cloudflare Images and imgix add on-the-fly resizing and format conversion at the CDN layer, eliminating the need for Sharp in your application layer entirely. The architecture becomes: upload to S3, serve via CDN with URL parameters for dimensions.

Handling Serverless Upload Constraints

Serverless environments introduce hard limits that change how file upload architecture works. Vercel's serverless functions have a maximum request body size of 4.5MB and a maximum execution time of 60 seconds (Pro plan). AWS Lambda's API Gateway has a 10MB request limit. These limits make server-proxied file uploads genuinely problematic at scale — a user uploading a 25MB video cannot upload through a serverless function at all.

The solution is presigned URLs for direct browser-to-S3 uploads, as covered in the section above. But it requires adjusting your mental model: your API server's role in file uploads changes from "receive and store" to "authorize and track." The API generates a presigned URL (a fast, simple operation well within serverless timeout limits), the browser uploads directly to S3 (bypassing your server entirely), and then either S3 event notifications or a follow-up browser request tells your API to record the completed upload.

One additional consideration for serverless file uploads: reading the request body for validation before proxying to S3 breaks streaming. If you need to validate the file type before accepting the upload, you have two options: validate client-side before sending (acceptable for type checks, not sufficient for security), or use S3's object metadata after upload and delete invalid files asynchronously. The second approach is more secure but requires a cleanup step. Lambda and Vercel functions that try to buffer a large request body for validation before re-streaming to S3 will exhaust memory or timeout — this is the most common failure mode in naive serverless upload implementations.
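The second approach can be sketched against a minimal S3-like interface — with AWS SDK v3 you would back it with HeadObjectCommand and DeleteObjectCommand; the `ObjectStore` interface here is a hypothetical stand-in so the validation logic stays visible:

```typescript
// Minimal slice of the S3 surface this check needs (hypothetical interface).
interface ObjectStore {
  head(bucket: string, key: string): Promise<{ contentType?: string; size: number }>;
  delete(bucket: string, key: string): Promise<void>;
}

const ALLOWED = ['image/jpeg', 'image/png', 'image/webp'];
const MAX_SIZE = 50 * 1024 * 1024;

// Validate an object after a direct-to-S3 upload; delete it if invalid.
// Returns true if the object was kept.
export async function validateUploadedObject(
  store: ObjectStore, bucket: string, key: string
): Promise<boolean> {
  const meta = await store.head(bucket, key);
  const ok = !!meta.contentType && ALLOWED.includes(meta.contentType) && meta.size <= MAX_SIZE;
  if (!ok) await store.delete(bucket, key);
  return ok;
}
```

Run this from an S3 event notification handler so invalid uploads are cleaned up even when the browser never reports back.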

Chunked uploads are the alternative for environments where direct-to-S3 is not practical. Breaking a large file into 5MB chunks and uploading them serially allows each chunk to complete within the serverless timeout window. The server reassembles the chunks after all have arrived. This is more complex to implement correctly (tracking chunk state, handling partial failures) but avoids the CORS and presigned URL infrastructure required for direct-to-S3. UploadThing handles this chunking automatically, which is why it has become popular for Next.js applications that want serverless-compatible file uploads without implementing the chunking logic themselves. For most teams, UploadThing or a similar managed upload service represents the right level of abstraction over the raw Busboy/S3 combination.
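The chunk bookkeeping itself is straightforward; a sketch of the byte-range math plus a serial upload loop (/api/chunk and its query parameters are placeholders, not a real API):

```typescript
export interface ChunkRange { index: number; start: number; end: number }

// Split a file of `totalSize` bytes into ranges of at most `chunkSize` bytes.
// `end` is exclusive, matching Blob.slice(start, end).
export function chunkRanges(totalSize: number, chunkSize = 5 * 1024 * 1024): ChunkRange[] {
  const ranges: ChunkRange[] = [];
  for (let start = 0, index = 0; start < totalSize; start += chunkSize, index++) {
    ranges.push({ index, start, end: Math.min(start + chunkSize, totalSize) });
  }
  return ranges;
}

// Hypothetical serial upload: one chunk per request, each well within
// serverless body-size and timeout limits.
export async function uploadInChunks(file: Blob, uploadId: string) {
  for (const { index, start, end } of chunkRanges(file.size)) {
    await fetch(`/api/chunk?uploadId=${uploadId}&index=${index}`, {
      method: 'PUT',
      body: file.slice(start, end),
    });
  }
}
```

A production version also needs retry on per-chunk failure and server-side tracking of which indexes have arrived — exactly the state machine tus and UploadThing implement for you.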


Compare file upload package health on PkgPulse.

See also: Express vs NestJS and Express vs Koa, Decline of Express: What Developers Are Switching To.

The 2026 JavaScript Stack Cheatsheet

One PDF: the best package for every category (ORMs, bundlers, auth, testing, state management). Used by 500+ devs. Free, updated monthly.