TL;DR
Chromatic is the visual testing platform for Storybook — automatic snapshot testing, component-level diffing, UI review workflows, built by the Storybook team. Percy (by BrowserStack) is the visual review platform — pixel-by-pixel comparison, responsive testing, cross-browser, CI integration, works with any framework. Applitools is the AI-powered visual testing platform — Visual AI that ignores irrelevant differences, cross-browser, Ultrafast Grid, enterprise-grade visual validation. In 2026: Chromatic for Storybook component testing, Percy for cross-browser visual regression, Applitools for AI-powered visual validation.
Key Takeaways
- Chromatic: chromatic ~200K weekly downloads — Storybook-native, component snapshots, UI review
- Percy: @percy/cli ~100K weekly downloads — pixel comparison, responsive, cross-browser
- Applitools: @applitools/eyes-cypress ~50K weekly downloads — Visual AI, Ultrafast Grid
- Chromatic integrates natively with Storybook for component-level testing
- Percy provides the most straightforward pixel-by-pixel comparison
- Applitools uses AI to detect only meaningful visual changes
Chromatic
Chromatic — visual testing for Storybook:
Setup
npm install --save-dev chromatic
# First run (connect to Chromatic):
npx chromatic --project-token=YOUR_TOKEN
Storybook stories (test targets)
// src/components/PackageCard.stories.tsx
import type { Meta, StoryObj } from "@storybook/react"
import { userEvent, within } from "@storybook/test"
import { PackageCard } from "./PackageCard"

const meta: Meta<typeof PackageCard> = {
  component: PackageCard,
  title: "Components/PackageCard",
  parameters: {
    chromatic: {
      viewports: [375, 768, 1200], // Test at multiple viewports
      delay: 300, // Wait 300ms before snapshot
    },
  },
}
export default meta

type Story = StoryObj<typeof PackageCard>

export const Default: Story = {
  args: {
    name: "react",
    description: "UI library for building interfaces",
    downloads: 25000000,
    version: "19.0.0",
    tags: ["frontend", "ui"],
  },
}

export const WithLongDescription: Story = {
  args: {
    ...Default.args,
    description: "A very long description that tests how the card handles overflow text in the description area when content exceeds the expected length",
  },
}

export const NoTags: Story = {
  args: {
    ...Default.args,
    tags: [],
  },
}

export const Loading: Story = {
  args: {
    ...Default.args,
    isLoading: true,
  },
}

// Interaction test (Chromatic snapshots after the play function runs):
export const Expanded: Story = {
  args: Default.args,
  play: async ({ canvasElement }) => {
    const canvas = within(canvasElement)
    await userEvent.click(canvas.getByRole("button", { name: "Show Details" }))
  },
}
CI integration
# .github/workflows/chromatic.yml
name: Chromatic
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  chromatic:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Required for Chromatic to detect changes
      - uses: pnpm/action-setup@v4 # Install pnpm so setup-node can cache it
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: pnpm
      - run: pnpm install --frozen-lockfile
      - name: Run Chromatic
        uses: chromaui/action@latest
        with:
          projectToken: ${{ secrets.CHROMATIC_PROJECT_TOKEN }}
          exitZeroOnChanges: true # Don't fail CI on visual changes
          autoAcceptChanges: main # Auto-accept on main branch
          onlyChanged: true # Only test changed stories
Configuration
// .storybook/main.ts
import type { StorybookConfig } from "@storybook/react-vite"

const config: StorybookConfig = {
  stories: ["../src/**/*.stories.@(js|jsx|ts|tsx)"],
  addons: ["@storybook/addon-essentials"],
  framework: "@storybook/react-vite",
}
export default config

// Component-level Chromatic settings:

// Ignore specific stories:
export const AnimatedComponent: Story = {
  parameters: {
    chromatic: { disableSnapshot: true }, // Skip this story
  },
}

// Diff threshold:
export const SubtleChange: Story = {
  parameters: {
    chromatic: { diffThreshold: 0.2 }, // Raise tolerance for minor pixel noise (0-1 scale; default 0.063)
  },
}

// Pause animations:
export const WithAnimation: Story = {
  parameters: {
    chromatic: { pauseAnimationAtEnd: true },
  },
}
Percy
Percy — visual review platform:
Setup with Cypress
npm install --save-dev @percy/cli @percy/cypress
// cypress/e2e/visual.cy.ts
// (requires `import "@percy/cypress"` in cypress/support/e2e.ts)
describe("Package Comparison Page", () => {
  it("renders the comparison table", () => {
    cy.visit("/compare?packages=react,vue")
    cy.get("[data-testid=comparison-table]").should("be.visible")
    cy.percySnapshot("Comparison Table")
  })

  it("renders dark mode", () => {
    cy.visit("/compare?packages=react,vue")
    cy.get("[data-testid=theme-toggle]").click()
    cy.percySnapshot("Comparison Table - Dark Mode")
  })

  it("renders responsive layouts", () => {
    cy.visit("/compare?packages=react,vue")
    cy.percySnapshot("Comparison - Desktop", { widths: [1280] })
    cy.percySnapshot("Comparison - Tablet", { widths: [768] })
    cy.percySnapshot("Comparison - Mobile", { widths: [375] })
  })

  it("renders loading state", () => {
    cy.intercept("GET", "/api/packages/*", { delay: 10000 }).as("slowApi")
    cy.visit("/compare?packages=react,vue")
    cy.percySnapshot("Comparison - Loading State")
  })
})
Setup with Playwright
npm install --save-dev @percy/cli @percy/playwright
// tests/visual.spec.ts
import { test } from "@playwright/test"
import percySnapshot from "@percy/playwright"

test.describe("Visual regression tests", () => {
  test("homepage", async ({ page }) => {
    await page.goto("/")
    await page.waitForSelector("[data-testid=package-list]")
    await percySnapshot(page, "Homepage")
  })

  test("package detail page", async ({ page }) => {
    await page.goto("/packages/react")
    await page.waitForSelector("[data-testid=package-detail]")
    await percySnapshot(page, "Package Detail - React")
  })

  test("search results", async ({ page }) => {
    await page.goto("/")
    await page.fill("[data-testid=search-input]", "state management")
    await page.waitForSelector("[data-testid=search-results]")
    await percySnapshot(page, "Search Results - State Management")
  })

  test("responsive comparison", async ({ page }) => {
    await page.goto("/compare?packages=react,vue,svelte")
    // Percy handles responsive widths:
    await percySnapshot(page, "Three-way Comparison", {
      widths: [375, 768, 1024, 1440],
    })
  })
})
CI integration
# .github/workflows/percy.yml
name: Percy Visual Tests
on:
  pull_request:
    branches: [main]
jobs:
  visual-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4 # Install pnpm so setup-node can cache it
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: pnpm
      - run: pnpm install --frozen-lockfile
      - run: pnpm build
      - name: Percy with Playwright
        run: npx percy exec -- npx playwright test tests/visual/
        env:
          PERCY_TOKEN: ${{ secrets.PERCY_TOKEN }}
# .percy.yml (project-level Percy configuration)
version: 2
snapshot:
  widths: [375, 768, 1280]
  min-height: 1024
  percy-css: |
    .animated { animation: none !important; }
    [data-testid="timestamp"] { visibility: hidden; }
Storybook integration
npm install --save-dev @percy/storybook
# Snapshot all Storybook stories:
npx percy storybook ./storybook-static
# With filtering:
npx percy storybook ./storybook-static \
--include="Components/**" \
--exclude="**/Playground*"
// In stories — Percy-specific parameters:
export const Default: Story = {
  args: { /* ... */ },
  parameters: {
    percy: {
      widths: [375, 768, 1280],
      skip: false,
      additionalSnapshots: [
        { name: "Dark Mode", args: { theme: "dark" } },
      ],
    },
  },
}
Applitools
Applitools — AI-powered visual testing:
Setup with Cypress
npm install --save-dev @applitools/eyes-cypress
npx eyes-setup
// cypress/e2e/visual.cy.ts
describe("Visual AI Testing", () => {
  beforeEach(() => {
    cy.eyesOpen({
      appName: "PkgPulse",
      batchName: "Package Comparison",
    })
  })

  afterEach(() => {
    cy.eyesClose()
  })

  it("validates comparison page layout", () => {
    cy.visit("/compare?packages=react,vue")
    cy.get("[data-testid=comparison-table]").should("be.visible")
    // Visual AI check — ignores irrelevant differences:
    cy.eyesCheckWindow({
      tag: "Comparison Table",
      target: "window",
      fully: true, // Capture full page
    })
  })

  it("validates with different match levels", () => {
    cy.visit("/packages/react")
    // Strict — pixel-perfect:
    cy.eyesCheckWindow({
      tag: "Package Detail - Strict",
      matchLevel: "Strict",
    })
    // Layout — only structure matters:
    cy.eyesCheckWindow({
      tag: "Package Detail - Layout",
      matchLevel: "Layout",
    })
    // Content — text content must match:
    cy.eyesCheckWindow({
      tag: "Package Detail - Content",
      matchLevel: "Content",
    })
  })

  it("validates responsive across browsers", () => {
    cy.visit("/")
    // Ultrafast Grid — test across browsers and viewports:
    cy.eyesCheckWindow({
      tag: "Homepage",
      target: "window",
      fully: true,
    })
  })
})

// applitools.config.js
module.exports = {
  testConcurrency: 5,
  browser: [
    // Ultrafast Grid — all rendered in parallel:
    { width: 1200, height: 800, name: "chrome" },
    { width: 1200, height: 800, name: "firefox" },
    { width: 1200, height: 800, name: "safari" },
    { width: 768, height: 1024, name: "chrome" },
    { deviceName: "iPhone 14" }, // mobile device emulation
  ],
  batchName: "PkgPulse Visual Tests",
}
Setup with Playwright
npm install --save-dev @applitools/eyes-playwright
// tests/visual-ai.spec.ts
import { test } from "@playwright/test"
import { BatchInfo, Configuration, Eyes, Target } from "@applitools/eyes-playwright"

let eyes: Eyes

test.beforeAll(async () => {
  const config = new Configuration()
  config.setBatch(new BatchInfo("PkgPulse"))
  config.addBrowser(1200, 800, "chrome")
  config.addBrowser(1200, 800, "firefox")
  config.addBrowser(768, 1024, "chrome")
  config.addDeviceEmulation("iPhone 14")
  eyes = new Eyes()
  eyes.setConfiguration(config)
})

test.afterAll(async () => {
  await eyes.abortAsync()
})

test("homepage visual validation", async ({ page }) => {
  await eyes.open(page, "PkgPulse", "Homepage")
  await page.goto("/")
  await page.waitForSelector("[data-testid=package-list]")
  // AI check — full page:
  await eyes.check("Homepage", Target.window().fully())
  // Check specific region:
  await eyes.check(
    "Navigation",
    Target.region("[data-testid=navbar]")
  )
  // Check with floating regions (ignore dynamic content):
  await eyes.check(
    "Package List",
    Target.region("[data-testid=package-list]")
      .floating({ element: "[data-testid=download-count]", maxOffset: { top: 5, bottom: 5 } })
      .ignore("[data-testid=timestamp]")
  )
  await eyes.close()
})

test("dark mode validation", async ({ page }) => {
  await eyes.open(page, "PkgPulse", "Dark Mode")
  await page.goto("/")
  await page.click("[data-testid=theme-toggle]")
  await eyes.check("Dark Mode Homepage", Target.window().fully())
  await eyes.close()
})
CI integration
# .github/workflows/applitools.yml
name: Applitools Visual Tests
on:
  pull_request:
    branches: [main]
jobs:
  visual-ai:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4 # Install pnpm so setup-node can cache it
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: pnpm
      - run: pnpm install --frozen-lockfile
      - run: pnpm build
      - name: Run Applitools tests
        run: npx playwright test tests/visual-ai/
        env:
          APPLITOOLS_API_KEY: ${{ secrets.APPLITOOLS_API_KEY }}
Feature Comparison
| Feature | Chromatic | Percy | Applitools |
|---|---|---|---|
| Testing approach | Component (Storybook) | Page/component | Page/component |
| Comparison method | Pixel diff | Pixel diff | Visual AI |
| Storybook integration | ✅ (native) | ✅ (addon) | ✅ (addon) |
| Playwright support | Via Storybook | ✅ (native) | ✅ (native) |
| Cypress support | Via Storybook | ✅ (native) | ✅ (native) |
| Cross-browser | Chrome, Firefox, Safari, Edge | Chrome, Firefox, Safari | 100+ (Ultrafast Grid) |
| Responsive testing | ✅ (viewports) | ✅ (widths) | ✅ (devices + browsers) |
| AI-powered | ❌ | ❌ | ✅ (Visual AI) |
| Ignore regions | ✅ | ✅ | ✅ (AI auto-detect) |
| Match levels | Pixel | Pixel | Strict, Layout, Content |
| UI review workflow | ✅ (built-in) | ✅ (dashboard) | ✅ (dashboard) |
| PR integration | ✅ (GitHub) | ✅ (GitHub, GitLab) | ✅ (GitHub, GitLab) |
| Free tier | 5K snapshots/month | 5K snapshots/month | Free for OSS |
| Pricing | Snapshot-based | Snapshot-based | Checkpoint-based |
When to Use Each
Use Chromatic if:
- Using Storybook for component development
- Want component-level visual testing integrated with stories
- Need UI review workflows for design approvals
- Prefer the tightest Storybook integration
Use Percy if:
- Need page-level visual regression with any testing framework
- Want straightforward pixel-by-pixel comparison
- Building cross-browser visual tests with Playwright or Cypress
- Prefer a simple, framework-agnostic visual testing solution
Use Applitools if:
- Need AI-powered visual testing that ignores irrelevant differences
- Want to test across 100+ browser/device combinations
- Building enterprise applications that need comprehensive visual validation
- Prefer Layout match level to avoid false positives from dynamic content
Snapshot Management and Review Workflows
The day-to-day workflow for visual regression testing centers on how teams review and approve visual changes in pull requests. Chromatic integrates directly with GitHub, GitLab, and Bitbucket to post PR comments showing which stories changed and link to a visual diff review UI where reviewers can accept or reject individual changes. The workflow is optimized for component-level granularity — each story that changes gets its own approval decision, which is efficient when you have hundreds of components and only a few change per PR. Percy's review workflow is page-level, showing a side-by-side or diff overlay for each snapshot, and decisions are made per-snapshot in the Percy dashboard with the PR check reflecting the aggregated status. Applitools provides the most granular review tooling: the Test Manager groups test results by baseline, and the AI match engine presents only the differences it considers significant, filtering out rendering noise automatically. This AI filtering meaningfully reduces the review burden for teams with highly dynamic content.
Handling Dynamic Content and Flaky Snapshots
Dynamic content — timestamps, random user avatars, animated components, lazy-loaded images — is the primary cause of false positive failures in visual regression testing. Each platform handles this differently. In Chromatic, the recommended approach is to use chromatic: { pauseAnimationAtEnd: true } for animated components and mock data with fixed values in stories. The disableSnapshot parameter completely skips a story that cannot be made deterministic, which is a pragmatic escape hatch. Percy handles dynamic regions through its percyCSS configuration, where you can hide elements by selector during snapshot capture — [data-testid="timestamp"] { visibility: hidden } is a common pattern. Applitools' Visual AI approach is architecturally different: rather than hiding elements, you define floating regions (areas where small position shifts are acceptable) and ignore regions (areas that change but shouldn't affect the test result), and the AI learns to distinguish meaningful layout changes from rendering variations over time.
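A complementary tactic that works with all three platforms is making the fixture data itself deterministic, so there is nothing to hide or ignore in the first place. A minimal sketch; the `PackageCardProps` shape and field names here are hypothetical, not from any real codebase:

```typescript
// Hypothetical fixture helper: pin every dynamic value so snapshots
// are identical across runs. Field names are illustrative.
interface PackageCardProps {
  name: string
  downloads: number
  updatedAt: string // ISO timestamp shown on the card
  avatarSeed: number // used instead of Math.random() for avatars
}

function fixedCardProps(
  overrides: Partial<PackageCardProps> = {},
): PackageCardProps {
  return {
    name: "react",
    downloads: 25_000_000, // fixed, never fetched live
    updatedAt: "2026-02-01T00:00:00.000Z", // fixed, never new Date()
    avatarSeed: 42, // fixed seed instead of randomness
    ...overrides,
  }
}
```

Stories and page fixtures built from a helper like this render identically on every run, which shrinks the set of regions that need platform-level ignore rules.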
Cost Optimization and Snapshot Budgets
Visual regression testing at scale can become expensive if snapshot counts are not managed carefully. Chromatic charges per snapshot, and the most common cost control strategy is the onlyChanged flag, which only captures stories that have changed files in the current commit. This dramatically reduces the snapshot count on PRs that touch only a few components while maintaining full coverage on the main branch. Percy's pricing is also per-snapshot, and using width-specific snapshots rather than testing every breakpoint for every page is an effective budget control — prioritize mobile, tablet, and desktop for high-traffic pages and test only desktop for admin or low-traffic routes. Applitools charges per checkpoint (equivalent to a snapshot in a specific browser/viewport combination), and their Ultrafast Grid's parallel browser testing can multiply the checkpoint count quickly when testing multiple browsers. For budget-conscious teams, starting with a single browser (Chrome) and expanding coverage only where cross-browser issues have been historically problematic is a pragmatic approach.
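The arithmetic behind snapshot budgets is plain multiplication, which makes costs easy to estimate before adopting a tool. A toy calculator; this counting model is a simplification, and real billing rules vary by vendor and plan:

```typescript
// Toy estimate: snapshots (or Applitools checkpoints) per CI build.
// Simplified model; actual billing varies by vendor and plan.
function snapshotsPerBuild(
  stories: number,
  viewports: number,
  browsers: number = 1,
): number {
  return stories * viewports * browsers
}

// 200 stories at 3 viewports in Chrome only:
const chromeOnly = snapshotsPerBuild(200, 3) // 600 per full build
// The same suite across 4 browsers quadruples the count:
const fourBrowsers = snapshotsPerBuild(200, 3, 4) // 2400 per full build
```

Running this estimate against the free-tier limits in the comparison table above shows why flags like `onlyChanged` matter: a 600-snapshot full build exhausts a 5K/month allowance in under ten runs.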
Integration with Design Systems and Component Libraries
Visual regression testing is particularly valuable for design system teams that need to validate that component library changes don't unexpectedly affect consumers. Chromatic is purpose-built for this use case — design system packages can run Chromatic in isolation, and the team can use the Storybook composition feature to embed consumer application stories alongside design system stories in a single Chromatic project. This allows the design system team to see exactly how a component change propagates visually through consuming applications before merging. Percy and Applitools are also effective for design system validation, typically integrated at the application level where end-to-end tests capture page-level regressions that include design system components in their natural context. For design system publishers who maintain npm packages consumed by multiple teams, Chromatic's per-project story-level tracking with cross-project story embedding provides the deepest visibility into the visual impact of library changes.
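The composition setup mentioned above is configured through the `refs` field of `.storybook/main.ts`, which surfaces another published Storybook inside the design system's Storybook UI. A sketch, with a placeholder URL for the consumer app's deployed Storybook:

```typescript
// .storybook/main.ts: design system Storybook that embeds a consumer
// app's published Storybook via composition. The URL is a placeholder.
import type { StorybookConfig } from "@storybook/react-vite"

const config: StorybookConfig = {
  stories: ["../src/**/*.stories.@(ts|tsx)"],
  framework: "@storybook/react-vite",
  refs: {
    "consumer-app": {
      title: "Consumer App",
      url: "https://consumer-app-storybook.example.com", // placeholder
    },
  },
}
export default config
```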
Performance Impact on CI Pipeline Duration
Visual testing adds meaningful time to CI pipelines, and understanding the performance characteristics of each platform helps with pipeline optimization. Chromatic uploads story snapshots to its cloud infrastructure for rendering, which means CI pipeline time depends primarily on the Storybook build time and file upload time, not on local browser execution. The onlyChanged optimization can reduce build-to-upload time by skipping unchanged stories. Percy executes snapshot capture locally in your CI runner (using Playwright or Cypress), which adds browser startup and page rendering time but gives more control over test parallelization. Applitools Ultrafast Grid moves rendering to Applitools' cloud infrastructure — your CI runner just sends DOM snapshots rather than rendered images, and Applitools renders them in parallel across all configured browsers and viewports simultaneously. For multi-browser visual testing, Applitools' upload-and-render approach typically produces faster total pipeline times than local browser testing with Percy when testing four or more browser configurations.
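The trade-off can be sketched with a toy timing model. The minute values below are illustrative assumptions for reasoning about the shape of the curve, not measured vendor benchmarks:

```typescript
// Toy model: local capture renders each browser in the CI runner,
// roughly serially; a cloud grid pays one DOM upload, then renders
// all browsers in parallel. All minute values are assumptions.
function localCaptureMinutes(browsers: number, renderMin: number = 3): number {
  return browsers * renderMin // each browser rendered in the runner
}

function cloudGridMinutes(uploadMin: number = 2, renderMin: number = 3): number {
  return uploadMin + renderMin // parallel render: one browser's worth
}

// One browser: local capture is cheaper (3 min vs 5 min).
// Four browsers: the grid pulls ahead (5 min vs 12 min), and its
// cost stays flat as browser configurations are added.
```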
Baseline Management and Branch Strategy
Visual regression tools maintain baselines — reference snapshots that new snapshots are compared against — and how those baselines are managed across feature branches shapes the day-to-day workflow for development teams. Chromatic's baseline management is tightly integrated with git: the baseline for a feature branch is the accepted snapshot from the branch's merge-base with main, so feature branches always compare against the most recently accepted version of each story at the branch point, not against current main. This prevents false positives when multiple feature branches are in flight simultaneously, each making independent visual changes. Percy uses a similar ancestry-based baseline approach. Applitools' Baseline Cloud stores baselines keyed by test name, branch name, and an optional environment tag, and provides explicit APIs for accepting or cloning baselines across branches. For teams whose main branch merges visual changes frequently, the branch-aware baseline model significantly reduces the volume of spurious visual diffs that need human review: a diff that appears on every feature branch because the main baseline changed is far less useful than a diff that appears only on the branch that caused it.
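The ancestry rule can be illustrated with a simplified lookup: walk back from the feature branch's merge-base through main's history until a commit with an accepted snapshot is found. This is a conceptual model of the behavior described above, not any vendor's actual implementation:

```typescript
// Simplified ancestry-based baseline selection. `historyFromMergeBase`
// lists commit SHAs starting at the merge-base and walking backward
// through main's history; `accepted` maps SHA -> approved snapshot id.
function baselineFor(
  historyFromMergeBase: string[],
  accepted: Map<string, string>,
): string | undefined {
  for (const sha of historyFromMergeBase) {
    const snapshot = accepted.get(sha)
    if (snapshot !== undefined) return snapshot // newest accepted ancestor
  }
  return undefined // no baseline yet: every snapshot is new
}

// A branch cut from commit "c3" compares against the snapshot accepted
// at "c2", even if main has since accepted a newer one at "c5":
const accepted = new Map([["c2", "snap-2"], ["c5", "snap-5"]])
baselineFor(["c3", "c2", "c1"], accepted) // "snap-2"
```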
Methodology
Download data from npm registry (weekly average, February 2026). Feature comparison based on chromatic v11.x, @percy/cli v1.x, and @applitools/eyes-cypress v3.x.
Compare testing tools and developer tooling on PkgPulse →
See also: Lit vs Svelte, AVA vs Jest, and Bun Test vs Vitest vs Jest 2026: Speed Compared.