
Lighthouse CI: Catch Performance Regressions Before They Ship

Set up Lighthouse CI to automatically measure Core Web Vitals on every pull request and block regressions from merging.

February 8, 2026 · TechTwitter.io · Tags: web-performance, lighthouse, ci-cd, automation

Why Lighthouse CI?

Running Lighthouse manually in Chrome DevTools is reactive: you catch problems after they're already deployed. Lighthouse CI makes performance a gate: PRs that regress your Core Web Vitals get blocked before they merge.


Setup in 10 Minutes

1. Install

npm install -D @lhci/cli

2. Create lighthouserc.json

{
  "ci": {
    "collect": {
      "url": ["http://localhost:3000/", "http://localhost:3000/about"],
      "startServerCommand": "npm run start",
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["warn", { "minScore": 0.9 }],
        "categories:accessibility": ["error", { "minScore": 0.9 }],
        "first-contentful-paint": ["warn", { "maxNumericValue": 2000 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "total-blocking-time": ["warn", { "maxNumericValue": 300 }]
      }
    },
    "upload": {
      "target": "temporary-public-storage"
    }
  }
}

3. Add GitHub Actions workflow

# .github/workflows/lighthouse.yml
name: Lighthouse CI

on:
  pull_request:
    branches: [main]

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Build application
        run: npm run build

      - name: Run Lighthouse CI
        run: npx lhci autorun
        env:
          LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}
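The LHCI_GITHUB_APP_TOKEN above comes from installing the Lighthouse CI GitHub App on your repository. If you'd rather not install the app, LHCI can also post status checks using a personal access token via the LHCI_GITHUB_TOKEN environment variable. A sketch (the GITHUB_PAT secret name is an assumption about your repo settings):

```yaml
      - name: Run Lighthouse CI
        run: npx lhci autorun
        env:
          # Personal access token alternative to the GitHub App
          LHCI_GITHUB_TOKEN: ${{ secrets.GITHUB_PAT }}
```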

Understanding Assertion Levels

{
  "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }]
}
  • "error" โ€” fails the CI check (blocks PR merge)
  • "warn" โ€” posts a warning but doesn't block

Use error for the metrics you care about most (LCP, CLS), warn for softer targets.
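Assertions also accept a third level, "off", which disables an audit entirely. This is handy for silencing checks that don't apply to your setup. A minimal sketch (the audits chosen here are just examples):

```json
{
  "ci": {
    "assert": {
      "assertions": {
        "uses-http2": "off",
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }]
      }
    }
  }
}
```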


Local Development Workflow

Run Lighthouse CI locally to debug before pushing:

# Build your app
npm run build

# Start it on port 3000
npm run start &

# Run LHCI
npx lhci collect --url=http://localhost:3000
npx lhci assert
npx lhci open  # opens the HTML report

# Stop the background server when done
kill %1

Advanced: Performance Budgets

Set budgets on specific resource types:

{
  "ci": {
    "assert": {
      "budgets": [
        {
          "path": "/*",
          "timings": [
            { "metric": "interactive", "budget": 5000 }
          ],
          "resourceSizes": [
            { "resourceType": "script", "budget": 300 },
            { "resourceType": "image", "budget": 500 },
            { "resourceType": "total", "budget": 1000 }
          ]
        }
      ]
    }
  }
}

This blocks PRs that push your JavaScript bundle over 300KB or total page weight over 1MB.
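Budgets can cap request counts as well as sizes. A sketch using resourceCounts, which limits how many requests of a given type a page may make (the specific limits here are illustrative):

```json
{
  "ci": {
    "assert": {
      "budgets": [
        {
          "path": "/*",
          "resourceCounts": [
            { "resourceType": "third-party", "budget": 10 },
            { "resourceType": "script", "budget": 15 }
          ]
        }
      ]
    }
  }
}
```

This would fail a page that loads more than 10 third-party requests or more than 15 scripts, which catches tag-manager sprawl before it compounds.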


Persistent Storage with LHCI Server

For team setups where you want to track trends over time:

# Self-host LHCI server with Docker
docker run -d \
  -p 9001:9001 \
  --name lhci-server \
  patrickhulce/lhci-server

# Then in lighthouserc.json (note the "ci" wrapper is required)
{
  "ci": {
    "upload": {
      "target": "lhci",
      "serverBaseUrl": "http://your-lhci-server:9001"
    }
  }
}
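Before the first upload, the server needs a project to exist: running npx lhci wizard creates one and prints a build token, which then goes into the upload config (the token value below is a placeholder):

```json
{
  "ci": {
    "upload": {
      "target": "lhci",
      "serverBaseUrl": "http://your-lhci-server:9001",
      "token": "YOUR_BUILD_TOKEN"
    }
  }
}
```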

The LHCI server stores historical reports and shows trends, so you can see when performance changed and correlate it with deploys.


Common Issues

Flaky results: Lighthouse scores vary between runs (network simulation, CPU throttling). Use numberOfRuns: 3 and Lighthouse CI will take the median.
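The median-run behavior can be pictured with a toy example; the three LCP samples below are made up:

```shell
# Three hypothetical LCP values (ms) from numberOfRuns: 3.
# LHCI asserts against the median run: the middle value after sorting.
samples="2450 2610 2380"
median=$(printf '%s\n' $samples | sort -n | sed -n '2p')
echo "median LCP: ${median}ms"  # prints: median LCP: 2450ms
```

A single outlier run (say, a 3200 ms fluke) therefore can't fail your build on its own.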

Lab vs field data: Lighthouse uses simulated conditions (slow 4G, 4x CPU slowdown). Real users may see different numbers. Use Lighthouse CI for catching regressions, PageSpeed Insights for real-world data.

Build preview URLs: For Vercel/Netlify preview deployments, update the collect URL dynamically:

- name: Run Lighthouse CI
  run: npx lhci autorun --collect.url="$VERCEL_PREVIEW_URL"
  env:
    VERCEL_PREVIEW_URL: ${{ steps.deploy.outputs.url }}
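The same pattern works for Netlify; the step id and its deploy-url output below are assumptions about whichever deploy action your workflow uses:

```yaml
- name: Run Lighthouse CI
  run: npx lhci autorun --collect.url="$NETLIFY_PREVIEW_URL"
  env:
    # Assumes a prior step with id "netlify-deploy" exposing a deploy-url output
    NETLIFY_PREVIEW_URL: ${{ steps.netlify-deploy.outputs.deploy-url }}
```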

Key Takeaways

  • Lighthouse CI blocks PRs that regress Core Web Vitals, making performance a merge requirement
  • Configure with lighthouserc.json: use error for hard limits, warn for soft ones
  • Run numberOfRuns: 3 to average out variance in Lighthouse scores
  • Set resource budgets to catch JS bloat before it ships
  • Self-host LHCI Server for trend tracking and historical reports