You push a one-line CSS fix. You wait. You check Slack. You scroll Twitter. You refill your coffee. Fourteen minutes later, your pipeline finally goes green. Sound familiar? That friction compounds into something devastating — and most teams don’t even notice it happening.
Slow CI doesn’t just waste minutes. It rewires how your team works. Engineers start batching changes because each push is expensive. They stop rebasing because merging is painful. They open fewer, larger PRs — which are harder to review, riskier to ship, and slower to merge. A 15-minute pipeline triggered 30 times a day across a team of 10 is seven and a half hours of waiting per day, or nearly 40 hours a week: an entire engineer’s capacity lost to idle time. That’s not a pipeline problem; that’s a velocity crisis.
“The goal is not to ship fast. The goal is to get feedback fast. Everything else follows.”
— Charity Majors
Why Speed Is the Only Metric That Matters Early On
Think of your CI pipeline like a conversation. You ask a question (push code), and CI gives an answer (pass or fail). When that answer takes 15 minutes, you’ve already context-switched to something else. The result then interrupts the new task, so you break flow twice per push. Multiply that across every engineer, every PR, every day. Teams with sub-5-minute pipelines consistently ship more — not because they type faster, but because the feedback loop is tight enough to stay in flow.
My rule: if your CI takes longer than making a coffee, it’s too slow.
The Anatomy of a Fast Pipeline
Every fast pipeline I’ve built follows the same shape. Here’s what each stage does and why it exists:
| Stage | What It Does | Why It Matters | Target Time |
|---|---|---|---|
| Change Detection | Identifies which packages/files actually changed | Skips unnecessary work in monorepos | < 10s |
| Install & Cache | Restores dependencies from cache, installs only if needed | Avoids re-downloading on every run | < 30s |
| Lint & Typecheck | Runs ESLint and TypeScript compiler in parallel | Catches syntax and type errors fast | < 60s |
| Unit Tests | Runs tests only for changed packages | Validates correctness without waste | < 90s |
| Build | Compiles production artifacts | Ensures the build isn’t broken | < 60s |
| Integration Tests | Runs against real services (DB, cache) | Catches cross-boundary issues | < 90s |
| Preview Deploy | Deploys a live preview URL for the PR | Enables visual QA without merging | Async |
The order here is intentional. Put the fastest, cheapest checks first. If linting fails in 10 seconds, there’s no reason to wait for a 90-second test suite to tell you the same PR is broken.
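In GitHub Actions, this fail-fast ordering can be sketched with `needs`, which gates a job on an earlier one (job names and commands here are illustrative):

```yaml
jobs:
  lint:
    # cheapest gate runs first
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pnpm lint
  unit-tests:
    # don't spend 90s of test time on a PR that can't even lint
    needs: lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pnpm test
```

Gating everything serially does add latency on green runs, so many teams run the cheap checks in parallel and reserve `needs` for the expensive integration stages.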
Caching: The Single Biggest Speed Win
Caching is the difference between a 12-minute pipeline and a 3-minute one. Here’s a simple mental model for how it works:
Every CI task takes inputs (your source code, dependencies, config) and produces outputs (compiled files, test results). If the inputs haven’t changed since the last run, the outputs are identical — so you skip the work and replay the cached result. That’s it. That’s the whole concept.
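In GitHub Actions, that inputs-to-outputs mapping is exactly what `actions/cache` expresses: the key is a hash of the inputs, and a hit restores the stored output instead of redoing the work. A minimal sketch, assuming pnpm with its default store path:

```yaml
- uses: actions/cache@v4
  with:
    # the input: if the lockfile hasn't changed, the key is identical...
    key: pnpm-store-${{ runner.os }}-${{ hashFiles('pnpm-lock.yaml') }}
    # ...and the output (the package store) is restored instead of rebuilt
    path: ~/.pnpm-store
    restore-keys: |
      pnpm-store-${{ runner.os }}-
```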
| Caching Strategy | What It Caches | Best For |
|---|---|---|
| Dependency cache | node_modules keyed by lockfile hash | Avoiding npm install on every run |
| Build cache | Compiled output keyed by source hash | Skipping unchanged package builds |
| Remote cache (Turborepo, Nx) | Task outputs shared across all engineers and CI | Massive savings in monorepos |
| Docker layer cache | Image layers keyed by instruction order | Faster container builds |
Remote caching is transformative. If any engineer — or any CI run — has already completed a task for the same input, every subsequent run skips it. On one project I worked on, remote caching reduced the average build step from over 3 minutes to under 15 seconds.
If your cache hit rate is below 70%, you likely have non-deterministic inputs leaking in — timestamps, random values, or environment-specific paths. Run your build tool’s dry-run mode locally to diagnose what’s invalidating the cache.
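Wiring Turborepo’s remote cache into CI is usually just a couple of environment variables on the build step; the secret and variable names below are placeholders for whatever credentials your cache provider issues:

```yaml
- run: pnpm turbo build
  env:
    # credentials for the shared remote cache -- placeholder names
    TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
    TURBO_TEAM: ${{ vars.TURBO_TEAM }}
```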
One YAML to Show the Shape
Here’s a minimal GitHub Actions workflow that demonstrates the core ideas — concurrency groups, path filtering, and parallelism:
```yaml
name: CI
on:
  pull_request:
    branches: [main]

concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # pnpm must be on the PATH before setup-node can cache its store
      - uses: pnpm/action-setup@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: 'pnpm' }
      - run: pnpm install --frozen-lockfile
      - run: pnpm turbo lint typecheck --cache-dir=.turbo
```
The concurrency block cancels stale runs when you push again. Dependency caching avoids redundant installs. Turborepo skips unchanged tasks. Three ideas, a few lines of configuration each.
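The path filtering mentioned earlier can live in the trigger itself, so the workflow never starts for changes that can’t affect the build (the patterns below are examples):

```yaml
on:
  pull_request:
    branches: [main]
    # don't burn CI minutes on docs-only changes -- example patterns
    paths-ignore:
      - 'docs/**'
      - '**/*.md'
```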
The “Green Main” Principle
Main should always be deployable. Not “usually green” — always. This requires two non-negotiable practices:
- Required status checks. No PR merges without passing CI. No exceptions.
- A merge queue. Each PR gets rebased onto the latest main and retested before merging. This prevents the “two green PRs that conflict when merged” problem.
| Practice | What It Prevents | Cost |
|---|---|---|
| Required status checks | Broken code reaching main | Zero — just a settings toggle |
| Merge queue | Integration conflicts between PRs | 2-3 minutes per merge |
| Branch protection | Force pushes, untested merges | Minor process friction |
A broken main costs hours. A merge queue costs minutes. The math is obvious.
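One practical note if you adopt GitHub’s built-in merge queue: queued runs fire a separate `merge_group` event, so the CI workflow must listen for it or queued PRs will stall waiting for checks that never run:

```yaml
on:
  pull_request:
    branches: [main]
  # merge-queue runs trigger this event, not pull_request
  merge_group:
```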
Pipeline Health: What to Track
You can’t improve what you don’t measure. Track these four numbers weekly and make them visible:
| Metric | Target | What It Tells You |
|---|---|---|
| P50 pipeline duration | < 4 min | Typical developer experience |
| P95 pipeline duration | < 8 min | Worst-case experience |
| Flaky test rate | < 1% | Whether engineers trust the pipeline |
| Cache hit rate | > 80% | Whether caching is actually working |
When the P95 creeps above 8 minutes, someone investigates that week — not next sprint. CI speed is a leading indicator of team health, and visibility alone changes behaviour.
Display these metrics somewhere the team sees them daily — a dashboard on a shared screen, a weekly Slack digest, a section in your retro. When pipeline speed becomes visible, engineers naturally start caring about test performance and build times.
The Compound Effect
A fast, reliable pipeline changes team culture. Engineers push small PRs because they know they’ll get feedback in minutes. They rebase often because merging is painless. They write better tests because the feedback loop is tight enough to iterate. Every hour you invest in CI speed pays back tenfold. It’s the highest-leverage infrastructure work a team can do.