Lead Time to Production in Span
Last updated: April 23, 2026
Overview
Lead Time to Production (also called Lead Time for Changes) is one of the four DORA metrics and is the primary measure of engineering delivery speed. It answers the question: How quickly can your team get code from a developer's first commit into the hands of customers in production?
This metric gives R&D leaders a complete, end-to-end view of their software delivery pipeline — from the moment a developer starts writing code to the moment that code is live.
How It's Calculated
Lead Time to Production is measured as the elapsed time from a pull request's first commit to when that code is successfully deployed to production. It is measured in seconds and typically reported in hours or days.
The measurement is broken down into five sequential stages that together compose the full lifecycle of a pull request:
| Stage | From → To | What it captures |
| --- | --- | --- |
| Coding | First commit → PR opened | Time spent writing code before opening a PR |
| Awaiting First Review | PR opened → First review received | Time waiting for a reviewer to engage |
| Reworking | First review → Last action | Time addressing reviewer feedback and iterating |
| Idling | Last action → Merge | Time between final approval and the actual merge |
| Deploying | Merge → Live in production | Time from merge to successful production deployment |
Lead Time = PR Cycle Time + Deploying Time. The key distinction from PR Cycle Time (which stops at merge) is that Lead Time captures the full customer-value delivery pipeline.
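The five-stage arithmetic above can be sketched in code. This is a minimal illustration with made-up timestamps; the event names are hypothetical labels, not Span's schema:

```python
from datetime import datetime

# Hypothetical timestamps for one pull request's lifecycle
events = {
    "first_commit": datetime(2026, 4, 1, 9, 0),
    "pr_opened":    datetime(2026, 4, 1, 15, 0),
    "first_review": datetime(2026, 4, 2, 10, 0),
    "last_action":  datetime(2026, 4, 2, 16, 0),
    "merged":       datetime(2026, 4, 3, 9, 0),
    "deployed":     datetime(2026, 4, 3, 11, 0),  # successful production deploy
}

# The five sequential stages, each defined by a (from, to) event pair
stages = [
    ("Coding",                "first_commit", "pr_opened"),
    ("Awaiting First Review", "pr_opened",    "first_review"),
    ("Reworking",             "first_review", "last_action"),
    ("Idling",                "last_action",  "merged"),
    ("Deploying",             "merged",       "deployed"),
]

durations = {name: (events[end] - events[start]).total_seconds()
             for name, start, end in stages}

# Lead Time spans first commit -> production and equals the sum of all five stages
lead_time = (events["deployed"] - events["first_commit"]).total_seconds()
cycle_time = (events["merged"] - events["first_commit"]).total_seconds()
assert lead_time == sum(durations.values())
assert lead_time == cycle_time + durations["Deploying"]
```

Measuring in seconds and converting to hours or days for reporting, as described above, keeps the stage sums exact.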
Aggregation
Span reports the metric using:
Trimmed mean — A 30-day cap is applied to individual PR cycle times to prevent extreme outliers from distorting the average.
Percentiles (P50, P75, P90) — Percentile aggregations give better visibility into your distribution. P50 shows the median experience, P75 shows what most PRs stay under, and P90 reveals worst-case outliers.
Note: Only successful deployments (status: "success") to production environments are included in the calculation. Failed, pending, or non-production deployments are excluded.
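These aggregations can be sketched as follows, assuming a simple nearest-rank percentile; Span's exact percentile method is not specified here and may differ:

```python
def capped_mean(lead_times_hours, cap_hours=30 * 24):
    """Trimmed mean: cap each value at 30 days before averaging."""
    capped = [min(t, cap_hours) for t in lead_times_hours]
    return sum(capped) / len(capped)

def percentile(values, p):
    """Nearest-rank percentile for 0 < p <= 100."""
    ordered = sorted(values)
    rank = -(-p * len(ordered) // 100)  # ceil(p/100 * n)
    return ordered[max(rank, 1) - 1]

# Illustrative lead times in hours, including one extreme outlier
lead_times = [4, 6, 8, 10, 12, 18, 26, 40, 90, 2000]
avg = capped_mean(lead_times)            # outlier capped at 720 h (30 days)
p50, p75, p90 = (percentile(lead_times, p) for p in (50, 75, 90))
```

Note how the 2000-hour outlier is clamped to 720 hours before averaging, while the percentiles expose the shape of the distribution directly.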
Why You May Only See "First Commit to Merge"
If you navigate to the DORA dashboard and see First Commit to Merge instead of a full Lead Time to Production value, it means Span has not yet received any production deployment events.
Here's why:
| What Span can measure | What's required |
| --- | --- |
| Coding, Awaiting Review, Reworking, Idling (up to merge) | VCS integration only (GitHub, GitLab, or Azure DevOps) |
| Deploying stage + full Lead Time to Production | VCS integration + deployment tracking via the Span DORA API |
First Commit to Merge is the PR Cycle Time — the first four stages of the lifecycle. It is a valuable metric in its own right, but it does not capture how long code sits between merge and production, which is often a significant and overlooked bottleneck.
To unlock the full Lead Time to Production metric, you need to instrument your CI/CD pipeline to send deployment events to Span. See the Configuration section below for how to do this.
Configuration
Lead Time to Production requires two integrations to be active:
A VCS integration (GitHub, GitLab, or Azure DevOps) — to capture commits, PRs, and merges.
Deployment tracking via the Span DORA API — to capture production deployment events.
Setting Up Deployment Tracking
To track deployments, add a step to your CI/CD pipeline that sends deployment events to Span via the DORA API. Authentication uses a personal access token or a service account.
Required Fields
| Field | Description |
| --- | --- |
| | A descriptive name for the deployment |
| | ISO-8601 UTC timestamp of when the deployment started |
Key Optional Fields
| Field | Description |
| --- | --- |
| `environment` | The deployment environment (e.g., `"production"`) |
| | ISO-8601 timestamp when the deployment finished |
| | URL of the repository being deployed |
| `git.refName` | Git ref or commit SHA being deployed |
| `pullRequests` | Array of PR numbers (useful for monorepo or partial deployments) |
| `services` | Array of Span service slugs for ownership attribution |
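As an illustration of the payload shape, here is a hedged sketch. The `services`, `pullRequests`, `git.refName`, and `environment` fields come from this article; the remaining field names and the delivery mechanism are hypothetical placeholders, so consult the DORA API reference for the real schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical deployment event for Span. Field names NOT documented
# above (deployment name, start timestamp) are placeholders, not the
# real API schema.
payload = {
    "name": "api-server release",                         # placeholder field name
    "startedAt": datetime.now(timezone.utc).isoformat(),  # placeholder field name
    "environment": "production",        # must match the production naming rules
    "git": {"refName": "main"},         # enables automatic PR linking
    "pullRequests": [1234, 1235],       # explicit PR linking
    "services": ["api-server"],         # recommended: ownership via service owner
}

body = json.dumps(payload)
# In a CI/CD step you would POST `body` to the Span DORA API,
# authenticating with a personal access token or service account;
# the endpoint URL and header names belong in your pipeline's config.
```

Only the successful production deployment ultimately counts toward Lead Time, so send the event from the step that confirms the deploy succeeded.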
Environment Detection — What Counts as "Production"
Only deployments to production environments are counted in Lead Time calculations. Span uses the following rule:
✅ Counted: environments where the name is omitted, or contains "prod" or "prd" (case-insensitive) — e.g., "production", "prod", "PRD-us-east-1"

❌ Excluded: "staging", "dev", "test", "qa", "pre-prod", etc.
Common misconfiguration: If you are sending deployment events but still not seeing Lead Time to Production, check that your environment field matches the production naming rules above. A value like "pre-prod" or "staging" will be excluded from all metric calculations.
Ownership Attribution
For Lead Time to be attributed to the correct teams and services, deployments must be linked to pull requests. There are three ways to configure this:

1. Services array (recommended) — Pass the `services` field; Span resolves team ownership via the service's configured owner.
2. Explicit PR linking — Pass the `pullRequests` array; Span infers ownership from PR authors and team membership.
3. Automatic PR linking — Provide `git.refName`; Span automatically links deployments to PRs (requires enablement by Span support).
Processing Time
Deployments are stored immediately after a successful API call, but enrichment (PR linking, lead time calculation) happens asynchronously. Expect full visibility within approximately 3 hours.
Where to Find It in Span
DORA Dashboard
The primary location for Lead Time to Production is the DORA dashboard:
1. In the left sidebar, navigate to Insights
2. Select Productivity
3. Click DORA
Or navigate directly to:
https://span.app/org/{your-org-slug}/reports/dora-v2
On this page you can:
Filter by date range, repositories, and services
Switch between Median, P75, and P90 aggregation views
Break down metrics by service or team
Compare metrics across time periods to track trends
Verify your deployment integration is active
Other Access Points
Custom Dashboards — Pin the metric to any custom dashboard in Span.
Quick Search — Use Cmd+K (Mac) or Ctrl+K (Windows/Linux) and search for "Lead Time for Changes".

Team & Individual Pages — View the metric filtered to a specific team or contributor.
Best Practices
1. Diagnose by Stage, Not Just Total Time
A long Lead Time can originate from multiple places. Break down the metric by its five stages to identify the actual bottleneck before trying to fix it:
Long Coding time → PRs are too large, or work is insufficiently broken down.
Long Awaiting First Review → Insufficient review capacity, unclear review ownership, or notification gaps. This is typically the #1 bottleneck and can represent 40–60% of total cycle time.
Long Reworking → High back-and-forth due to code quality issues, unclear standards, or poor PR descriptions.
Long Idling → CI/CD pipeline delays, required approvals, or deployment scheduling constraints.
Long Deploying → Manual deployment steps, infrastructure constraints, or release gating processes.
2. Use Percentiles, Not Just the Average
The trimmed mean can mask problems. Always review P75 and P90 alongside the average. A healthy P50 alongside a high P90 signals that most work flows quickly but certain types of PRs (large refactors, cross-team work, etc.) are creating outliers worth investigating.
3. Benchmark Targets by Stage
Span's research-backed benchmarks for the Awaiting First Review stage:

| Performance Level | P75 target |
| --- | --- |
| World-class | < 24 hours |
| Healthy | < 48 hours |
| Needs attention | > 48 hours |
For the Idling stage, anything greater than 24 hours typically signals CI/CD bottlenecks, required approval processes, or deployment scheduling constraints and warrants investigation.
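These stage benchmarks can be expressed as a small illustrative helper (not part of Span):

```python
def review_benchmark(p75_hours: float) -> str:
    """Classify the Awaiting First Review P75 against Span's benchmarks."""
    if p75_hours < 24:
        return "World-class"
    if p75_hours < 48:
        return "Healthy"
    return "Needs attention"

def idling_needs_investigation(idling_hours: float) -> bool:
    """Idling above 24 hours typically signals a CI/CD or approval bottleneck."""
    return idling_hours > 24
```

A helper like this is handy when scripting periodic reports over exported metric data, so the thresholds live in one place.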
4. Always Correlate with Quality Metrics
Lead Time should never be optimized in isolation. Short lead time is only valuable if you maintain acceptable quality. Track Lead Time alongside:
Change Failure Rate — Are faster deployments introducing more incidents?
Mean Time to Recover (MTTR) — Can your team respond quickly when things go wrong?
Bug rates and revert rates — Are quality signals trending alongside speed improvements?
5. Provide Context When Comparing Teams
Shorter reworking times are typical for bug fixes and documentation, while architectural changes and major refactors legitimately take longer. Avoid comparing teams that work on fundamentally different types of work without accounting for this context.
6. Track Trends Over Time
Absolute values matter less than direction. A team improving Lead Time from 2 weeks to 1 week is making meaningful progress even if they're not yet at elite levels. Use Span's trend view on the DORA dashboard to track directional improvement quarter-over-quarter.
DORA Performance Benchmarks
| DORA Classification | Lead Time |
| --- | --- |
| Elite | Less than one day |
| High | Between one day and one week |
| Medium | Between one week and one month |
| Low | More than one month |
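For reference, the classification above can be expressed as a small helper; the handling of values exactly at one day, one week, or one month is an assumption, since the published bands do not specify boundary behavior:

```python
def dora_lead_time_class(lead_time_days: float) -> str:
    """Map a lead time (in days) to its DORA performance classification.
    Boundary values are assigned to the slower band by assumption."""
    if lead_time_days < 1:
        return "Elite"
    if lead_time_days <= 7:
        return "High"
    if lead_time_days <= 30:
        return "Medium"
    return "Low"
```

For example, a team improving from 14 days to 5 days moves from Medium to High, the kind of directional progress the trend guidance above encourages.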
Related Metrics
| Metric | Relationship to Lead Time |
| --- | --- |
| PR Cycle Time / First Commit to Merge | Lead Time = PR Cycle Time + Deploying Time. Cycle time stops at merge; Lead Time continues to production. This is what you see before deployment tracking is configured. |
| Deployment Frequency | Short lead times enable higher deployment frequency. Teams that ship smaller batches faster also tend to deploy more often. |
| Change Failure Rate | Both metrics must be healthy together. Fast delivery at the cost of reliability is not a win. |
| Mean Time to Recover | Complements Lead Time. Quick lead times enable fast fixes; fast MTTR sustains confidence in frequent deployments. |