Deployments / Week

Last updated: January 7, 2026

Overview

The Deployments / Week report measures deployment frequency to production, expressed as the average number of successful deployments per week. This is a key DORA (DevOps Research and Assessment) metric that indicates your organization's deployment velocity and continuous delivery maturity.

What Does This Metric Measure?

Definition

Deployments / Week calculates how frequently your organization successfully deploys changes to production environments.

Key Characteristics:

  • Production deployments only: Counts successful deployments to production

  • Per week basis: Normalized to weekly frequency for consistent comparison

  • Success-based: Only includes deployments that completed successfully

  • Key DORA metric: One of four metrics measuring software delivery performance

What It Tells You

This metric indicates:

  • Deployment agility — How quickly you can ship changes to production

  • CI/CD maturity — Sophistication of your deployment automation

  • Feedback loop speed — How fast you can validate changes with real users

  • Risk per deployment — Smaller, frequent deployments are typically lower risk

  • Customer value delivery — How often new features/fixes reach users

What It Doesn't Tell You

This metric does NOT measure:

  • Deployment quality — Use Change Failure Rate for quality

  • Deployment impact — All deployments counted equally regardless of size

  • Deployment risk — Use MTTR and Incident Rate for reliability

  • Business value — Use product analytics for customer impact

  • Individual contributions — Deployments are team/organization activities

How It's Calculated

Formula

Deployments / Week = (Total Successful Production Deployments / Days in Period) × 7

Components Breakdown

1. Numerator: Total Successful Production Deployments

  • Counts deployments that meet ALL criteria:

    • Status = Success (completed successfully)

    • Environment contains production indicators:

      • Environment is NULL (defaults to production), OR

      • Environment contains "prod" (case-insensitive), OR

      • Environment contains "prd" (case-insensitive)

Excluded:

  • Failed deployments

  • Pending/in-progress deployments

  • Non-production deployments (staging, dev, test)

2. Denominator: Days in Period

  • Number of calendar days in your selected time range

  • Example: 7 days, 14 days, 30 days, etc.

3. Weekly Normalization (× 7)

  • Converts daily average to weekly metric for consistent comparison
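
Put together, a minimal sketch of the calculation in Python (the record shape and field names here are illustrative assumptions, not Span's actual schema):

    from typing import Optional

    def is_production(environment: Optional[str]) -> bool:
        # Documented rules: NULL defaults to production; otherwise the
        # value must contain "prod" or "prd", case-insensitively.
        if environment is None:
            return True
        env = environment.lower()
        return "prod" in env or "prd" in env

    def deployments_per_week(deployments: list[dict], days_in_period: int) -> float:
        # Numerator: successful production deployments only.
        successful = sum(
            1 for d in deployments
            if d["status"] == "Success" and is_production(d.get("environment"))
        )
        # Normalize the daily average to a weekly figure.
        return (successful / days_in_period) * 7

    # Scenario 1 below: 28 successful production deployments over 14 days.
    sample = [{"status": "Success", "environment": "production"}] * 28
    print(deployments_per_week(sample, days_in_period=14))  # 14.0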

Example Calculation

Scenario 1: High-frequency deployment

  • 28 successful production deployments over 14 days

  • Calculation: (28 / 14) × 7 = 14.0 deployments/week

Scenario 2: Weekly deployment cadence

  • 4 successful production deployments over 28 days

  • Calculation: (4 / 28) × 7 = 1.0 deployment/week

Scenario 3: Monthly deployment

  • 2 successful production deployments over 60 days

  • Calculation: (2 / 60) × 7 = 0.23 deployments/week

Accessing the Report

Navigation Path

Primary Route:

  1. Log in to Span

  2. Click "Productivity" in main navigation

  3. Select "DORA" 

  4. Find "Deployments / Week" metric card

Alternative Access:

  • Dashboard Widgets: Add to organization/team dashboards

  • DORA Dashboard: View alongside other DORA metrics

  • Metric Explorer: Search for "deployments per week" or "deployment frequency"

Available Filters

Filter       Purpose
-----------  -----------------------------------------------
Service      Filter to specific services/applications
Repository   Filter to specific Git repositories
Team/Group   Filter to organizational units (if configured)
Environment  Already filtered to production (automatic)

Time Filters

  • Last 7 days

  • Last 2 weeks

  • Last 4 weeks (common default)

  • Last 3 months

  • Last 6 months

  • Last 12 months

  • Custom date range

Special Considerations

Automatic Filters (cannot be changed):

  • Status = "Success" only

  • Production environments only

Not Available:

  • Person/Individual filters (deployments are organizational activities)

  • Deployment type/size filters

Interpreting the Data

DORA Performance Benchmarks

Based on DORA research, deployment frequency classifies teams into performance tiers:

Performance Tier  Deployments/Week                 Typical Characteristics
----------------  -------------------------------  ---------------------------------------------------------
Elite             10+ per week (multiple per day)  On-demand deployments, advanced automation, feature flags
High              1-7 per week                     Regular deployment windows, solid CI/CD pipeline
Medium            0.25-1 per week (1-4 per month)  Planned releases, some manual processes
Low               < 0.25 per week (< 1 per month)  Infrequent releases, manual/risky deployments
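
Expressed as code, these tier boundaries reduce to a simple threshold check. A sketch (the function name is ours; note the table leaves 7-10 per week unclassified, which this version folds into High):

    def dora_tier(deploys_per_week: float) -> str:
        # Thresholds taken from the benchmark table above.
        if deploys_per_week >= 10:
            return "Elite"
        if deploys_per_week >= 1:
            return "High"
        if deploys_per_week >= 0.25:
            return "Medium"
        return "Low"

    print(dora_tier(14.0))  # Elite
    print(dora_tier(0.23))  # Low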

Context-Based Interpretation

High Frequency (7-20+ per week)

  • Positive: Mature CI/CD, fast feedback loops, continuous delivery

  • Caution: Must pair with low Change Failure Rate (< 15%)

  • Consider: Are deployments well-tested? Is quality maintained?

Moderate Frequency (1-7 per week)

  • Healthy: Regular release cadence, planned deployments

  • Typical: Common for most well-managed engineering teams

  • Consider: Opportunities to increase frequency with automation

Low Frequency (< 1 per week)

  • Concerning: Infrequent deployments indicate process barriers

  • Investigate: Manual processes, lack of automation, risk aversion

  • Consider: What's blocking more frequent deployments?

Very Low Frequency (< 1 per month)

  • Red Flag: Significant delivery bottlenecks

  • Common Causes: Legacy systems, heavy manual processes, batch releases

  • Action Required: Major process improvements needed

Industry-Specific Context

SaaS/Web Applications

  • Target: 5-15+ deployments/week

  • Expectation: Continuous deployment capability

Mobile Applications

  • Target: 1-4 deployments/week

  • Constraint: App store review processes

Infrastructure/Platform Teams

  • Target: 1-5 deployments/week

  • Nature: Changes are typically larger, more cautious

Regulated Industries (Finance, Healthcare)

  • Target: 1-4 deployments/week (or less)

  • Constraint: Compliance requirements, change management

Legacy Systems

  • Target: 1-2 deployments/month

  • Challenge: Technical debt, manual processes

Trend Analysis

Upward Trend (Increasing Frequency)

  • Improving CI/CD maturity

  • Better automation

  • Smaller batch sizes

  • Increased confidence

Downward Trend (Decreasing Frequency)

  • Process degradation

  • Increased risk aversion (post-incident)

  • Team changes or transitions

  • Technical debt accumulation

Flat/Stable Trend

  • Consistent, predictable deployment cadence

  • Mature, stable process

  • May indicate room for optimization if low

Spikes and Valleys

  • Release cycles or sprint patterns

  • Incident-driven deployment patterns

  • Irregular deployment rhythm


Use Cases

1. Continuous Delivery Maturity Assessment

Scenario: Evaluate your organization's CI/CD maturity

How to Use:

  • Establish baseline deployment frequency

  • Compare against DORA benchmarks

  • Set improvement targets (e.g., move from Medium to High performer)

  • Track progress quarterly

Example:

  • Current: 0.5 deployments/week (Medium performer)

  • Goal: 2 deployments/week (High performer)

  • Initiatives: Automate testing, implement feature flags

2. Team Performance Comparison

Scenario: Compare deployment practices across teams

Analysis:

  • Team A: 10 deployments/week, 5% failure rate → Elite performer

  • Team B: 2 deployments/week, 15% failure rate → High performer

  • Team C: 0.3 deployments/week, 25% failure rate → Needs improvement

Action: Share Team A's practices with other teams

3. Process Improvement Tracking

Scenario: Measure impact of CI/CD investments

Timeline:

  • Before: 1 deployment/week, manual testing

  • Initiative: Implement automated testing, deployment pipelines

  • After: 5 deployments/week

  • Result: 5x improvement in deployment frequency

4. Release Planning & Capacity

Scenario: Understand feature delivery capacity

Analysis:

  • Current: 3 deployments/week ≈ 12 deployments/month (~36 per quarter)

  • Roadmap: 20 features planned for the quarter

  • Insight: ~36 deployment slots for 20 features → Feasible

  • Adjustment: No bundling needed; each feature can ship in its own deployment

5. Post-Incident Recovery Tracking

Scenario: Monitor deployment confidence after major incident

Pattern:

  • Pre-incident: 5 deployments/week

  • Incident week: 1 deployment/week (caution)

  • Week 1 post-incident: 2 deployments/week (rebuilding confidence)

  • Week 4 post-incident: 4 deployments/week (nearly recovered)

Action: Track recovery to normal deployment cadence

6. DORA Metrics Holistic View

Scenario: Evaluate overall software delivery performance

Combined View:

Metric               Value   Assessment
-------------------  ------  ----------
Deployments/Week     8       High
Change Failure Rate  18%     Medium
MTTR                 45 min  High
Incidents/Week       1.2     Medium

Insight: High deployment frequency but moderate quality → Focus on testing improvements

Important Considerations & Limitations

What This Metric Does Well

  • Objective measurement — Based on actual deployment events

  • Industry-recognized — DORA research-backed metric

  • Actionable — Clear link to CI/CD process improvements

  • Benchmarkable — Compare against industry standards

  • Trend visibility — Easy to track progress over time

Critical Limitations

Does Not Measure Quality

  • High frequency without low failure rate = risky practice

  • Must pair with: Change Failure Rate, MTTR, Incident Rate

  • Action: Always view in DORA dashboard context

Environment Detection Dependency

  • Requires deployments tagged with production environment

  • Misconfigured environments = inaccurate counts

  • Action: Validate environment field is properly populated

All Deployments Counted Equally

  • Small config change = major feature release

  • No weighting by deployment size or impact

  • Action: Use additional context (commit volume, change scope)

Does Not Show Individual Contributions

  • Deployments are team/organizational activities

  • Cannot attribute to specific developers

  • Action: Use PR/commit metrics for individual contributions

Incomplete Period Handling

  • The current, in-progress week can show a misleadingly skewed frequency

  • Don't compare partial weeks to complete weeks

  • Action: Use completed periods for trend analysis
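
One way to avoid partial-week skew is to end the analysis window at the last complete week boundary before computing the metric. A minimal sketch, assuming weeks run Monday through Sunday:

    from datetime import date, timedelta

    def last_complete_week_end(today: date) -> date:
        # weekday(): Monday == 0 ... Sunday == 6, so this steps back to
        # the Sunday that closed the most recent complete week.
        return today - timedelta(days=today.weekday() + 1)

    print(last_complete_week_end(date(2026, 1, 7)))  # 2026-01-04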

Integration Dependencies

  • Requires active, properly-configured deployment integration

  • Missing/failed webhooks = inaccurate counts

  • Action: Monitor integration health regularly

Data Quality Requirements

Required Data:

  • Deployment triggered timestamp

  • Deployment status (Success/Failure)

  • Environment identifier (production indicators)

  • Continuous event flow

Common Issues:

  • Deployment events not being captured

  • Environment field missing or incorrect

  • Status not properly updated

  • Deployment system not integrated

Data Sources & Integration Requirements

Required Integrations

Custom Webhooks/API

  • Span's deployment webhook endpoint

  • REST API for programmatic deployment reporting

DORA API: refer to the DORA API documentation for instructions on setting this up.
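
For illustration, reporting a deployment event over HTTP might look like the sketch below. The URL, authentication, and payload fields are placeholders, not Span's actual API; consult the DORA API documentation for the real schema.

    import json
    import urllib.request

    # Placeholder payload -- field names are assumptions for illustration.
    event = {
        "status": "Success",
        "environment": "production",
        "service": "checkout-service",
        "triggered_at": "2026-01-07T12:00:00Z",
    }

    req = urllib.request.Request(
        "https://example.invalid/api/deployments",  # placeholder endpoint
        data=json.dumps(event).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer <API_TOKEN>",  # placeholder credential
        },
        method="POST",
    )
    urllib.request.urlopen(req)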

Configuration Checklist

Before using this metric, ensure:

  • Deployment integration is active

  • Deployment events are flowing (check logs)

  • Environment field contains "prod" or "prd" for production

  • Status field correctly marks success/failure

  • Deployment timestamps are accurate

  • Test deployment appears in Span UI

Validation Steps

1. Verify Data Flow

  • Navigate to: Assets > Deployments

  • Confirm recent production deployments appear

  • Check timestamps and status values

2. Test Calculation

  • Manually count deployments in a known period

  • Compare to Span's reported metric

  • Investigate discrepancies
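
If your deployment history can be exported (for example, as CSV), a spot-check script along these lines can reproduce the formula by hand. The file name and column names are assumptions about the export, not a fixed format:

    import csv

    # Assumed columns: status, environment.
    with open("deployments_export.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    manual_count = sum(
        1 for r in rows
        if r["status"] == "Success"
        and (r["environment"] == ""  # blank treated as production, per the NULL rule
             or "prod" in r["environment"].lower()
             or "prd" in r["environment"].lower())
    )

    days_in_period = 28  # the known period you are validating
    print(f"Manual check: {(manual_count / days_in_period) * 7:.2f} deployments/week")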

3. Monitor Integration Health

  • Check integration status in Settings

  • Review webhook/API logs for errors

  • Set up alerts for integration failures

Best Practices

For Engineering Teams

Increase Deployment Frequency:

  1. Automate testing — Reduce manual QA bottlenecks

  2. Implement feature flags — Deploy without exposing features (see the sketch after this list)

  3. Reduce PR size — Smaller changes = faster reviews = more deployments

  4. Parallelize pipelines — Speed up CI/CD execution

  5. Use trunk-based development — Eliminate long-lived branches
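
Feature flags (item 2 above) separate deploying code from releasing it, so changes can reach production dark. A minimal sketch of the pattern, with a hypothetical in-memory flag store standing in for a real flag service:

    # Hypothetical flag store; real systems use a flag service or config table.
    FLAGS = {"new-checkout": False}  # code is deployed, feature stays hidden

    def legacy_checkout_flow(cart: list) -> str:
        return f"legacy checkout: {len(cart)} items"

    def new_checkout_flow(cart: list) -> str:
        return f"new checkout: {len(cart)} items"

    def checkout(cart: list) -> str:
        # Flipping the flag releases the feature without a new deployment.
        if FLAGS["new-checkout"]:
            return new_checkout_flow(cart)
        return legacy_checkout_flow(cart)

    print(checkout(["book"]))  # takes the legacy path until the flag is enabled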

Maintain Quality:

  1. Monitor Change Failure Rate — Don't sacrifice quality for speed

  2. Implement automated rollbacks — Quick recovery from failures

  3. Use canary deployments — Gradual rollout reduces risk

  4. Maintain test coverage — Catch issues before production

For Engineering Leaders

Set Appropriate Goals:

  • Consider industry context and team maturity

  • Don't aim for elite-level without infrastructure

  • Celebrate incremental improvements (Medium → High)

Use in Context:

  • Never evaluate deployment frequency alone

  • Always review with other DORA metrics

  • Understand team-specific constraints

Track and Communicate:

  • Share progress regularly with stakeholders

  • Celebrate improvements publicly

  • Investigate decreases promptly

Invest Strategically:

  • Prioritize automation ROI

  • Balance speed with stability

  • Address root causes, not symptoms

For Platform/DevOps Teams

Enable High Frequency:

  • Optimize CI/CD pipeline performance

  • Provide self-service deployment tools

  • Create deployment templates/blueprints

  • Monitor and improve deployment success rates

Ensure Observability:

  • Track deployment events accurately

  • Maintain deployment history

  • Provide deployment dashboards

  • Alert on deployment anomalies

Related Metrics

View these alongside Deployments / Week for complete understanding:

Metric                Relationship         Combined Interpretation
--------------------  -------------------  --------------------------------------------------------
Change Failure Rate   Quality check        High frequency + low failure rate = Elite performance
MTTR                  Recovery speed       Fast deployments + fast recovery = Resilient systems
Incidents / Week      Stability indicator  High frequency + low incidents = Stable, mature process
Lead Time for Change  End-to-end speed     Short lead time + high frequency = Efficient delivery
Total Deployments     Absolute count       Shows actual volume vs. normalized frequency
PR Cycle Time         Upstream bottleneck  Slow PRs limit deployment frequency

Common Questions

Q: Our deployment frequency is 2/week. Is that good?

A: It depends on context:

  • For SaaS product: Medium performer, room for improvement

  • For regulated industry: Potentially high performer

  • For infrastructure team: Typical and appropriate

More important: Are you improving over time? Is quality maintained (< 15% failure rate)?

Q: Why did our deployments drop suddenly?

A: Common causes:

  1. Major incident → Team being cautious

  2. Holiday period → Reduced activity

  3. Release freeze → Planned slowdown

  4. Integration issue → Deployments not being captured

  5. Team changes → New process or people

Action: Investigate timing, check integration health, review team calendar

Q: Should we aim for multiple deployments per day?

A: Only if:

  • You have robust automated testing

  • You can recover quickly (< 1 hour MTTR)

  • You have feature flags for gradual rollout

  • Your failure rate is < 10%

  • Your team and stakeholders are comfortable

Otherwise: Focus on improving to 1-5 per week first with high quality

Q: How do I get executive buy-in for improving deployment frequency?

A:

  1. Show current state: "We deploy monthly; high performers deploy daily"

  2. Connect to business value: "Faster deployments = faster features to customers"

  3. Highlight risk reduction: "Smaller changes = easier to test and roll back"

  4. Show ROI: "Investment in automation pays off in delivery speed"

  5. Set realistic goals: "Move from 1/month to 1/week in 6 months"

Troubleshooting

If Metric Shows Zero or Very Low

Check:

  • ✓ Integration is active and healthy

  • ✓ Deployments are being captured (check Assets > Deployments)

  • ✓ Environment field contains "prod" or "prd"

  • ✓ Status field is set to "Success"

  • ✓ Time range includes actual deployments

If Metric Seems Too High

Verify:

  • Are non-production deployments being counted?

  • Are failed deployments being marked as success?

  • Are the same deployments being counted multiple times?

  • Is the environment filter working correctly?

If Metric Suddenly Changed

Investigate:

  • Integration configuration changes

  • Environment naming changes

  • Process changes (new CI/CD system)

  • Team composition changes

  • Actual deployment practice changes

Quick Reference Checklist

When analyzing Deployments / Week:

  • What's our current deployment frequency?

  • What DORA performance tier are we in?

  • What's the trend (improving, stable, declining)?

  • How do we compare to peer organizations?

  • What's our Change Failure Rate? (must be < 15%)

  • What's our MTTR? (should be < 1 day)

  • Are deployments captured accurately?

  • What's blocking more frequent deployments?

  • Do we have appropriate goals set?

  • Are we celebrating improvements?

Summary

The Deployments / Week report is a key DORA metric measuring your organization's deployment velocity and continuous delivery maturity. Use it to track CI/CD improvements, benchmark against industry standards, and ensure your team can deliver value to customers quickly and safely. Always view deployment frequency alongside Change Failure Rate and MTTR to ensure speed doesn't compromise quality or stability.

Healthy Targets:

  • High Performers: 1-7 deployments/week

  • Elite Performers: 10+ deployments/week

  • Critical: Maintain Change Failure Rate < 15%