Issue Throughput Report

Last updated: January 27, 2026

Overview

The Issue Throughput metric (displayed as "Issues completed / week") measures how many issues your team completes per week per active contributor. This metric provides a clear, normalized view of team velocity that accounts for time off and gives you an accurate productivity baseline for planning and forecasting.

At a glance:

  • What it measures: Number of issues completed per week, normalized by active contributor days

  • Why it matters: Provides fair velocity comparison and enables accurate capacity planning

  • Metric type: Velocity indicator (higher generally indicates faster throughput)

  • Format: Decimal (e.g., 4.3 issues/week)


How It's Calculated

Issues Completed / Week = (Done Issues / Active Days) × 7

Components:

  • Done Issues: Count of all issues with status = "Done" during the period

  • Active Days: Total number of days worked by all team members, excluding:

    • Out-of-office (OOO) days

    • Weekends

    • Days when team members had no activity

    • Inactive or excluded contributors

Why Normalization Matters:

Raw issue counts can be misleading. Consider this scenario:

  • Week 1: Team completes 20 issues with full 5-person team (5 workdays = 25 person-days)

  • Week 2: Team completes 15 issues but 2 people were on vacation (3 people × 5 days = 15 person-days)

Without normalization, Week 2 looks worse (15 vs. 20 issues). With normalization:

  • Week 1: 20 issues ÷ 25 days × 7 = 5.6 issues/week

  • Week 2: 15 issues ÷ 15 days × 7 = 7.0 issues/week

Week 2 was actually more productive per person! This fair comparison makes throughput a reliable planning tool.
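
The worked example above can be sketched in a few lines of Python (an illustrative calculation, not Span's internal code):

```python
def issues_per_week(done_issues: int, active_days: int) -> float:
    """Normalize completed issues by active person-days, scaled to a 7-day week."""
    if active_days == 0:
        return 0.0  # avoid division by zero when nobody was active
    return done_issues / active_days * 7

# Week 1: 20 issues, full 5-person team x 5 workdays = 25 person-days
week1 = issues_per_week(20, 25)  # ≈ 5.6 issues/week
# Week 2: 15 issues, 3 people x 5 days = 15 person-days
week2 = issues_per_week(15, 15)  # 7.0 issues/week
```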


Finding the Report

Navigation:

  1. Go to Productivity in the main sidebar

  2. Select Velocity

  3. Choose Issue Tracking or Issue Lifecycle

Requirements:

  • Your organization must have an integrated project management tool (Jira, Linear, GitHub Issues, Azure DevOps, etc.)

  • You need appropriate read permissions for issue tracking reports


Available Filters & Customization

Filter Dimensions

People:

  • Individual team member

  • IC Level

  • Job Title & Job Family

  • Location

  • Tenure

  • Active Status

Teams:

  • Team or group hierarchy

  • Organization paths

Work Type:

  • Issue Type (Story, Task, Bug, Sub-task)

Breakdown Options

Analyze throughput by:

  • Individual people: See per-person velocity

  • Teams/groups: Compare team performance

  • Issue type: Understand if certain work types are faster/slower

  • Time series: Track velocity trends over weeks, months, or quarters

Time Range Controls

  • Custom date ranges: Select any start and end date

  • Period comparison: Compare current period vs. previous period

  • Multiple granularities: View by day, week, month, or quarter

Display Options

Toggle between two views:

  • Global Statuses: Standard view showing To Do → In Progress → Done

  • Lifecycle Stages: Custom workflow stages (if configured in your organization)

Choose from multiple visualization formats:

  • Line charts (trends over time)

  • Scatter plots (distribution analysis)

  • Data tables (detailed breakdowns)


Key Use Cases

1. Capacity Planning & Forecasting

Use historical throughput to predict delivery timelines:

  • Example: Your team averages 5.2 issues/week. You have 52 issues in the backlog.

  • Forecast: 52 ÷ 5.2 = ~10 weeks to complete

Best for:

  • Estimating project completion dates

  • Setting realistic sprint or iteration goals

  • Answering "when will this be done?" questions with data
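
The forecast above is simply backlog size divided by average weekly throughput; a minimal sketch (illustrative helper, not a Span API):

```python
def weeks_to_complete(backlog_size: int, avg_weekly_throughput: float) -> float:
    """Estimate weeks needed to drain a backlog at the team's average throughput."""
    if avg_weekly_throughput <= 0:
        raise ValueError("average throughput must be positive")
    return backlog_size / avg_weekly_throughput

# 52 backlog issues at 5.2 issues/week -> about 10 weeks
forecast = weeks_to_complete(52, 5.2)
```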

2. Resource Allocation & Staffing

Understand team capacity to inform staffing decisions:

  • Compare throughput across teams to identify under-resourced groups

  • Track throughput changes after adding/removing team members

  • Identify when hiring is needed based on declining velocity under increased load

3. Organizational Performance Benchmarking

The metric includes percentile rankings to compare against peer organizations:

  • See where your team ranks (e.g., 75th percentile = faster than 75% of peers)

  • Identify high-performing teams within your organization

  • Set targets based on realistic benchmarks

4. Process Improvement Tracking

Measure the impact of workflow changes:

  • Establish baseline throughput before changes

  • Track velocity after implementing new tools, processes, or methodologies

  • Quantify ROI of improvement initiatives with concrete velocity gains

5. Sprint Planning & Goal Setting

Set achievable goals based on actual velocity:

  • Use average throughput to determine realistic sprint commitments

  • Avoid over-committing by understanding historical capacity

  • Track whether team consistently meets planned velocity

6. Bottleneck Identification

Combine throughput with cycle time metrics to diagnose issues:

  • High cycle time + low throughput = Process bottlenecks

  • Normal cycle time + low throughput = Capacity constraints

  • Declining throughput over time = Team health or process degradation
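
These diagnostic rules can be expressed as a small decision function (a sketch of the logic above; the inputs and labels are illustrative, not a Span feature):

```python
def diagnose(cycle_time_high: bool, throughput_low: bool, throughput_declining: bool) -> str:
    """Map the three patterns above to a likely cause."""
    if throughput_declining:
        return "team health or process degradation"
    if throughput_low and cycle_time_high:
        return "process bottleneck"
    if throughput_low:
        return "capacity constraint"  # normal cycle time but low throughput
    return "no obvious bottleneck"
```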


Understanding the Data

Throughput by Issue Type

The report provides separate throughput metrics for each issue type:

  • Stories completed / week: Feature work velocity

  • Tasks completed / week: General task velocity

  • Bugs completed / week: Bug fix velocity

  • Sub-tasks completed / week: Granular work item velocity

Why this matters: Teams often have very different throughput for different work types. A team might complete 8 tasks/week but only 2 stories/week because stories are larger. Breaking down by type gives you more accurate forecasts.

What Counts as "Active"?

A contributor is considered active on a day if they:

  • Are currently employed

  • Had commit activity within the last 30 days

  • Are not marked as out-of-office

  • Are not excluded from contributor lists in settings

This ensures throughput reflects actual available capacity, not headcount.
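
The four criteria can be sketched as a simple predicate (field names are illustrative, not Span's data model):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Contributor:
    employed: bool     # currently employed
    last_commit: date  # most recent commit activity
    ooo: bool          # marked out-of-office for the day
    excluded: bool     # excluded from contributor lists in settings

def is_active(c: Contributor, on: date) -> bool:
    """A contributor counts as active on a day if all four criteria above hold."""
    recent_commit = (on - c.last_commit) <= timedelta(days=30)
    return c.employed and recent_commit and not c.ooo and not c.excluded
```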


Related Metrics

Use these complementary metrics for deeper insights:

  • Issue Completion Rate: Percentage of all issues that are Done. Throughput = speed; Completion Rate = progress. High throughput with low completion rate means you're creating issues faster than finishing them.

  • Estimate Throughput: Story points completed per week. An alternative to count-based throughput when issues have size estimates; better for heterogeneous work.

  • Issue Cycle Time: Average time each issue takes to complete. Inversely related: faster cycle time → higher throughput. Use the two together to understand whether speed comes from process efficiency or from capacity.

  • Time in To Do: How long issues wait before work starts. High time in To Do with low throughput = prioritization or planning bottleneck.

  • Time in In Progress: How long active work takes. High time in In Progress with low throughput = execution or technical bottleneck.

  • Total Issues Done: Raw count of completed issues. Shows absolute volume (not normalized); useful for organizational reporting.


Throughput vs. Completion Rate: What's the Difference?

Many users ask about the difference between these two metrics. Here's a clear comparison:

  • Team A (8 issues/week, 60% completion): Fast velocity but creating issues faster than completing → growing backlog

  • Team B (3 issues/week, 90% completion): Slower velocity but nearly everything gets done → healthy flow

  • Team C (10 issues/week, 85% completion): Fast velocity AND most work completes → high-performing team

  • Team D (2 issues/week, 40% completion): Slow velocity AND low completion → serious workflow problems
Use both together:

  • Throughput tells you about capacity and speed

  • Completion Rate tells you about workflow health and focus


Interpreting Your Results

Healthy Patterns

Stable or increasing throughput combined with:

  • Consistent velocity week-over-week (low variance)

  • Reasonable cycle times

  • High completion rate (>70%)

Indicates: Predictable delivery, healthy processes, sustainable pace

Warning Signs

Declining throughput may indicate:

  • Process bottlenecks: Check Time in To Do and Time in In Progress to find where issues stall

  • Increasing complexity: If cycle time is also rising, work may be getting harder

  • Team capacity issues: Check if team size decreased or OOO increased

  • Technical debt accumulation: Slowing down due to code quality issues

Very high throughput may indicate:

  • Closing issues without proper workflow tracking

  • Issues that are too small or trivial (inflated velocity)

  • Cherry-picking easy issues while avoiding complex work

  • Not capturing all work in the issue tracker

Highly variable throughput (erratic week-to-week) may indicate:

  • Inconsistent prioritization

  • External interruptions or context switching

  • Sprint planning issues

  • Measurement problems (issues not updated promptly)


Best Practices

For Individual Contributors

  1. Update issues promptly: Move issues to "Done" when complete so throughput reflects reality

  2. Break down large issues: Consistently sized issues make throughput more predictable

  3. Track all work: Don't leave work untracked—it skews throughput and planning

For Team Leads

  1. Establish a baseline: Track throughput for 4-8 weeks before using it for planning

  2. Account for variance: Use average throughput with a confidence buffer (e.g., 80% of average)

  3. Monitor trends: Focus on 4-week rolling averages to smooth out weekly noise

  4. Investigate changes: If throughput drops >20% for 2+ weeks, investigate immediately

  5. Compare appropriately: Only compare teams doing similar work with similar issue sizing
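
Practices 2 and 3 above (a confidence buffer over the average, and 4-week rolling averages) can be sketched as follows (illustrative helpers, not a Span API):

```python
def rolling_average(weekly: list[float], window: int = 4) -> list[float]:
    """Trailing-window average of weekly throughput; smooths week-to-week noise."""
    averages = []
    for i in range(len(weekly)):
        span = weekly[max(0, i - window + 1): i + 1]  # up to `window` trailing weeks
        averages.append(sum(span) / len(span))
    return averages

def sprint_commitment(weekly: list[float], buffer: float = 0.8) -> float:
    """Plan at a fraction of average throughput (e.g., 80%) to absorb variance."""
    return buffer * sum(weekly) / len(weekly)
```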

For Engineering Managers

  1. Benchmark regularly: Review percentile rankings quarterly to understand relative performance

  2. Track type-specific metrics: Use bug throughput separately from story throughput

  3. Correlate with quality: High throughput means nothing if quality suffers—check bug rates and incident metrics

  4. Don't weaponize metrics: Throughput measures team capacity, not individual performance; don't use it to grade individuals in performance reviews

  5. Adjust for context: New teams, major refactors, and learning curves affect throughput temporarily

For Leadership

  1. Use for portfolio planning: Aggregate team throughput to understand organizational capacity

  2. Identify staffing needs: Persistent low throughput relative to demand signals need for hiring

  3. Measure transformation impact: Track throughput before/after major process or tool changes

  4. Set realistic goals: Share throughput data with stakeholders to align expectations on delivery timelines


Common Questions

Q: Is higher throughput always better?
A: Not necessarily. Very high throughput might indicate trivial issues or poor quality. Balance throughput with cycle time, completion rate, and quality metrics.

Q: How does throughput differ from velocity?
A: Throughput is a continuous measurement across any time period; velocity typically refers to story points completed per sprint in Scrum contexts. Throughput is count-based; velocity is often size-weighted.

Q: What if my team doesn't use story points?
A: That's fine. Issue throughput is count-based and doesn't require estimates. If you do use story points, also review Estimate Throughput for size-weighted velocity.

Q: Why is my throughput different from my team's perception?
A: Common causes: (1) Not all work is tracked in issues, (2) Issues aren't updated promptly, (3) Issue size varies significantly (some weeks have bigger issues), (4) OOO normalization adjusts perceived counts.

Q: Should I set throughput targets for teams?
A: Use with caution. Targets can drive gaming behavior (closing trivial issues, avoiding complex work). Better: track trends and investigate significant changes.

Q: How often should I check this metric?
A: Weekly for team leads (to catch issues early), monthly for managers (to track trends), quarterly for leadership (strategic planning).

Q: What throughput should my team aim for?
A: This varies enormously by work type, team size, issue granularity, and industry. Use Span's percentile benchmarks to see peer comparisons, then establish your own baseline and track improvements.


Troubleshooting

"My throughput seems too low"

  • Check: Are all completed issues being marked as Done?

  • Check: Are issues sized consistently? Very large issues naturally reduce throughput.

  • Check: Are inactive contributors or unrecorded OOO days inflating the active-days denominator?

  • Action: Review cycle time to see if issues are taking too long vs. not being updated

"My throughput varies wildly week to week"

  • Check: Are team members updating issues consistently?

  • Check: Do you have irregular sprint patterns or frequent interruptions?

  • Action: Look at 4-week rolling averages to smooth volatility

"Throughput doesn't match team perception"

  • Check: Is all work tracked in issues (or is some work invisible)?

  • Check: Are OOO days properly recorded in your system?

  • Action: Survey the team about untracked work and improve issue hygiene

"Throughput dropped suddenly"

  • Check: Did team size decrease or OOO increase?

  • Check: Did work complexity increase (check cycle time)?

  • Check: Are there new bottlenecks (check Time in To Do / In Progress)?

  • Action: Investigate immediately—sustained drops signal real problems


Next Steps

  1. Establish your baseline: Track throughput for 4-8 weeks to understand your team's typical velocity

  2. Review related metrics: Check Issue Cycle Time and Completion Rate alongside throughput

  3. Set up monitoring: Create a dashboard or regular cadence to review throughput trends

  4. Use for planning: Start incorporating throughput into sprint planning and delivery forecasts

  5. Iterate and improve: Use insights from throughput analysis to identify and address bottlenecks


Need Help?

If you have questions about interpreting your Issue Throughput metrics or want guidance on improving team velocity, reach out to your Span customer success manager.