PR Reviews / Week Report

Last updated: January 28, 2026

Overview

The PR Reviews / Week metric measures how many code reviews each active developer completes per week. It tracks review participation and velocity across your team, automatically accounting for out-of-office time to ensure fair comparisons.

What Does This Metric Measure?

PR Reviews / Week counts the number of completed review submissions per active contributor, normalized to a weekly rate.

Key characteristics:

  • Counts individual review submissions (not unique PRs or individual comments)

  • Displayed to one decimal place (e.g., 3.5 reviews/week)

  • Automatically excludes out-of-office days for fair comparisons

  • Only applies to active developers (those with VCS activity in the last 30 days)

  • Excludes self-reviews

Important: What This Metric Counts

This metric measures review submissions — each time a developer submits a completed review. This is different from:

  • Unique PRs Reviewed / Week: Counts distinct pull requests reviewed (available as a separate metric)

  • Comments Authored / Week: Counts individual review comments (also available separately)

A single PR might receive multiple review submissions from the same person (e.g., initial review, re-review after changes), and each submission is counted separately.
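
To make the distinction concrete, here is a minimal Python sketch that tallies the same week of activity all three ways. The records and field layout are hypothetical, purely for illustration:

    # Hypothetical review activity for one developer in one week.
    # Each entry is (pr_number, comments_in_submission).
    reviews = [
        (101, 3),  # initial review of PR 101
        (101, 1),  # re-review of PR 101 after changes
        (102, 0),  # approval of PR 102 with no comments
    ]

    review_submissions = len(reviews)            # 3 -> PR Reviews / Week
    unique_prs = len({pr for pr, _ in reviews})  # 2 -> Unique PRs Reviewed / Week
    comments = sum(n for _, n in reviews)        # 4 -> Comments Authored / Week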

How It's Calculated

The metric uses this formula:

(Total Completed Reviews ÷ Total Active Coding Days) × 7

Example:

  • Developer completes 7 reviews over 5 active workdays

  • Calculation: (7 ÷ 5) × 7 = 9.8 reviews/week (reproduced in the code sketch below)

Components:

  • Numerator: Count of completed review submissions by active contributors

  • Denominator: Count of active coding days (days within 30 days of VCS activity, excluding OOO)

  • Weekly multiplier: Result is scaled to a 7-day week for easy interpretation
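
A minimal sketch of the calculation in Python (the function name and rounding behavior are ours; only the formula itself comes from this page):

    def reviews_per_week(completed_reviews: int, active_coding_days: int) -> float:
        # (Total Completed Reviews / Total Active Coding Days) x 7,
        # rounded to one decimal place as the report displays it.
        return round(completed_reviews / active_coding_days * 7, 1)

    print(reviews_per_week(7, 5))  # 9.8, matching the example above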

What Counts as a Review?

A review is counted when:

  • It's a completed review submission (it has a submission timestamp)

  • The reviewer is an active developer

  • It's NOT a self-review (authors can't review their own PRs)

  • The reviewer is identified in the system

  • The PR comes from a tracked repository and branch

Reviews count regardless of status type (Approved, Changes Requested, Commented, etc.). The sketch below expresses these rules as a single filter.
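
A hedged sketch in Python, assuming a simple record type whose field names are illustrative rather than the product's actual schema:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Review:
        reviewer: Optional[str]      # None if the reviewer could not be identified
        pr_author: str
        submitted_at: Optional[str]  # None if the review was never formally submitted
        repo_tracked: bool           # PR comes from a tracked repository and branch
        reviewer_active: bool        # VCS activity in the last 30 days

    def counts_as_review(r: Review) -> bool:
        # Review status (Approved, Changes Requested, Commented) is
        # deliberately absent: all statuses count.
        return (
            r.submitted_at is not None     # completed submission with a timestamp
            and r.reviewer is not None     # reviewer identified in the system
            and r.reviewer_active          # reviewer is an active developer
            and r.reviewer != r.pr_author  # not a self-review
            and r.repo_tracked             # tracked repository and branch
        )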

Where to Find This Report

Access the PR Reviews / Week metric in multiple locations:

  1. Code Review Dashboard - Primary location with other review metrics

  2. Team Overview Pages - Shows team-level review participation

  3. Individual Developer Pages - Displays personal review activity

  4. Metrics Dashboard - Available for custom dashboard configurations

  5. Comparative Views - Benchmarked against organizational percentiles

What Insights Can You Gain?

1. Review Workload Distribution

Identify how review work is shared across the team:

  • High reviewers (>8/week): Team members carrying significant review load

  • Moderate reviewers (3-8/week): Balanced review participation

  • Low reviewers (<3/week): May need to increase review participation, or may reflect a non-coding focus

Note: Healthy ranges vary by team size, PR complexity, and role.

2. Team Collaboration Patterns

Understand your team's code review culture:

  • Evenly distributed reviews: Healthy, collaborative team with shared ownership

  • Concentrated reviews: A few people doing most of the reviews, which can create bottlenecks

  • Increasing trend: Growing code review engagement and knowledge sharing

  • Decreasing trend: May indicate review fatigue or shifting priorities

3. Individual Contribution Beyond Authoring

Recognize developers who contribute through code review:

  • High reviews with moderate PRs: Strong reviewer, knowledge sharer, mentor

  • Low reviews with high PRs: Heavy authoring focus; may create review bottlenecks for others

  • Balanced both: Well-rounded contributor to team velocity

4. Code Review Bottlenecks

Identify potential review capacity issues:

  • Compare team's review rate with PR creation rate

  • Low reviews relative to PRs opened may indicate review capacity constraints (see the sketch after this list)

  • Spikes in reviews might correlate with PR backlog clearing
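
One rough way to quantify the review-rate vs. PR-creation comparison, in Python (the threshold in the comments is illustrative, not a product rule):

    def reviews_per_pr_opened(reviews_completed: int, prs_opened: int) -> float:
        # Reviews completed per PR opened over the same period. Most PRs need
        # at least one review to merge, so a ratio well below ~1 suggests
        # review capacity is not keeping up with PR creation.
        return reviews_completed / prs_opened

    ratio = reviews_per_pr_opened(reviews_completed=30, prs_opened=40)
    print(f"{ratio:.2f} reviews per PR opened")  # 0.75 -> possible bottleneck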

5. Onboarding & Mentorship Activity

Track mentorship patterns:

  • High review rates among senior developers often indicate active mentoring

  • A new hire whose review activity increases is gaining familiarity with the codebase

  • Low reviews from experienced developers may indicate they're stretched thin

6. Fair Performance Assessment

Because the metric excludes OOO time (illustrated in the sketch after this list):

  • Compare developers fairly regardless of vacation schedules

  • Account for part-time or flexible schedules

  • Normalize for holidays and company-wide time off

  • Focus on actual available working time
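
A small worked example (numbers made up): two developers with identical output per active day land on the same weekly rate, regardless of time off.

    def weekly_rate(reviews: int, working_days: int, ooo_days: int) -> float:
        active_days = working_days - ooo_days  # OOO days never enter the denominator
        return round(reviews / active_days * 7, 1)

    print(weekly_rate(reviews=10, working_days=10, ooo_days=0))  # 7.0
    print(weekly_rate(reviews=5, working_days=10, ooo_days=5))   # 7.0: vacation doesn't penalize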

7. Team Velocity & Quality Balance

Combine with other metrics to assess health (a sketch follows the list below):

  • High reviews + Low review response time = Efficient review process

  • High reviews + High review cycles = Thorough but potentially slow process

  • Low reviews + High PR cycle time = Review bottleneck identified
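
These combinations can be expressed as simple heuristics. The thresholds and labels below are entirely ours and should be calibrated to your own team's norms:

    def review_health(reviews_per_week: float,
                      response_time_hours: float,
                      pr_cycle_time_days: float) -> str:
        high_reviews = reviews_per_week >= 6      # illustrative cutoffs only
        fast_response = response_time_hours <= 4
        slow_cycle = pr_cycle_time_days >= 5

        if high_reviews and fast_response:
            return "efficient review process"
        if not high_reviews and slow_cycle:
            return "possible review bottleneck"
        return "mixed signals: check more metrics"

    print(review_health(8.0, 2.0, 3.0))  # efficient review process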

Best Practices for Using This Metric

Do:

  • Use alongside related metrics: Combine with Unique PRs Reviewed, Review Response Time, and PR Cycle Time

  • Consider team size: Smaller teams naturally have higher per-person review rates

  • Account for PR complexity: Infrastructure PRs may require fewer, more thorough reviews

  • Look for trends: Changes over time are more meaningful than single values

  • Recognize review contributions: High reviewers are valuable team members enabling others

  • Set team-specific expectations: Different teams and projects have different natural rhythms

Don't:

  • Use as sole performance metric: Review quality matters more than quantity

  • Compare across different teams: Team dynamics, PR sizes, and complexity vary

  • Ignore context: Low reviews might indicate deep architectural work or incident response

  • Create rigid quotas: Forced reviews can lead to rubber-stamping

  • Punish low values without investigation: Understand the root cause first

  • Forget about review quality: More reviews ≠ better reviews

Understanding Benchmark Ranges

Typical ranges you might observe (context-dependent):

  • >10 reviews/week: Very high review activity, potential review leader or mentor

  • 6-10 reviews/week: Strong, consistent review participation

  • 3-6 reviews/week: Solid review contribution alongside other work

  • 1-3 reviews/week: Light review participation, may need encouragement

  • <1 review/week: Minimal review activity, investigate if intentional

Note: These ranges are illustrative. Your organization's healthy ranges depend on team size, PR frequency, PR complexity, and role distribution.
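
If you want to bucket developers against these illustrative ranges in your own tooling, a one-function sketch (boundary handling at the range edges is our choice):

    def benchmark_band(reviews_per_week: float) -> str:
        # Bands mirror the illustrative ranges above; adjust to your org's norms.
        if reviews_per_week > 10:
            return "very high"
        if reviews_per_week >= 6:
            return "strong"
        if reviews_per_week >= 3:
            return "solid"
        if reviews_per_week >= 1:
            return "light"
        return "minimal"

    print(benchmark_band(3.5))  # solid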

Related Metrics

Get a complete picture by combining PR Reviews / Week with:

  • Unique PRs Reviewed / Week: Shows how many distinct PRs you review, as opposed to repeated reviews of the same PRs

  • Comments Authored / Week: Indicates review thoroughness and engagement depth

  • Review Response Time: Measures how quickly you respond to review requests

  • PR Cycle Time: Shows if reviews are bottlenecking delivery

  • PRs Merged / Week: Compare review activity to authoring activity

  • PR Review Cycles: Indicates whether reviews are effective on the first pass

Interpreting Common Patterns

Pattern: High Reviews + Low Unique PRs Reviewed

What it means: Reviewer is doing multiple rounds on the same PRs
Possible causes: Thorough review process, authors needing guidance, or lengthy back-and-forth
Consider: Review Response Time, PR Review Cycles

Pattern: High Reviews + Low Comments

What it means: Quick approvals without detailed feedback
Possible causes: Rubber-stamping, simple PRs, or async feedback channels
Consider: Review quality, PR complexity metrics

Pattern: Sudden Drop in Reviews

What it means: Change in team dynamics or priorities
Possible causes: Project phase shift, increased meeting load, team changes, or focus on own PRs
Action: Investigate with team lead

Pattern: Uneven Distribution Across Team

What it means: Review load not shared equally
Possible causes: Knowledge silos, informal review assignments, or varying role expectations
Action: Consider review rotation, knowledge sharing initiatives

Frequently Asked Questions

Q: Is 5 reviews/week good or bad?
A: It depends on your team context. For a small team (3-5 people) with frequent small PRs, this might be low. For a larger team with complex PRs, this could be healthy. Compare against your organization's percentile benchmarks and team norms.

Q: Should senior developers have higher review rates?
A: Not necessarily. Senior developers might do fewer, more complex reviews (infrastructure, architecture) while junior developers do more frequent feature reviews. Review rate alone doesn't indicate seniority or value.

Q: How does this relate to PR authoring?
A: Both are important contributions. The healthiest teams have developers who both author and review. Compare an individual's PR Reviews/Week with their PRs Merged/Week to understand their contribution balance.

Q: Someone has very high reviews but low PR authoring. Is this a problem?
A: Not inherently. This person might be a tech lead, architect, or mentor whose primary value is guiding others. However, ensure they maintain coding skills and don't become a review bottleneck.

Q: How can we increase team review rates?
A: First, understand why they're low: Review fatigue? Too many large PRs? Lack of context? Solutions might include: smaller PRs, clearer PR descriptions, review rotation, dedicated review time blocks, or recognizing review contributions.

Q: Does this metric count draft PR reviews?
A: Only completed review submissions with timestamps are counted. Early feedback on drafts may not appear unless formally submitted as a review.

Q: How does OOO time affect this metric?
A: OOO days are excluded from the calculation, so vacation, holidays, and sick time won't penalize the metric. This ensures fair comparisons across developers with different schedules.

Q: A developer has many reviews on one PR. Do they all count?
A: Yes. Each review submission counts separately. If you want to see how many different PRs someone reviewed, check the "Unique PRs Reviewed / Week" metric instead.


Tips for Healthy Code Review Culture

Use this metric to support, not dictate, healthy practices:

  1. Recognize reviewers: Celebrate high-quality review contributions publicly

  2. Rotate reviews: Prevent knowledge silos and reviewer burnout

  3. Set response expectations: Define team norms for review response time

  4. Keep PRs small: Smaller PRs are easier to review, increasing review throughput

  5. Block review time: Schedule dedicated time for reviews to prevent deprioritization

  6. Provide context: Well-described PRs get faster, better reviews

  7. Balance the load: Monitor for uneven distribution and adjust processes


Need Help?

If you have questions about interpreting your PR Reviews / Week data or want to discuss patterns you're seeing, please reach out to your Span Customer Success team.