Comments Received / PR Report

Last updated: February 4, 2026

Overview

The Comments Received / PR report measures code review intensity by showing the average number of review comments developers receive on their merged pull requests. It is a key indicator of how much feedback and scrutiny code undergoes during the review process.

What This Report Shows

Primary Metric:

  • Comments Received / PR - Displays as a single decimal number (e.g., 3.2 comments per PR)

  • Shows the average number of review comments each merged PR receives

  • Includes percentile benchmarking showing where a developer ranks within the organization

Underlying Data:

  • Total PR Comments Received

  • Total Merged PRs

  • Calculated as: Total Comments Received ÷ Total Merged PRs (see the sketch below)

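For illustration, a minimal sketch of this calculation in Python, assuming a simple list of PR records — the merged and comment_count fields are illustrative placeholders, not the product's actual data model:

    def comments_received_per_pr(prs):
        """Average review comments received across merged PRs."""
        merged = [pr for pr in prs if pr["merged"]]  # only merged PRs count
        if not merged:
            return 0.0
        total = sum(pr["comment_count"] for pr in merged)
        return total / len(merged)

    # e.g. three merged PRs with 4, 2, and 3 comments -> 3.0 comments per PR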

How to Access This Report

  1. Navigate to Insights > Productivity > Quality

  2. In the Code Review tab, find the "Comments Received / PR" metric card

How the Metric Is Calculated

What Counts as a Comment:

  • Structured code review comments

  • Inline comments on code changes

  • Comments from all reviewers (subject to the exclusions below)

  • Discussion threads on PRs

What's Excluded:

  • Comments by the PR author on their own PR

  • Comments from ignored users

  • Comments during reviewer out-of-office periods

  • Comments on unmerged PRs (only merged PRs are included)

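A hedged sketch of how these inclusion and exclusion rules might be applied, assuming each comment record carries an author field and that an ignored-user list is available; out-of-office filtering is omitted here because it would require reviewer availability data:

    def countable_comments(pr, ignored_users):
        """Return the comments on a PR that count toward the metric."""
        if not pr["merged"]:
            return []  # unmerged PRs are excluded entirely
        return [
            c for c in pr["comments"]
            if c["author"] != pr["author"]        # the author's own comments don't count
            and c["author"] not in ignored_users  # nor comments from ignored users
        ]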

Key Insights from This Metric

Code Quality & Scrutiny

Higher Values (More Comments):

  • Thorough code review process

  • Complex code changes requiring discussion

  • Strong learning and improvement culture

  • May indicate training opportunities if significantly above team norms

Lower Values (Fewer Comments):

  • Well-established coding patterns

  • Simpler, straightforward changes

  • Efficient review with high developer trust

  • Potentially insufficient review rigor (context-dependent)

Process Health Indicators

  • High comments + slow cycle times → Review delays or bottlenecks

  • Extreme outliers → Investigate code complexity, experience level, or process issues

  • Team variance → Inconsistent review standards or varying code complexity


Interpreting the Numbers

Value ranges and how to read them:

  • 0.5 - 1.5: Low comment rate - straightforward changes or lightweight review

  • 2.0 - 4.0: Moderate rate - healthy, active feedback process

  • 4.5 - 7.0: High rate - substantial feedback, complexity, or rigorous standards

  • 7.0+: Very high rate - significant discussion, learning curves, or complexity
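
An illustrative helper mapping a rate to the bands above; the cut-offs come from the list, but closing the gaps between ranges (e.g. 1.5 - 2.0) at the next band's lower bound is a convenience choice of this sketch:

    def interpret_rate(rate):
        """Map a comments-per-PR value to the bands listed above."""
        if rate >= 7.0:
            return "very high"
        if rate >= 4.5:
            return "high"
        if rate >= 2.0:
            return "moderate"
        if rate >= 0.5:
            return "low"
        return "below the documented ranges"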

Percentile Rankings:

  • 75th-90th percentile: Receiving more comments than most peers

  • 10th-25th percentile: Receiving fewer comments than peers
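
A minimal sketch of percentile ranking, assuming you already have every developer's comments-per-PR value across the organization (this is an illustration, not the product's exact method):

    def percentile_rank(value, org_values):
        """Percentage of org values at or below this developer's value."""
        at_or_below = sum(1 for v in org_values if v <= value)
        return 100.0 * at_or_below / len(org_values)

    # e.g. percentile_rank(5.1, rates) == 88.0 means the developer receives
    # more comments per PR than roughly 88% of peers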

Trend Analysis:

  • Increasing trend: More scrutiny or increasing complexity

  • Decreasing trend: Improving quality or increasing efficiency

  • Stable baseline: Consistent review standards

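One illustrative way to flag these trends, assuming an ordered list of weekly comments-per-PR values; the window size and 10% thresholds are arbitrary choices for this sketch, not the product's method:

    def classify_trend(weekly_rates, window=4):
        """Compare the latest window's average against the one before it."""
        if len(weekly_rates) < 2 * window:
            return "insufficient data"
        recent = sum(weekly_rates[-window:]) / window
        prior = sum(weekly_rates[-2 * window:-window]) / window
        if recent > prior * 1.10:
            return "increasing"
        if recent < prior * 0.90:
            return "decreasing"
        return "stable"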

Related Metrics to Review Together

Review Collaboration:

  • Comments Authored / Week (feedback you're giving)

  • PR Review Cycles (back-and-forth iterations)

  • PRs Merged Without Approval %

Delivery & Quality:

  • PRs Merged / Week (volume)

  • PR Revert Rate (defect detection)

  • Total PR Comments Received (absolute volume)

Timing Metrics:

  • Time to Review

  • Total PR Cycle Time

  • Time to Merge

Content Analysis:

  • Code Review Themes (Functionality, Structure, Style)


When to Investigate

Watch for anomalies during:

  • Changes to critical or security-sensitive systems (expect increase)

  • New team member onboarding (temporary increase)

  • New review tools or standards introduction

  • Significant PR size changes

  • High-velocity shipping periods


Quick Reference by Role

  • Individual Developer: Compare the feedback you receive to team norms

  • Team Lead: Monitor review consistency and identify bottlenecks

  • Engineering Manager: Assess review culture health vs. cycle time impact

  • Organization: Track review rigor against quality and delivery goals


Best Practices

 DO: Interpret this metric alongside cycle time and quality metrics
 DO: Consider developer experience levels when assessing values
 DO: Look for trends over time rather than point-in-time values
 DO: Use percentile rankings for meaningful peer comparisons

 DON'T: Judge quality by this metric alone
 DON'T: Assume higher is always better (or lower is always worse)
 DON'T: Compare across teams with different complexity levels