Comments Authored / Week

Last updated: January 7, 2026

Overview

The Comments Authored / Week report measures the average number of PR comments a developer authors per week, normalized by their active working days. This metric provides insights into code review participation, collaboration depth, and team engagement in the review process.

What Does This Metric Measure?

Definition

Comments Authored / Week counts the average number of comments a contributor writes on pull requests per week, excluding comments on their own PRs.

Key Characteristics:

  • Comments on others' PRs only: Self-comments on your own PRs are excluded

  • Per active contributor: Normalized per person for fair comparison

  • Adjusted for time off: Accounts for out-of-office days

  • Weekly basis: Results expressed as comments per week

What It Tells You

This metric indicates:

  • Review engagement depth — How actively someone participates in code review discussions

  • Collaboration culture — Level of feedback and knowledge sharing

  • Mentorship activity — Senior engineers providing guidance through reviews

  • Review thoroughness — More comments may indicate deeper review engagement

What It Doesn't Tell You

This metric does NOT measure:

  • Quality of comments — Insightful vs. superficial feedback

  • Comment length or depth — One-word vs. paragraph explanations

  • Comment type — Critical issues vs. style nits vs. questions

  • Review outcomes — Whether reviews led to approvals or changes

  • Non-written collaboration — Pair programming, verbal feedback, chat discussions

How It's Calculated

Formula

Comments Authored / Week = (Total PR Comments Authored / Total Active Coding Days) × 7

Components Breakdown

1. Numerator: Total PR Comments Authored

  • Counts all comments authored by the contributor on PRs

  • Excludes: Comments on PRs they themselves created

  • Includes: All comment types (suggestions, questions, approvals with comments)

  • Only counts comments made on active days

2. Denominator: Total Active Coding Days

  • A person is "active" on a day if they:

    • Had VCS activity in the last 30 days

    • Were NOT marked as out of office

    • Were actively employed

  • Only these days count in the calculation

3. Weekly Scaling (× 7)

  • Converts daily average to weekly metric

Example Calculation

Scenario:

  • Person authored 40 comments over 2 weeks

  • They worked 12 days (took 2 days OOO)

Calculation:

(40 comments / 12 active days) × 7 = 23.3 comments per week
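
A minimal sketch of this calculation in Python (the function name and inputs are illustrative, not Span's internal implementation):

    def comments_authored_per_week(total_comments: int, active_days: int) -> float:
        """Average daily comment count, scaled to a 7-day week."""
        if active_days == 0:
            return 0.0  # no active days: avoid division by zero
        return (total_comments / active_days) * 7

    # Reproduces the example above: 40 comments over 12 active days.
    print(round(comments_authored_per_week(40, 12), 1))  # 23.3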

What Counts as a Comment?

Included:

  • Review comments (line-level feedback)

  • General PR comments

  • Approval messages with text

  • Change request descriptions

  • Follow-up discussion comments

Excluded:

  • Comments on your own PRs

  • Comments on days you were OOO

  • Comments during periods of inactivity (no commits in last 30 days)
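
The inclusion rules above can be expressed as a small filter. This is a sketch only: the comment record shape (PR author, comment date) and the day sets are assumptions for illustration, not Span's data model.

    from datetime import date

    def count_eligible_comments(comments, person_id, ooo_days, inactive_days):
        """Count comments that pass the inclusion rules described above."""
        total = 0
        for pr_author, day in comments:  # (PR author's id, date the comment was made)
            if pr_author == person_id:
                continue  # comment on the person's own PR: excluded
            if day in ooo_days:
                continue  # authored on an out-of-office day: excluded
            if day in inactive_days:
                continue  # authored during a period of inactivity: excluded
            total += 1
        return total

    # Example: two eligible comments; the one on the person's own PR is excluded.
    comments = [("alice", date(2026, 1, 5)), ("bob", date(2026, 1, 5)), ("me", date(2026, 1, 6))]
    print(count_eligible_comments(comments, "me", ooo_days=set(), inactive_days=set()))  # 2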

Accessing the Report

Navigation Path

Primary Route:

  1. Login to Span

  2. Under "Insights" in the main navigation, click "Productivity"

  3. Select "Code Review" tab

  4. Find "Comments Authored / Week" metric

Alternative Access:

  • Team Pages: View each team member's comment activity

  • Individual Pages: See person-specific review engagement

  • Metric Explorer: Search for "comments authored"

Available Filters

Time Filters

  • Last 7 days

  • Last 2 weeks

  • Last 4 weeks (common default)

  • Last 3 months

  • Last 6 months

  • Last 12 months

  • Custom date range

  • Year-to-date

Person & Team Dimensions

  • Person: View a specific individual's comment activity

  • Team/Group: Aggregate by organizational unit

  • Job Title/Role: Compare across roles

  • IC Level: Compare by career level

  • Job Family: Compare across functions (Engineering, QA, etc.)

  • Location: Geographic analysis

  • Tenure: New hires vs. experienced members

  • Active/Inactive: Include or exclude inactive contributors

Technical Dimensions

  • Repository: Filter to specific codebases

  • Repository Group: Analyze by repository collections

  • Integration Platform: Separate GitHub and GitLab data

Combined Filtering

Stack filters for targeted analysis:

  • Example: "Show Senior Engineers on Backend team, last 3 months"
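
If you export the underlying data, stacked filters reduce to boolean masks. A sketch with pandas; the file name and column names (ic_level, team, week, person, comments_per_week) are assumptions for illustration:

    import pandas as pd

    # Hypothetical per-person weekly export; column names are illustrative.
    df = pd.read_csv("comments_authored.csv", parse_dates=["week"])

    cutoff = pd.Timestamp.today() - pd.DateOffset(months=3)
    subset = df[
        (df["ic_level"] == "Senior")
        & (df["team"] == "Backend")
        & (df["week"] >= cutoff)
    ]
    print(subset.groupby("person")["comments_per_week"].mean())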

Interpreting the Data

Benchmark Ranges

  • 10+ (Very High): Likely a senior reviewer, tech lead, or active mentor

  • 5-10 (Strong): Active code review participation; engaged reviewer

  • 3-5 (Healthy): Moderate participation; contributor engages in reviews

  • 1-3 (Light): Limited participation; may be new, specialized, or focused on authoring

  • < 1 (Minimal): Very light participation; investigate whether this is appropriate for the role
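
The bands above map directly to thresholds, so they are easy to apply mechanically. A minimal helper (names are illustrative):

    def assess(comments_per_week: float) -> str:
        """Map a weekly value to the benchmark bands listed above."""
        if comments_per_week >= 10:
            return "Very High"
        if comments_per_week >= 5:
            return "Strong"
        if comments_per_week >= 3:
            return "Healthy"
        if comments_per_week >= 1:
            return "Light"
        return "Minimal"

    print(assess(23.3))  # Very High (the worked example earlier)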

Percentile Interpretation

  • Top 10% (90+): Exceptional review engagement; likely a senior mentor or tech lead

  • 60-90%: Above-average participation; strong reviewer

  • 40-60%: Average/typical for the team

  • 20-40%: Below average; may be ramping up or in a specialized role

  • Bottom 20% (<20): Limited engagement; investigate against role expectations
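
A percentile rank is simply the share of the team at or below a given value. A sketch with NumPy; the team values are made up for illustration:

    import numpy as np

    team = np.array([1.5, 2.0, 3.5, 4.0, 5.5, 6.0, 8.0, 9.5, 12.0, 15.0])
    value = 6.0  # one contributor's comments/week

    rank = (team <= value).sum() / len(team) * 100
    print(f"{rank:.0f}th percentile")  # 60th percentile: average/typical for this team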

Context-Based Expectations

Senior Engineers / Tech Leads

  • Expected: 8-15+ comments/week

  • Reason: Mentorship, architectural guidance, gatekeeping responsibilities

Mid-Level Engineers

  • Expected: 4-8 comments/week

  • Reason: Balance between reviewing and implementing; peer reviews

Junior Engineers

  • Expected: 2-5 comments/week

  • Reason: Learning phase; gradually increasing review participation

Specialists / Domain Experts

  • Expected: 3-6 comments/week

  • Reason: Focused reviews on specific areas; fewer but deeper reviews

New Hires (First 3 Months)

  • Expected: 0-3 comments/week

  • Reason: Ramping up; learning codebase before reviewing

Pattern Recognition

Positive Signals:

  • Gradual increase for new hires (shows engagement growth)

  • Consistent values over time (stable contribution pattern)

  • High values from senior engineers (mentorship happening)

  • Balanced distribution across team (shared review responsibility)

Warning Signals:

  • Sharp decrease (role change, disengagement, burnout)

  • Extremely high values (potential review bottleneck, overload)

  • Very low values for senior roles (lack of mentorship)

  • Unbalanced distribution (few people carrying review load)
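
The unbalanced-distribution signal above can be checked mechanically. A sketch using the 3x-team-average rule of thumb from the Best Practices section later on this page; the function name and sample data are illustrative:

    from statistics import mean

    def find_bottlenecks(weekly_by_person, factor=3.0):
        """Flag contributors whose rate exceeds `factor` times the team average."""
        avg = mean(weekly_by_person.values())
        return [person for person, value in weekly_by_person.items() if value > factor * avg]

    print(find_bottlenecks({"ana": 30.0, "ben": 3.0, "carla": 2.5, "drew": 2.0}))
    # ['ana']: one contributor far above the rest of the team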

Use Cases

1. Code Review Load Balancing

Scenario: Some reviewers author 20+ comments/week while others author fewer than 3

Analysis:

  • High values may indicate review overload or gatekeeping

  • Low values may indicate under-utilization or disengagement

Action:

  • Redistribute review assignments for balance

  • Ensure junior engineers participate in reviews

  • Prevent senior engineer burnout from excessive review load

2. Onboarding & Ramp-Up Tracking

Scenario: Track new hire's gradual increase in review participation

Expected Progression:

  • Month 1: 0-2 comments/week (learning, observing)

  • Month 2: 2-5 comments/week (starting to review)

  • Month 3: 4-7 comments/week (actively reviewing)

  • Month 4+: 5-10+ comments/week (fully engaged)

Action: If progression is slower, provide:

  • Code review training

  • Pairing with experienced reviewers

  • Clear review expectations
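
One way to automate this check is to compare each month's value against the expected progression above. A minimal sketch; the ranges are taken from that list and the function name is illustrative:

    # Expected ramp-up ranges by month, (low, high), from the progression above.
    EXPECTED = {1: (0, 2), 2: (2, 5), 3: (4, 7), 4: (5, 10)}

    def ramp_up_status(month: int, comments_per_week: float) -> str:
        low, _high = EXPECTED[min(month, 4)]  # month 4+ uses the month-4 range
        if comments_per_week < low:
            return "below expected: consider review training or pairing"
        return "on track"

    print(ramp_up_status(3, 1.0))  # below expected: consider review training or pairing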

3. Mentorship & Leadership Assessment

Scenario: Evaluate if senior engineers are actively mentoring

Analysis:

  • Senior engineers should have higher comment counts

  • Comments should correlate with team learning outcomes

  • Compare against peer senior engineers

Action:

  • Recognize active mentors

  • Set mentorship expectations for senior roles

  • Investigate low values (disconnected leadership)

4. Team Collaboration Health

Scenario: Team-wide average dropping from 6 to 3 comments/week over 3 months

Possible Causes:

  • Shift to pair programming (less async review)

  • Process changes (fewer review rounds)

  • Team burnout or disengagement

  • Smaller PRs requiring less discussion

Action: Investigate root cause:

  • Survey team about review process

  • Check if correlated with quality issues

  • Validate if intentional process improvement

5. Process Change Impact Analysis

Scenario: Measure impact of new code review policy

Example:

  • Before: Team average 4 comments/week

  • Implement "detailed review checklist"

  • After: Team average 7 comments/week

  • Result: Policy increased review thoroughness
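
The before/after comparison reduces to a percent change, worked through here for the numbers above:

    before, after = 4.0, 7.0  # team averages from the example above
    change = (after - before) / before * 100
    print(f"{change:.0f}% increase in comments/week")  # 75% increase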

6. Performance & Promotion Criteria

Scenario: Assess code review contribution for promotion to senior role

Use as one signal (not sole indicator):

  • Senior role candidates should show sustained high values

  • Combine with:

    • Quality of comments (manual review)

    • Review response time

    • PR approval rate

    • Mentorship feedback

Caution: Never use as sole performance metric

Important Considerations & Limitations

Quality vs. Quantity

This metric measures quantity, NOT quality.

High comment count doesn't always mean:

  • Better reviews

  • More valuable feedback

  • Improved code quality

Examples:

  • 2 insightful architectural comments > 10 superficial style nits

  • Thoughtful questions may require fewer comments

  • Experienced reviewers may be more concise

Recommendation: Combine with qualitative assessment

What's Not Captured

Non-written collaboration:

  • Pair programming sessions

  • Verbal feedback in meetings

  • Chat/Slack discussions about code

  • Design review conversations

  • Informal hallway reviews

Comment characteristics:

  • Comment length or depth

  • Whether comment was addressed

  • Comment sentiment (constructive vs. critical)

  • Comment type (blocking vs. suggestion)

Data Dependencies

Requires:

  • VCS integration (GitHub/GitLab) properly synced

  • Calendar integration for OOO tracking

  • Accurate person attributes and team assignments

  • Active contributor status properly configured

Data Timing:

  • Metrics refresh daily (typically 24-hour lag)

  • Historical data is always complete

  • Real-time data may have slight delay

Context Considerations

Low values may be perfectly appropriate for:

  • New team members (ramping up)

  • Specialists with narrow review scope

  • Contributors focused on critical features

  • Teams using pair programming heavily

  • People on extended focus/implementation cycles

High values may indicate:

  • Senior mentors (positive)

  • Review bottlenecks (concerning)

  • Gatekeepers blocking progress (problematic)

  • Verbose reviewers (potentially inefficient)

Related Metrics for Complete Picture

View these alongside "Comments Authored / Week":

  • Unique PRs Reviewed / Week (breadth of review participation): High comments + low unique PRs = deep reviews on fewer PRs

  • PR Reviews / Week (formal review count): High comments + high reviews = very active reviewer

  • Review Response Time (speed of review): High comments + fast response = engaged, responsive reviewer

  • Review Depth (comments per review): High per-week + low per-review = many shallow reviews

  • PR Cycle Time (overall flow speed): High comments + low cycle time = efficient reviews

  • PR Review Cycles (review iteration count): High comments + high cycles = detailed but iterative reviews

Best Practices

For Individual Contributors

Balanced Participation:

  • Aim for consistent weekly engagement (not feast-or-famine)

  • Quality over quantity: Focus on meaningful feedback

  • Balance reviewing with implementing

Tips for Effective Commenting:

  • Ask questions to understand intent

  • Suggest alternatives, don't just criticize

  • Recognize good code ("LGTM + nice pattern here!")

  • Be specific: Link to docs, explain why

  • Distinguish blocking issues from suggestions

For Team Leads

Monitor for:

  • Balanced distribution across team members

  • Appropriate ramp-up for new hires

  • Senior engineers actively mentoring (high values expected)

  • No single review bottleneck (one person with 3x team average)

Actions:

  • Set role-based review expectations

  • Provide code review training

  • Recognize thoughtful reviewers

  • Rotate review assignments

  • Create safe environment for junior reviews

For Engineering Managers

Use for:

  • Workload balancing discussions

  • Mentorship effectiveness assessment

  • Team collaboration health checks

  • Onboarding progress tracking

Avoid:

  • Using as sole performance metric

  • Setting arbitrary quotas without context

  • Comparing across vastly different roles

  • Punishing low values without understanding context

Troubleshooting

If Metric Is Unexpectedly Low

Check:

  • ✓ Contributor is not marked OOO

  • ✓ Person has commits in last 30 days (required for "active")

  • ✓ VCS integration is synced properly

  • ✓ Person is included in contributor list (not excluded in settings)

  • ✓ Comments aren't all on own PRs (excluded by design)

If Metric Shows Zero

Possible Causes:

  • No review comments in the period (only observing)

  • All comments were on own PRs (excluded by design)

  • Doesn't meet "active contributor" criteria

  • Recently joined (insufficient data)

If Values Seem Inflated

Verify:

  • Not counting duplicate comments (should be handled)

  • Comments on own PRs are excluded (should be)

  • Bot accounts are filtered out

  • Comment types are appropriately categorized

Configuration Options

Excluding Contributors

Navigate to: Settings → Contributors

Common exclusions:

  • Managers (recommended for IC-focused metrics)

  • Contractors with limited review scope

  • Automated accounts or bots

  • Non-engineering roles

Time Period Selection

Recommended defaults:

  • Recent performance: Last 4 weeks

  • Trend analysis: Last 3-6 months

  • Historical baseline: Last 12 months

  • Quarterly reviews: Quarter-over-quarter comparison

Quick Reference Checklist

When analyzing Comments Authored / Week:

  • What's the team average? Individual range?

  • Are values appropriate for each person's role/level?

  • Is comment activity balanced across the team?

  • Are new hires progressively increasing participation?

  • Are senior engineers showing mentorship through comments?

  • Are any reviewers overloaded (potential bottleneck)?

  • How does this correlate with review quality/outcomes?

  • What's the trend over time (improving or declining)?

  • Does this align with the team's code review culture goals?

Summary

The Comments Authored / Week report provides visibility into code review engagement and collaboration depth across your team. Use it to ensure balanced review participation, track mentorship activity, identify onboarding progress, and maintain a healthy code review culture. Remember: this metric measures engagement volume, not quality—always combine with qualitative assessment and related metrics for the complete picture.

Healthy Target: 5-10 comments/week for mid-level engineers; 8-15+ for senior/lead roles; 2-5 for junior engineers (context-dependent).