Understanding differences between AI Impact Scorecard and AI Tool Adoption reports
Last updated: January 29, 2026
You may notice different results when comparing the AI Impact Scorecard and AI Tool Adoption reports in your dashboard. This is expected behavior due to fundamental differences in how these reports collect and analyze data.
Key Differences Between the Reports
AI Impact Scorecard (Code Detector-Based)
Analyzes the actual code within pull requests to determine whether they are AI-assisted
Applies qualification criteria to filter PRs before analysis
Classifies each PR as AI-assisted or not based on concrete, code-level signals (see the sketch after this list)
Provides more precise measurements of AI impact on specific PRs
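The product's actual detector and qualification criteria are not public, so treat the following minimal Python sketch as a mental model of "filter first, then analyze the code" only. Every name and threshold in it (SUPPORTED_LANGUAGES, MIN_LINES_CHANGED, detector_says_ai_assisted, the fake "generated" signal) is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    pr_id: int
    language: str       # primary language of the changed files
    lines_changed: int
    diff: str           # the PR's code changes

# Hypothetical qualification criteria; the real filters are internal to the product.
SUPPORTED_LANGUAGES = {"python", "typescript", "java"}
MIN_LINES_CHANGED = 10

def qualifies(pr: PullRequest) -> bool:
    """Qualification happens BEFORE analysis: unqualified PRs are never scored."""
    return pr.language in SUPPORTED_LANGUAGES and pr.lines_changed >= MIN_LINES_CHANGED

def detector_says_ai_assisted(diff: str) -> bool:
    """Stand-in for the real code detector, which analyzes the code itself.
    The string check here is a fake signal so the sketch runs end to end."""
    return "generated" in diff.lower()

def impact_classification(prs: list[PullRequest]) -> dict[int, bool]:
    # Filter first, then classify each surviving PR as AI-assisted or not.
    # Filtering is one reason this report covers fewer PRs than Tool Adoption.
    return {pr.pr_id: detector_says_ai_assisted(pr.diff) for pr in prs if qualifies(pr)}
```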
AI Tool Adoption Report (Usage-Based)
Based on actual usage data from AI tool APIs such as Cursor, Copilot, Claude, and Codex
Classifies a PR as AI-assisted if there was active AI tool usage in the week before the PR was created
Links tool usage to PRs with a looser, time-based heuristic rather than code-level evidence (see the sketch after this list)
Includes PRs in all programming languages, with no additional qualification filters
Shows directional changes by focusing on tool utilization rates rather than code analysis
Includes smaller PRs with shorter lifecycles that may be filtered out in other reports
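Again as a rough mental model only, here is a minimal sketch of the week-prior usage check, assuming the check is scoped to the PR's author (the article does not spell that out). All names and data below are hypothetical.

```python
from datetime import datetime, timedelta

# Toy inputs: timestamps of each developer's active AI tool usage, as reported
# by the tools' usage APIs. Entirely hypothetical data.
ai_usage_events = {
    "alice": [datetime(2026, 1, 20, 14, 0)],
    "bob": [],
}

def is_ai_assisted(author: str, pr_created_at: datetime) -> bool:
    """A PR counts as AI-assisted if its author had any active AI tool usage
    in the week before the PR was created. No code is inspected and no
    qualification filter is applied."""
    window_start = pr_created_at - timedelta(days=7)
    return any(window_start <= event <= pr_created_at
               for event in ai_usage_events.get(author, []))

print(is_ai_assisted("alice", datetime(2026, 1, 23)))  # True: usage 3 days prior
print(is_ai_assisted("bob", datetime(2026, 1, 23)))    # False: no recorded usage
```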
Why Results May Differ Across Reports
The reports may appear to reach different conclusions because:
They use completely different data sources and methodologies
The Tool Adoption report includes a broader range of PRs without qualification criteria
The AI Impact Scorecard applies stricter filters and focuses on code-level analysis (the toy example below makes the gap concrete)
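To see how the two methodologies can diverge on the very same PRs, here is a toy, entirely hypothetical dataset scored both ways. The threshold and percentages are illustrative only, not product output.

```python
# Toy illustration: the same four PRs scored by both methodologies can
# yield very different AI-assisted rates.
prs = [
    # (lines_changed, author_used_ai_tool_last_week, detector_flags_code)
    (3,   True,  False),   # tiny PR: excluded by Impact filters, counted by Adoption
    (250, True,  True),    # large AI-assisted PR: counted by both
    (120, False, False),   # large hand-written PR: counted by neither
    (5,   True,  False),   # another tiny PR: excluded by Impact, counted by Adoption
]

MIN_LINES = 10  # hypothetical qualification threshold

qualified = [p for p in prs if p[0] >= MIN_LINES]
impact_rate = sum(p[2] for p in qualified) / len(qualified)
adoption_rate = sum(p[1] for p in prs) / len(prs)

print(f"Impact Scorecard: {impact_rate:.0%} of qualified PRs AI-assisted")  # 50%
print(f"Tool Adoption:    {adoption_rate:.0%} of all PRs AI-assisted")      # 75%
```

Neither number is wrong; they answer different questions over different PR populations.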
How to Interpret the Results
Both reports provide valuable but different perspectives:
Use the AI Impact Scorecard when you need precise, code-based analysis of AI impact on qualified PRs
Use the AI Tool Adoption report when you want to understand broader directional trends in AI tool usage and its correlation with development patterns
Rather than viewing these as contradictory, consider them complementary views that together provide a more complete picture of AI adoption and impact in your development workflow.