AI Tool Adoption Report
Last updated: February 9, 2026
Overview
The AI Tool Adoption page helps you understand how AI coding assistants are being adopted and used across your engineering organization. Track adoption trends, measure the impact of AI-generated code, and identify opportunities to maximize the value of your AI investments.
Location: Navigate to AI Transformation → Adoption in the main sidebar.
What This Report Measures
The AI Tool Adoption report provides comprehensive insights into:
Adoption rate: Percentage of developers actively using AI coding assistants
Usage frequency: How often developers use AI tools in their daily work
Code volume: Amount of AI-generated code being accepted and merged
Tool preferences: Which AI coding assistants your team uses most
Adoption trends: Whether AI usage is growing, plateauing, or declining
Team patterns: How adoption varies across different teams and individuals
This report helps you answer critical questions about your AI tool investment: Are developers using the tools? How much impact are they having? Where should you focus adoption efforts?
Report Structure
The AI Tool Adoption report is organized into two main sections:
Adoption
Tracks who's using AI tools and how frequently:
Active user counts (weekly and monthly)
Adoption percentages by team and individual
Usage frequency patterns
Tool-specific adoption rates
What Counts as "Active Use":
Developer generated AI suggestions
Developer accepted AI-generated code
Developer was actively coding (not on vacation/OOO)
Key Filters Applied:
✅ Excludes developers marked as out-of-office
✅ Requires evidence of coding activity during the period
✅ Only includes classified coding contributors
Impact
Shows the effect of AI tool usage on code delivery:
Volume of AI-generated code accepted
Percentage of codebase from AI
Correlation with productivity metrics
Impact summary with key highlights
Supported AI Coding Tools
Span tracks usage data for these AI coding assistants:
Code Generation Tools
GitHub Copilot (Business and Enterprise plans)
Cursor (Pro and Team plans)
Claude Code (Team and Max plans)
Augment Code
Codex (CLI and Cloud)
Gemini CLI
Devin
Async Copilot
Codegen
Core Metrics Explained
1. Active Users (Weekly/Monthly)
Definition: Number of developers who actively used an AI tool within the selected time window.
Measurement Windows:
Weekly Active Users: Used an AI tool within the last 7 days
Monthly Active Users: Used an AI tool within the last 30 days
Display Formats:
Raw count: "15 developers used AI tools"
Percentage: "42% of active developers used AI tools"
What counts as "active use": the same criteria listed in the Adoption section above (generating AI suggestions, accepting AI-generated code, and actively coding rather than on vacation/OOO).
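To make the two windows concrete, here is a minimal Python sketch of how weekly and monthly active users could be derived from per-developer usage events. The event records and field names are hypothetical illustrations, not Span's actual data model:

```python
from datetime import date, timedelta

# Hypothetical usage events: (developer_id, calendar day the AI tool was used).
events = [
    ("dev-1", date(2026, 2, 3)),
    ("dev-1", date(2026, 2, 6)),
    ("dev-2", date(2026, 1, 20)),
    ("dev-3", date(2026, 2, 8)),
]

def active_users(events, as_of, window_days):
    """Count distinct developers with any AI usage in the trailing window."""
    cutoff = as_of - timedelta(days=window_days)
    return len({dev for dev, day in events if cutoff < day <= as_of})

as_of = date(2026, 2, 9)
print("Weekly active users:", active_users(events, as_of, 7))    # dev-1, dev-3 -> 2
print("Monthly active users:", active_users(events, as_of, 30))  # all three -> 3
```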
Use this to: Track adoption penetration across your organization.
2. Adoption Rate
Definition: Percentage of your active developer population using AI tools.
Formula: (Developers using AI ÷ Total Active Developers) × 100%
Example:
30 active developers
18 used AI tools this month
Adoption Rate: 60%
Benchmarking:
< 25%: Early adoption phase
25-50%: Growing adoption
50-75%: Majority adoption
> 75%: Mature adoption
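A minimal sketch of the formula and benchmark bands above (thresholds copied from this page; the function names are illustrative):

```python
def adoption_rate(ai_users: int, active_developers: int) -> float:
    """(Developers using AI / Total Active Developers) * 100."""
    return 100.0 * ai_users / active_developers

def adoption_phase(rate: float) -> str:
    """Map a rate onto the benchmarking bands above."""
    if rate < 25:
        return "Early adoption phase"
    if rate < 50:
        return "Growing adoption"
    if rate < 75:
        return "Majority adoption"
    return "Mature adoption"

rate = adoption_rate(ai_users=18, active_developers=30)
print(f"{rate:.0f}% -> {adoption_phase(rate)}")  # 60% -> Majority adoption
```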
Use this to: Set adoption goals and track progress toward them.
3. AI Days Per Week
Definition: Average number of days per week that a developer uses AI coding tools.
Formula: Total AI Usage Days ÷ Active Weeks
Interpretation:
0-1 days/week: Occasional use
2-3 days/week: Regular use
4-5 days/week: Heavy, integrated use
Example:
Developer A: 4.5 AI days/week (using AI most days)
Developer B: 1.2 AI days/week (occasional use)
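Applying the formula to the two developers above, as a quick illustrative sketch:

```python
def ai_days_per_week(total_ai_usage_days: int, active_weeks: int) -> float:
    """Total AI Usage Days / Active Weeks, per the formula above."""
    return total_ai_usage_days / active_weeks

# Developer A: 18 AI days over 4 active weeks -> heavy, integrated use
print(ai_days_per_week(18, 4))  # 4.5
# Developer B: 6 AI days over 5 active weeks -> occasional use
print(ai_days_per_week(6, 5))   # 1.2
```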
Use this to: Understand depth of adoption, not just breadth.
4. Accepted Lines of AI Code
Definition: Total volume of AI-generated code that developers accepted into their work.
What's included:
Code suggestions accepted from AI tools
Multi-line completions
Whole function generations
Code across all branches (not just merged code)
What's excluded:
Rejected AI suggestions
Manually written code
Generated code from non-AI tools
Important caveat: This metric includes code accepted on development branches that may not be merged yet, so the total can exceed the volume of AI code that actually lands in your main branch.
Use this to: Measure overall AI code generation volume.
5. AI Code Percentage
Definition: What percentage of your total merged codebase comes from AI coding assistants.
Formula: (AI-Accepted Lines ÷ Total Merged Lines) × 100%
Example:
Total merged lines: 50,000
AI-accepted lines: 8,000
AI Code Percentage: 16%
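The same calculation as a one-function Python sketch (illustrative only):

```python
def ai_code_percentage(ai_accepted_lines: int, total_merged_lines: int) -> float:
    """(AI-Accepted Lines / Total Merged Lines) * 100.

    Note: the numerator may include lines accepted on branches that are
    refactored before merge, which can inflate the result (see note below).
    """
    return 100.0 * ai_accepted_lines / total_merged_lines

print(ai_code_percentage(8_000, 50_000))  # 16.0, matching the example above
```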
Typical ranges:
< 10%: Limited AI impact
10-25%: Moderate AI contribution
25-40%: Significant AI acceleration
> 40%: Heavy AI reliance
Note: This can overstate AI's actual share of merged code, because the numerator counts lines accepted in work branches, some of which are refactored or dropped before merge.
Use this to: Understand AI's contribution to your codebase.
6. Total Active AI Days
Definition: Cumulative count, summed across all developers, of the distinct calendar days on which each developer used an AI tool.
Calculation: Counts unique calendar days with AI usage activity.
Filters applied:
Only developers (not other roles)
Only during active coding periods
Excludes OOO/vacation days
Use this to: Identify your most consistent AI tool users.
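A small sketch of the counting rule, assuming usage records have already been filtered per the list above (the record shape is hypothetical):

```python
from datetime import date

# Hypothetical (developer_id, calendar_day) usage records, pre-filtered to
# developers during active, non-OOO coding periods.
usage = [
    ("dev-1", date(2026, 2, 2)),
    ("dev-1", date(2026, 2, 2)),  # same developer, same day: counted once
    ("dev-1", date(2026, 2, 3)),
    ("dev-2", date(2026, 2, 3)),
]

# Distinct (developer, day) pairs, summed across developers.
total_active_ai_days = len(set(usage))
print(total_active_ai_days)  # 3
```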
How Data is Collected
Tool Integration Method
Span gathers AI adoption data through direct API integrations with each AI tool provider:
Daily Sync: Span connects to each tool's API daily
Usage Tracking: Logs which developers used which tools
Code Volume: Captures lines of AI-generated code accepted
Activity Filtering: Cross-references with developer active status
Data Aggregation: Combines data across all connected tools
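Conceptually, the pipeline looks something like the sketch below. This is only an illustration of the five-step flow described above, not Span's actual implementation; every class, client, and method name here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    developer_id: str
    tool: str            # e.g. "github-copilot", "cursor"
    day: str             # ISO date of usage
    accepted_lines: int

def daily_sync(tool_clients, is_active_developer):
    """One pass of the flow above: pull, filter, aggregate."""
    records = []
    for client in tool_clients:                  # 1. daily sync per tool API
        records.extend(client.fetch_usage())     # 2. usage + 3. code volume
    records = [r for r in records
               if is_active_developer(r.developer_id)]  # 4. activity filter
    totals = {}                                  # 5. aggregate across tools
    for r in records:
        totals[r.developer_id] = totals.get(r.developer_id, 0) + r.accepted_lines
    return totals

class FakeClient:
    """Stand-in for a real tool API client, for illustration only."""
    def fetch_usage(self):
        return [UsageRecord("dev-1", "github-copilot", "2026-02-09", 120)]

print(daily_sync([FakeClient()], lambda dev: True))  # {'dev-1': 120}
```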
Key Use Cases
1. Track AI Investment ROI
Measure whether your AI tool licenses are being utilized effectively.
Example: "We're paying for 50 GitHub Copilot licenses. Only 32 developers (64%) are actively using them—opportunity to improve adoption or optimize licensing."
2. Executive Reporting & Board Updates
Demonstrate AI adoption progress to leadership and stakeholders.
Example: "AI tool adoption grew from 35% in Q1 to 68% in Q2. AI-generated code now represents 22% of our merged codebase, accelerating feature delivery."
3. Team Benchmarking
Compare AI adoption across teams to identify leaders and laggards.
Example: "Team A has 85% adoption while Team B has only 30%. Let's learn what Team A is doing right and replicate it."
4. Identify Champions & Support Needs
Find developers who are AI power users (to champion adoption) and those who need support.
Example: "Sarah uses AI 4.8 days/week—let's have her share best practices. John hasn't used AI in 6 weeks—let's check if he needs training or has blockers."
5. Tool Selection & Optimization
Compare usage patterns across different AI tools to inform purchasing decisions.
Example: "GitHub Copilot has 70% adoption while Cursor has 15%. Should we consolidate on one tool or maintain both?"
6. Measure Adoption Growth
Track whether adoption is accelerating, plateauing, or declining over time.
Example: "Adoption growth has plateaued at 60% for 3 months. Time to launch a training initiative to reach the remaining 40%."
7. Correlate with Productivity
Analyze whether AI adoption correlates with changes in delivery metrics.
Example: "Teams with >70% AI adoption show 18% higher PR merge volume and 12% faster cycle times than teams with <30% adoption."
8. Ensure Equitable Access
Verify that all team members have access to and are using AI tools.
Example: "Junior developers show 45% adoption vs. 75% for senior developers. Need to ensure juniors have equal access and training."
How It Relates to Other Metrics
AI Tool Adoption works best when analyzed alongside complementary metrics:
AI-Related Metrics
| Metric | Relationship | Use Together To... |
| --- | --- | --- |
| AI Code Ratio | Independent validation | Compare tool usage vs. detected AI code |
| AI Lines Accepted | Volume measure | Understand both adoption (%) and volume (lines) |
| AI Productivity Impact | Outcome measure | Correlate adoption with productivity changes |
Productivity Metrics
| Metric | Relationship | Use Together To... |
| --- | --- | --- |
| PRs Merged Per Week | Throughput indicator | Measure if AI increases merge volume |
| PR Cycle Time | Speed indicator | Assess if AI reduces time-to-merge |
| Lines of Code | Volume indicator | Compare AI vs. total code volume |
| Story Points Completed | Value indicator | Determine if AI accelerates delivery |
Team Health Metrics
| Metric | Relationship | Use Together To... |
| --- | --- | --- |
| Active Contributors | Denominator for adoption rate | Calculate % of team using AI |
| Developer Satisfaction | Qualitative correlation | Assess if AI tools improve experience |
| Onboarding Time | Efficiency indicator | Check if AI helps new hires ramp faster |
Powerful Analysis Combinations:
AI Adoption Rate + AI Code % + PRs Per Week = complete AI impact picture
Team Adoption Rate + Individual Usage Frequency = identify champions and training opportunities
AI Code Volume + Code Quality Metrics = ensure AI doesn't sacrifice quality
Insights You Can Gain
Adoption Velocity
How fast is adoption growing? Month-over-month % change
Will we reach target adoption? Project timeline to adoption goals
Are we plateauing? Identify when growth stalls
What drives adoption spikes? Correlate with training or events
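A small sketch of the velocity questions above: month-over-month change and a naive linear projection toward a target rate. The monthly series is assumed to be already computed, and the projection is illustrative only (real adoption curves flatten as they approach saturation):

```python
monthly_adoption = [22.0, 30.0, 37.0, 45.0]  # % adoption, oldest to newest

# Month-over-month change in percentage points
mom_change = [b - a for a, b in zip(monthly_adoption, monthly_adoption[1:])]
print(mom_change)  # [8.0, 7.0, 8.0]

# Naive projection: months until a 75% target at the latest growth rate
target, current, rate = 75.0, monthly_adoption[-1], mom_change[-1]
months_to_target = (target - current) / rate if rate > 0 else float("inf")
print(f"~{months_to_target:.1f} months to {target:.0f}% at current pace")  # ~3.8
```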
Penetration Analysis
What % of developers use AI? Current adoption rate
Who are the non-adopters? Identify teams/individuals not using tools
Is adoption evenly distributed? Compare across teams and levels
Are there access issues? Ensure all developers have licenses
Usage Depth
Casual vs. heavy users? AI days/week distribution
Integrated into daily workflow? High days/week indicates deep integration
Tool stickiness? Are users coming back regularly?
Usage patterns? When and how often do developers use AI?
Tool Preferences
Which tools are most popular? Usage by tool type
Are paid tools worth it? Compare usage vs. cost
Should we consolidate tools? Identify underutilized tools
Tool switching patterns? Are developers trying multiple tools?
Volume Impact
How much code comes from AI? AI code percentage
Is AI volume growing? Trend in AI-accepted lines
Volume vs. adoption alignment? High adoption with low volume may indicate issues
Productivity correlation? Do high AI volumes correlate with higher output?
Organizational Patterns
Which teams lead adoption? Team-by-team comparison
Level-based patterns? Junior vs. senior adoption rates
Geographic differences? Adoption by location
Role-based usage? Frontend vs. backend developer adoption
Common Scenarios & Interpretations
Scenario 1: High Adoption but Low AI Code Volume
What you see: 70% adoption rate but only 8% AI code
Possible causes:
Developers are trying tools but not accepting many suggestions
Heavy editing of AI suggestions before merge
Using AI for exploration/learning, not production code
AI suggestions not meeting quality expectations
Detection model not identifying AI patterns
Actions:
Survey developers about AI tool quality and usefulness
Provide training on effective AI prompt engineering
Review which types of work AI helps with most
Consider if different tools might perform better
Scenario 2: Growing Adoption but Declining Volume
What you see: More developers using AI but less AI code being merged
Possible causes:
Increased scrutiny/editing of AI suggestions
Shift from simple to complex work (less AI-suitable)
Improved developer judgment about when to use AI
Quality controls reducing low-quality AI code
Initial "novelty effect" wearing off
Interpretation: Not necessarily negative—may indicate mature, thoughtful AI usage.
Scenario 3: Plateaued Adoption
What you see: Adoption stuck at 50-60% for several months
Possible causes:
Early adopters maxed out, harder to reach laggards
Lack of training or support for non-users
License constraints limiting access
Some developers skeptical or prefer traditional methods
Tool limitations for certain work types
Actions:
Launch targeted training for non-adopters
Identify and address specific barriers
Have power users mentor non-users
Share success stories and best practices
Ensure licenses are available to all who want them
Scenario 4: Uneven Adoption Across Teams
What you see: Team A at 90% adoption, Team B at 25%
Possible causes:
Different tech stacks (AI works better for some)
Different work types (features vs. infrastructure)
Leadership support varies by team
Access or licensing limitations
Cultural differences (experimentation vs. conservatism)
Actions:
Learn from high-adoption teams
Provide team-specific training and support
Address tech stack or tooling barriers
Ensure managers encourage adoption
Create cross-team sharing sessions
Scenario 5: High Volume from Few Users
What you see: 20% of developers account for 80% of AI code
Possible causes:
Power users generating lots of boilerplate/repetitive code
Uneven work distribution (some do AI-suitable tasks)
Skill differences in using AI effectively
Some developers more comfortable with AI
Certain roles/projects more AI-amenable
Actions:
Learn from power users about effective practices
Provide mentorship from heavy to light users
Share specific use cases where AI excels
Recognize power users as champions
Ensure AI-suitable work is distributed fairly
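To spot this concentration pattern in your own data, a quick sketch: compute the share of AI-accepted lines contributed by the top 20% of users. The per-developer volumes here are made up for illustration:

```python
# Hypothetical AI-accepted lines per developer
lines_by_dev = {"a": 5000, "b": 3000, "c": 400, "d": 300,
                "e": 200, "f": 100, "g": 0, "h": 0, "i": 0, "j": 0}

volumes = sorted(lines_by_dev.values(), reverse=True)
top_n = max(1, len(volumes) // 5)             # top 20% of developers
top_share = sum(volumes[:top_n]) / sum(volumes)
print(f"Top 20% of users produce {top_share:.0%} of AI code")  # 89%
```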
Best Practices
1. Set Realistic Adoption Goals
Don't expect 100% adoption immediately:
Typical adoption curve:
Month 1-2: 10-20% (early adopters)
Month 3-6: 30-50% (early majority)
Month 6-12: 60-80% (late majority)
Beyond 12 months: 80-90% (mature adoption)
Healthy target: 70-80% adoption within 12 months of tool availability.
2. Combine Quantitative and Qualitative Data
Metrics tell you "what" but not "why":
✓ Pair adoption metrics with developer surveys
✓ Conduct interviews with high and low adopters
✓ Gather feedback about tool quality and usefulness
✓ Understand barriers to adoption
3. Focus on Meaningful Usage, Not Just Adoption
High adoption with low usage frequency may indicate superficial adoption:
Quality indicators:
AI days/week consistently > 3
AI code volume growing alongside adoption
Positive developer feedback about value
Productivity metrics improving
4. Identify and Empower Champions
Find developers who are AI power users:
Recognize them publicly
Have them share best practices
Create "office hours" or mentoring programs
Document effective AI use patterns
Build a community of practice
5. Address Non-Adopters Thoughtfully
Don't mandate AI usage, but understand barriers:
Common barriers:
Lack of training or confidence
Concerns about code quality
License availability issues
Tool doesn't support their tech stack
Philosophical opposition to AI
Supportive approach:
Provide training and resources
Share success stories
Address specific concerns
Ensure access isn't the issue
Respect informed decisions not to use AI
6. Monitor AI Code Quality
High AI volume doesn't equal high value:
Quality checks:
PR revert rate for AI-heavy PRs
Test coverage in AI-generated code
Review comments on AI code
Bug rates correlated with AI usage
Developer satisfaction with AI quality
7. Track Both Adoption and Impact
Adoption alone doesn't guarantee value:
Balanced scorecard:
✓ Adoption rate (are people using it?)
✓ Usage frequency (how often?)
✓ Code volume (how much?)
✓ Productivity impact (does it help?)
✓ Developer satisfaction (do they like it?)
✓ Code quality (is quality maintained?)
8. Celebrate Milestones
Recognize adoption achievements:
First team to 50% adoption
Organization-wide 25%, 50%, 75% milestones
Individuals who effectively use AI
Teams showing productivity improvements
Innovation in AI usage patterns
9. Continuously Educate
AI tools evolve rapidly—keep training current:
Regular "AI tips" sharing sessions
New feature announcements
Best practice documentation
Use case libraries
Cross-team learning forums
Setting Up AI Tool Adoption Tracking
Requirements
To use this report, ensure you have:
✓ AI coding tool licenses (GitHub Copilot, Cursor, etc.)
✓ AI tool integrations connected in Span
✓ VCS integration active (for code analysis)
✓ Developers assigned to teams in Span
✓ Active contributor classifications configured
Setup Steps
Connect AI Tool Integrations
Navigate to Settings → Integrations
Connect GitHub Copilot, Cursor, or other tools
Authorize Span to access usage data
Verify connection is active and syncing
Verify VCS Integration
Ensure GitHub/GitLab/ADO connected
Confirm PR data syncing properly
Check that code analysis is enabled
Configure Team Structure
Define teams and groups
Assign developers to teams
Set team hierarchies
Establish Baseline
Review current adoption rate
Note which teams/individuals are already using AI
Set realistic adoption goals
Set Up Monitoring
Add AI adoption to leadership dashboards
Schedule regular review cadence (monthly recommended)
Define adoption milestones and targets
Frequently Asked Questions
Q: What's a "good" AI adoption rate?
A: It depends on your goals and timeline:
6 months post-launch: 40-60% is strong
12 months post-launch: 70-80% is excellent
Mature adoption: 80-90% (some developers may choose not to use AI)
Focus on growth trajectory and meaningful usage, not just hitting a number.
Q: Should every developer use AI tools?
A: Not necessarily. Some reasons not to use AI are valid:
Work type doesn't benefit from AI (architecture, design)
Tech stack not well-supported by AI tools
Personal preference for traditional methods
Philosophical concerns about AI-generated code
Aim for broad adoption but respect informed decisions.
Q: Why is AI code percentage higher than I expected?
A: The metric includes code accepted in development branches, which may be heavily refactored before merge. Additionally, developers may use AI as a starting point but significantly edit suggestions.
Q: How do I know if AI is actually helping productivity?
A: Look beyond adoption metrics:
Compare PR merge rates pre/post AI adoption
Check PR cycle time for AI users vs. non-users
Survey developers about perceived value
Monitor story points completed
Track developer satisfaction scores
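One way to run the user vs. non-user comparison, sketched with pandas. The column names are hypothetical; in practice the data would come from your Span exports or VCS:

```python
import pandas as pd

# Hypothetical PR-level data: whether the author uses AI, and cycle time
prs = pd.DataFrame({
    "author_uses_ai": [True, True, True, False, False, False],
    "cycle_time_hours": [20, 26, 18, 30, 42, 36],
})

summary = prs.groupby("author_uses_ai")["cycle_time_hours"].agg(["mean", "count"])
print(summary)

# Interpret cautiously: adoption is not randomly assigned, so any
# difference here is correlational, not causal.
```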
Q: Should I mandate AI tool usage?
A: Generally no. Forced adoption often leads to superficial usage without real value. Instead:
Provide training and resources
Share success stories
Remove barriers to adoption
Create positive incentives
Let value drive organic adoption
Q: Can I track which specific AI tools are most effective?
A: Yes! Use the tool-specific breakdowns to compare:
Adoption rates by tool
Code volume by tool
Usage frequency by tool
Developer preferences
This helps optimize your tool portfolio.
Q: Why do some developers have high AI days/week but low code volume?
A: Possible reasons:
Using AI for research, documentation, or learning
Generating suggestions but not accepting many
Heavy editing before accepting
Using AI chat features vs. code completion
Working on exploratory or architectural tasks
Check the specific use patterns with those developers.
Q: How does AI detection (Telltale) differ from tool usage tracking?
A:
Tool usage: "Developer used Copilot 4 days this week" (API data)
AI detection: "This code looks AI-generated" (ML analysis)
They're complementary—tool usage shows intent, detection shows actual AI code in the codebase.
Q: What if adoption is growing but productivity isn't improving?
A: This suggests adoption quality issues:
Developers may not be using AI effectively
AI suggestions may be low quality for your domain
Team might need better training on effective usage
Work type may not be AI-suitable
Developers may need more time to build AI into their workflows
Survey developers and iterate on training.
Q: Should junior or senior developers use AI more?
A: Both benefit differently:
Juniors: Learn patterns, accelerate common tasks, build confidence
Seniors: Automate boilerplate, focus on architecture, increase throughput
Both should be encouraged to adopt based on their needs.
Need Help?
For additional support with the AI Tool Adoption report:
Visit the Span Help Center
Contact your Customer Success Manager
Email support@span.app
This documentation reflects Span's platform capabilities as of the current version. Features and calculations are subject to updates.