Understand how your engineering team is adopting and engaging with AI copilots across your stack.
Overview
The AI Adoption dashboard provides a clear, data-driven view of how engineers are interacting with AI copilots such as GitHub Copilot, Cursor, and Claude. It surfaces adoption metrics, engagement trends, and model performance insights — all in one place.
These insights are not available natively in GitHub today. Optimal AI built a custom layer on top of API data to visualize and store AI usage metrics over time, enabling teams to measure the real impact of AI tools in their development workflow.
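As a rough illustration of what such a layer can look like, the sketch below pulls daily metrics from GitHub's Copilot metrics endpoint and persists each snapshot locally so history accumulates beyond GitHub's own retention window. The organization name, token, and SQLite schema are placeholders, not Optimal AI's actual pipeline.

```python
import json
import sqlite3
from datetime import date

import requests

GITHUB_API = "https://api.github.com"


def fetch_copilot_metrics(org: str, token: str) -> list[dict]:
    """Pull daily Copilot metrics for an organization from the GitHub API."""
    resp = requests.get(
        f"{GITHUB_API}/orgs/{org}/copilot/metrics",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
            "X-GitHub-Api-Version": "2022-11-28",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # one entry per day; GitHub only retains a limited history


def store_snapshot(rows: list[dict], db_path: str = "ai_adoption.db") -> None:
    """Persist each day's raw payload so trends can be tracked over time."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS copilot_metrics (day TEXT PRIMARY KEY, payload TEXT)"
        )
        for row in rows:
            conn.execute(
                "INSERT OR REPLACE INTO copilot_metrics (day, payload) VALUES (?, ?)",
                (row.get("date", str(date.today())), json.dumps(row)),
            )


if __name__ == "__main__":
    # "your-org" and the token are placeholders for illustration only.
    store_snapshot(fetch_copilot_metrics("your-org", "ghp_..."))
```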
Summary Metrics
At the top of the dashboard, you’ll find key adoption metrics:
- Overall Code Acceptance Rate: Percentage of Copilot or Cursor suggestions accepted by your team.
- Average Chat Interactions per Day: Number of back-and-forth AI chat sessions per day (not just inline auto-suggestions).
- Top-Performing Model by Acceptance: The AI model with the highest code acceptance rate (e.g., Claude-3.7).
- Highest Acceptance by Language: The programming language with the most accepted AI-generated code (e.g., Python).
- Average Daily Engagement Rate: The share of active users who engage with copilots across editors or the CLI.
These metrics provide a quick snapshot of how well AI assistants are being integrated into daily engineering work.
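As an illustration of how these headline numbers could be derived from stored daily records, here is a minimal sketch; the record fields (`suggestions`, `acceptances`, `chat_interactions`, `active_users`, `engaged_users`) are assumed names for this example, not the dashboard's actual schema.

```python
from statistics import mean


def summarize(days: list[dict]) -> dict:
    """Roll daily records up into the dashboard's headline numbers.

    Each record is assumed to look like:
    {"suggestions": int, "acceptances": int, "chat_interactions": int,
     "active_users": int, "engaged_users": int}
    """
    total_suggested = sum(d["suggestions"] for d in days)
    total_accepted = sum(d["acceptances"] for d in days)
    return {
        "acceptance_rate": total_accepted / total_suggested if total_suggested else 0.0,
        "avg_chat_interactions_per_day": mean(d["chat_interactions"] for d in days),
        "avg_daily_engagement_rate": mean(
            d["engaged_users"] / d["active_users"] for d in days if d["active_users"]
        ),
    }
```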
Example dashboard:
Copilot User Engagement
The Copilot User Engagement chart tracks both active and engaged copilot users over time.
- Active Users: Developers who have access to Copilot or AI tooling.
- Engaged Users: Developers who actively use the AI assistant during their coding sessions.
This data helps quantify adoption health and engagement depth.
In the example below, on April 23, there were 50 active users and 43 engaged users — an engagement rate of 86%.
The dashboard automatically averages engagement across the selected date range and supports a 28-day historical backfill once connected to GitHub.
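A minimal sketch of the engagement-rate arithmetic, using the April 23 figures above; the averaging mirrors how the dashboard averages across a selected date range, but the function names are illustrative only.

```python
def engagement_rate(active_users: int, engaged_users: int) -> float:
    """Share of active (licensed) users who actually used the assistant that day."""
    return engaged_users / active_users if active_users else 0.0


# The April 23 example from above: 43 engaged out of 50 active users -> 86%.
assert round(engagement_rate(50, 43), 2) == 0.86


def average_engagement(days: list[tuple[int, int]]) -> float:
    """Average the per-day rate across the selected date range."""
    rates = [engagement_rate(active, engaged) for active, engaged in days]
    return sum(rates) / len(rates) if rates else 0.0
```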
Code Generation Efficiency
The Code Generation Efficiency panel visualizes total lines of code suggested vs. accepted, allowing you to assess the quality and usefulness of AI-generated code.
You can filter and compare results by:
- Language (e.g., Go, Java, JavaScript, Python)
- Model (e.g., Claude-3.7, GPT-4, Gemini-1.5)
- Editor (e.g., VS Code, Visual Studio)
Hovering over the bars reveals acceptances and suggestions per category, so you can easily compare performance across tools and technologies.
For example, the data may show that Go and JavaScript lead in code acceptances while Python suggestions remain under review.
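For intuition, here is a sketch of how suggested vs. accepted lines might be rolled up by language, model, or editor; the record keys (`lines_suggested`, `lines_accepted`, and the dimension fields) are assumptions for this example, not the dashboard's schema.

```python
from collections import defaultdict


def efficiency_by(records: list[dict], dimension: str) -> dict[str, dict[str, int]]:
    """Total lines suggested vs. accepted, grouped by a chosen dimension.

    `dimension` is "language", "model", or "editor"; each record is assumed to
    carry those keys plus "lines_suggested" and "lines_accepted".
    """
    totals: dict[str, dict[str, int]] = defaultdict(lambda: {"suggested": 0, "accepted": 0})
    for r in records:
        bucket = totals[r[dimension]]
        bucket["suggested"] += r["lines_suggested"]
        bucket["accepted"] += r["lines_accepted"]
    return dict(totals)


# e.g. efficiency_by(records, "language") might yield
# {"Go": {"suggested": 1200, "accepted": 540}, "Python": {...}, ...}
```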
Filtering and Time Range
The AI Adoption dashboard dynamically updates over any 14-day window you select. Use the date picker to track trends over time or analyze shorter bursts of copilot activity. You can also compare AI adoption across models, editors, or teams to identify where AI is having the biggest impact.
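Conceptually, selecting a window amounts to slicing the stored daily records to the chosen dates before recomputing the metrics above; the sketch below assumes each record carries an ISO `date` field.

```python
from datetime import date, timedelta


def window(days: list[dict], end: date, length: int = 14) -> list[dict]:
    """Keep only records inside the selected window (default: 14 days ending on `end`)."""
    start = end - timedelta(days=length - 1)
    return [d for d in days if start <= date.fromisoformat(d["date"]) <= end]
```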
Why It Matters
By surfacing previously hidden metrics, AI Adoption helps engineering leaders:
- Understand where copilots add value and where they’re under-utilized.
- Compare model performance objectively across languages and editors.
- Measure real AI ROI and inform adoption strategies.
- Encourage consistent, data-driven AI usage across the engineering organization.