
Engineering Productivity Insights

Insights — Full visibility into your engineering workflow, from code to delivery.
By Myra Magpantay
7 articles

⚙️ Engineering Productivity Insights

💡 What is Insights?

Insights is Optimal AI’s productivity analytics layer. It gives engineering leaders a unified, data-driven view of team performance, bottlenecks, and delivery efficiency — directly connected to their GitHub and Jira activity. By surfacing cycle times, code review velocity, allocations, and goals, Insights transforms scattered development data into meaningful metrics that drive better decisions and higher performance.

🚧 The Problem

Engineering productivity has always been hard to quantify. Teams rely on anecdotal updates or manual spreadsheets, and leaders struggle to identify what’s slowing projects down. Traditional tools focus on tasks — not the flow of engineering work. Without proper visibility, it’s difficult to:

- Spot delivery bottlenecks
- Measure review and merge velocity
- Track time in status or cycle times
- Balance team workloads

Insights solves this by creating clarity out of complexity — automatically.

🚀 The Solution

Insights integrates directly with your repositories and project management tools to build a real-time performance dashboard for your engineering organization. It combines DORA metrics, AI-based analysis, and custom productivity views so you can see exactly how your teams are working — and where to improve.

With Insights, you can:

- Track velocity and throughput across teams
- Measure review and merge times
- Identify where PRs or tasks get stuck
- Align engineering goals with company OKRs
- Drive data-backed process improvements

🔑 Key Capabilities

📈 Cycle Time Tracking

Insights visualizes the complete development lifecycle — from first commit to deploy — highlighting delays between coding, review, and deployment. It helps you pinpoint stages where work slows down, whether that’s during PR reviews or QA, and provides recommendations to optimize your delivery pipeline.

🎯 Allocations/Distributions

Understand how your team’s time is distributed. With Allocations and Goals, you can define strategic focus areas (e.g., “60% feature work, 20% tech debt”) and measure actual time spent based on GitHub and Jira activity. This ensures your engineering effort is aligned with business priorities, not lost in reactive work. (A small sketch of this target-versus-actual comparison appears at the end of this article.)

💬 Review Load & Velocity

Review health is a key indicator of productivity. Insights tracks how many PRs are being reviewed, who’s performing reviews, and average time to merge. You can quickly see if certain reviewers are overloaded or if feedback loops are too long — allowing you to rebalance and maintain flow.

🧩 GitHub & Jira Integrations

Insights connects seamlessly with your existing tools — no manual setup required. It continuously syncs data from your repositories and project management systems, transforming raw activity into actionable insights. All integrations are designed with zero data retention and secure API access, ensuring privacy while maintaining visibility.

🤖 AI-Powered Trend Detection

Beyond static charts, Insights uses AI to detect patterns and anomalies — such as sudden dips in engagement, PR spikes, or delivery delays. It surfaces these as proactive notifications, helping leaders make adjustments before problems grow.

Quick Overview:

- PR Cycle Time – Measure how fast code moves from first commit → merge
- Activity – Understand team workload distribution
- PR Tables – Spot large PRs, rework %, failed PRs
- AI Insight – Summarize key trends automatically

Recommended Next Step:

- Getting Started - Trial Setup
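As mentioned above, here is a minimal sketch of the target-versus-actual idea behind Allocations & Goals: compare declared focus areas against the actual share of labeled work. The label names, targets, and counts are hypothetical, not Optimal AI's internal logic.

```python
# Minimal sketch of the Allocations & Goals idea: compare target focus
# areas against the actual share of labeled work. Labels, targets, and
# counts below are hypothetical.

# Target allocation, e.g. "60% feature work, 20% tech debt, 20% bugs"
targets = {"feature": 0.60, "tech debt": 0.20, "bug": 0.20}

# PRs merged in the period, counted per label (e.g., pulled from GitHub)
actual_counts = {"feature": 31, "tech debt": 7, "bug": 14}

total = sum(actual_counts.values())
for label, target in targets.items():
    actual = actual_counts.get(label, 0) / total
    drift = actual - target
    print(f"{label:10s} target {target:5.0%}  actual {actual:5.0%}  drift {drift:+.0%}")
```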

Last updated on Nov 03, 2025

Setting Up Jira Integration in Insights

Connecting Jira with Insights allows you to track engineering productivity, including cycle times, issue progress, and delivery metrics, all in one place. This guide walks you through the full setup process.

1. Navigate to Settings

To begin, click the Settings icon in the bottom left of your sidebar.

2. Add Team Members

Inside the Settings page, navigate to the Members tab.

- Click Add a new member to add the engineers whose cycle times and issue data you want to track.
- Once members are added, you’ll be able to link them with their Jira accounts later in the process.

3. Open Jira Integration

Next, go to the Integrations tab and select Jira Integration.

4. Enter Jira Connection Details

Fill in the required fields for the Jira integration:

- Domain Name – Use the domain of your Jira Cloud or Jira Server instance (e.g., https://yourcompany.atlassian.net/).
- Service Account – (Recommended) Create a dedicated service account in Jira with access to the projects you want Insights to analyze. Alternatively, you may use a personal access token linked to your Jira user account.
- API Token – Generate and paste the API token associated with your service account or personal account. (You can sanity-check these values with the quick test at the end of this guide.)

💡 Tip: Using a service account is recommended for reliability and centralized control.

5. Configure Import & Webhooks

Enable the following options for a complete setup:

- Select Import users from Jira → ensures your Jira users are synced into Insights.
- Select Automatically integrate with Jira webhooks → allows Insights to update metrics automatically as activity occurs in Jira.

Alternatively, you can copy and paste the webhook link manually into Jira if you prefer.

6. Save Your Integration

Once all details are entered, click Save.

- The initial connection may take a few minutes while Jira validates the service account or access token.
- Once complete, Insights will begin importing data from your Jira projects.

7. Link Users to Jira

After setup, return to the Members tab to ensure users are correctly mapped to their Jira accounts.

- Click the Jira icon beside each user.
- Link the Insights member profile with the correct Jira user account.

This ensures that productivity and delivery metrics are accurately attributed to each engineer.

✅ Jira Integration Complete

Your Insights dashboard is now connected to Jira. You’ll start seeing data populate in areas like PR Cycle Time, Story Points, and Time in Status, giving you visibility into your team’s delivery flow.
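If you want to verify the domain and API token from step 4 before saving the integration, the sketch below calls Jira Cloud's standard /rest/api/3/myself REST endpoint using basic auth (account email plus API token). The domain, email, and token values are placeholders; this is an optional check, not part of the Insights setup itself.

```python
# Quick sanity check of a Jira Cloud domain + API token before saving the
# integration. Uses Jira's standard REST endpoint; values are placeholders.
import requests  # pip install requests

domain = "https://yourcompany.atlassian.net"   # your Jira Cloud domain
email = "service-account@yourcompany.com"      # account that owns the token
api_token = "<paste-your-api-token>"

resp = requests.get(
    f"{domain}/rest/api/3/myself",
    auth=(email, api_token),                   # Jira Cloud uses email + token
    headers={"Accept": "application/json"},
    timeout=10,
)
if resp.ok:
    print("Token OK, authenticated as:", resp.json().get("displayName"))
else:
    print("Check your domain/token; Jira returned:", resp.status_code)
```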

Last updated on Oct 08, 2025

Distributions

The Distributions view can show meaningful data right away if your team uses GitHub labels consistently. When set up correctly, it breaks down where engineering effort is going: what percentage of work is focused on tech debt, security, new features, or bugs.

To clarify: teams need to manually create these labels (e.g., “tech debt,” “security,” “new feature,” “bug”) within their GitHub repositories. Once PRs or issues are labeled accordingly, the Distributions view will automatically categorize and visualize that data.

Where to find it

Go to Allocations → Distributions in the left sidebar. This page gives you a single view of how your team’s work is distributed over time.

How it works

The Distributions page pulls data from all PRs or issues within your selected date range. It groups them by label or type to calculate how much work falls into each category. Each colored section in the chart represents a different type of work:

- Bugs: fixes and patches
- Tech Debt: refactoring or cleanup
- Security: vulnerability and dependency fixes
- Enhancements: improvements or new functionality

You can hover over each segment to see the exact percentage and number of PRs associated with that label. (A small sketch of this label-share calculation appears at the end of this article.)

Date range

Choose the time period you want to analyze — for example, a sprint or week range. All metrics and charts will refresh automatically as you adjust the dates.

Repository filter

Select one or multiple repositories to narrow your view. This helps you isolate work across different codebases or projects.

User filter

Filter results by a specific team member to see where their work was allocated. This view is helpful for understanding workload balance or individual focus areas.

Toggle chart type

Switch between two visualization styles:

- Donut Chart: shows percentage breakdowns by category
- Bar Chart: shows changes in distribution over time

Use whichever view best fits your reporting needs.

GitHub / Jira toggle

Switch between GitHub and Jira to control the data source.

- GitHub: categorizes work by pull request labels
- Jira: categorizes by issue types

Interpreting the data

Each slice of the chart represents a category of work. For example:

- A large Bugs section means the team spent most of their time fixing issues.
- A higher Tech Debt share could mean time went into refactoring or cleanup.
- A noticeable Security section shows patches, audits, or dependency updates.
- More New Features indicates progress toward roadmap or sprint goals.

These distributions help you see whether the team’s focus matches expectations.

Pull Request Table

When you scroll below the chart, you’ll see a detailed list of the pull requests that make up the data shown in your distribution. This table gives you a deeper look at what kind of work happened, where, and by whom. Each row represents a single PR that was opened or merged during the selected date range. Here’s what each column means:

- Label Name – Shows which category the pull request belongs to, such as Bug, Enhancement, or Release. These labels come directly from GitHub (or issue types in Jira). If a PR has multiple labels, it contributes to multiple categories in your chart.
- Title – Displays the name of the pull request exactly as written in GitHub. This helps you understand the purpose of the change — for example, “Fix grading review comments” or “Update migration script.”
- Repository – Indicates which repository the PR originated from. Useful for teams managing multiple codebases, as it shows where work is concentrated.
- Author – Identifies who opened or merged the pull request. Each entry includes the contributor’s avatar for quick recognition, so you can easily track individual activity.
- PR Size – Shows how many lines of code were added (+) and removed (–) in the pull request. This helps you spot large changes that may need extra review attention or smaller fixes that were merged quickly.

How to interact with this section

- You can sort by any column (e.g., Label Name, Author, or PR Size) to reorder the list and highlight specific trends.
- The search bar lets you quickly find a PR by title or keyword.
- Use the pagination controls at the bottom (Previous / Next) to navigate across pages of results if your timeframe includes many PRs.

Why it matters

This section connects the high-level chart data to the actual engineering work behind it. While the chart shows how much time went into each type of work, this list shows exactly which pull requests drove those numbers.

You can use it to:

- Review work patterns across team members and repositories.
- Identify large PRs that might need a follow-up or review.
- Verify that all PRs are labeled correctly, so your distribution data stays accurate.
- Understand what kind of work dominated the week — bug fixes, refactors, or new features.

Troubleshooting

- Percentages don’t add up neatly → Normal. A single PR can have multiple labels.
- No data showing → Expand your date range or confirm repos are selected.
- Chart not switching → Refresh your browser and check the toggle.
- Missing PR sizes → Make sure your GitHub token has access to read diffs.
- Too many “Unlabeled” items → Check if your team is consistently applying labels or issue types.
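As referenced in the How it works section, here is a minimal sketch of the label-share calculation behind the chart. It is not Insights' internal code; the PRs and labels are made up, and it illustrates why multi-labeled PRs can push the slices past 100%.

```python
# Sketch of a Distributions-style breakdown computed from labeled PRs.
# A PR with multiple labels counts toward each category, which is why
# slices may not sum to exactly 100%. Data below is illustrative.
from collections import Counter

prs = [
    {"title": "Fix grading review comments", "labels": ["bug"]},
    {"title": "Update migration script",     "labels": ["tech debt"]},
    {"title": "Patch dependency CVE",        "labels": ["security", "bug"]},
    {"title": "Add export endpoint",         "labels": ["new feature"]},
]

counts = Counter(label for pr in prs for label in pr["labels"])
total_prs = len(prs)

for label, n in counts.most_common():
    print(f"{label:12s} {n} PR(s)  {n / total_prs:.0%} of PRs")
```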

Last updated on Oct 28, 2025

Activity View | Track code and review workload distribution

The Activity view helps teams understand how PRs and commits are distributed across contributors. Use it to spot bottlenecks, review load imbalance, or uneven code activity.

Where to Find It

You’ll find Activity listed under Velocity, alongside PR Cycle Time and Deployment Frequency.

How it Works

1. Select a Team or Individual

- Click the team selector dropdown to choose a Team (e.g., Engineering Core, Backend) or switch to Individuals to pick specific engineers.
- You can multi-select teammates for side-by-side comparison.

*The visualization updates instantly based on your selection.

2. Adjust the Time Range

- Use the calendar picker (top-right) to select a date range.
- Toggle between Week and Month to zoom in or out on activity trends.
- Use the zoom slider (– / +) to adjust how much data you see on the timeline.

3. Interpret the Visualization

Each bubble represents a specific type of activity:

- ⚪ Commit – Code pushed to the repository
- 🔵 PR Open – New pull request created
- 🟡 PR Review – Pull request reviewed
- 🟢 Merge Commit – Pull request merged into the main branch
- 🔴 Comment – Review comment added on a pull request

Bubble size = amount of activity. Hover over a bubble to see detailed counts (e.g., 6 PRs opened, 7 merged). (See the aggregation sketch at the end of this article for how these counts roll up.)

4. Filter by Activity Type

- At the top of the chart, you can toggle on/off activity categories: Commit, PR Review, PR Open, Merge Commit, and Comment.
- Deselecting a type hides it from the chart — helping you focus on what matters most.

*Turn off “Comments” to focus purely on delivery work (commits and merges).

5. Generate AI Insights

Click AI Insight to get an instant summary of team performance — including busiest contributors, review patterns, and collaboration trends for the selected period.

Pro Tips

- Use filters to isolate specific actions (e.g., only PR reviews).
- Pair Activity with PR Cycle Time for deeper insight into how reviews affect speed.
- Click AI Insight regularly to surface patterns you might overlook manually.
- Month view is ideal for spotting long-term productivity trends or comparing sprint outputs.

Troubleshooting

- No bubbles appearing: Check your date range or filters — you may have deselected all activity types.
- Missing members: Switch from Teams to Individuals and search by name.
- Slow loading: For large teams, narrow the range to a single week for faster rendering.
- Overlapping bubbles: Zoom in using the slider for clearer visualization.
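As noted under step 3, bubble sizes boil down to per-day, per-type event counts. Here is a minimal sketch of that aggregation, using made-up events rather than the product's internal pipeline.

```python
# Sketch of the aggregation behind an activity bubble chart: count events
# per (author, date, type); bubble size is the count. Events are made up.
from collections import Counter

events = [
    ("mira", "2025-10-06", "commit"),
    ("mira", "2025-10-06", "commit"),
    ("mira", "2025-10-06", "pr_open"),
    ("jon",  "2025-10-06", "pr_review"),
    ("jon",  "2025-10-07", "merge_commit"),
]

bubbles = Counter(events)  # (author, date, type) -> count = bubble size
for (author, date, kind), size in sorted(bubbles.items()):
    print(f"{date}  {author:5s} {kind:13s} size={size}")
```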

Last updated on Oct 29, 2025

PR Cycle Time | Measure how fast code moves from first commit to merge

The PR Cycle Time page gives teams a clear view of how long it takes code to move from first commit to merge. It’s the fastest way to identify where review bottlenecks occur and how efficiently work flows through GitHub. Insights automatically calculates each stage — from when coding starts to when a pull request (PR) is merged — so you can understand your team’s delivery velocity and quality patterns.

Where to find it

Go to Velocity → PR Cycle Time in the left sidebar.

How it works

Once you’ve connected GitHub (and optionally Jira), PR Cycle Time begins tracking all merged and open pull requests across repositories. Each PR is broken into measurable stages:

- Time to Open — time between the earliest commit in a branch and when the PR was opened.
- In Review — duration between the PR opening and receiving the first review.
- Time to Merge — time between PR open and PR merge.
- Merged to Staging — optional filter to measure PRs merged into your selected staging branches only.

These metrics combine to calculate your average cycle time, shown in hours or days depending on the scope. (A short sketch of these stage calculations appears after the Filters and Controls section below.)

Filters and Controls

You can refine your data view using several filters:

Teams / Individuals Selector

Use this dropdown to toggle between organization, team, or individual views.

- Teams tab: Aggregates data across all selected teams. Ideal for sprint retros or comparing backend vs frontend performance.
- Individuals tab: Filters metrics to specific engineers for 1-on-1 review or performance insights.
- Multi-select supported: Combine multiple users or teams in a single view.

💡 Pro Tip: Use the Teams view for management reviews, and Individuals to identify workload imbalances or top reviewers.

Repositories

The All Repositories dropdown narrows your metrics to one or more connected GitHub repositories.

- Supports multi-selection for cross-repo tracking.
- Each repo’s PRs, checks, and reviews are automatically merged into the same dataset.
- Great for organizations running microservices or separate frontend/backend repos.

Date Range Picker

Controls which time window the metrics cover.

- Choose quick presets like Last 7 Days, Last 14 Days, Last 30 Days, or Last 90 Days.
- Use the calendar picker to set custom start and end dates.
- All metrics — Time to Open, In Review, Time to Merge — recalculate dynamically for the selected period.

💡 Pro Tip: Use weekly ranges for sprint retros, and monthly or quarterly windows to track long-term team efficiency.

Branch / Staging Filter

The Branch time to merge policy modal defines which branches count as staging or deployment targets. This directly powers the Merged to Staging metric on the PR Cycle Time dashboard.

Purpose: Track how long it takes code to reach key environments (e.g., staging, release, production) across multiple repositories.

How it works:

1. Click the ➕ icon beside Merged to Staging on the dashboard.
2. The Branch time to merge policy modal opens.
3. Under Label, name your policy (e.g., “Staging,” “Production,” or “Feature”).
4. Under Select branch(es), pick one or more GitHub branches to include.
5. Under Select Team(s), optionally scope this to specific teams.
6. Click Save — your label becomes available as a selectable filter in PR Cycle Time.

Once configured, the Merged to Staging card shows cycle-time data only for PRs merged into those defined branches.

💡 Pro Tip: You can create multiple merge policies (for example, staging and main) to compare deployment speeds between environments.
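As referenced above, here is a minimal sketch of the three core stage calculations, derived from a single PR's timestamps. The field names loosely follow GitHub's API and are illustrative, not Optimal AI's internal schema.

```python
# Sketch of the three core PR Cycle Time stages, computed from timestamps.
# Field names loosely follow GitHub's API; this is illustrative only.
from datetime import datetime

def ts(s: str) -> datetime:
    return datetime.fromisoformat(s)

pr = {
    "first_commit_at": ts("2025-10-20T09:15:00"),
    "opened_at":       ts("2025-10-21T14:00:00"),
    "first_review_at": ts("2025-10-22T10:30:00"),
    "merged_at":       ts("2025-10-23T16:45:00"),
}

time_to_open  = pr["opened_at"] - pr["first_commit_at"]   # coding -> PR open
in_review     = pr["first_review_at"] - pr["opened_at"]   # open -> first review
time_to_merge = pr["merged_at"] - pr["opened_at"]         # open -> merge

for name, delta in [("Time to Open", time_to_open),
                    ("In Review", in_review),
                    ("Time to Merge", time_to_merge)]:
    print(f"{name:14s} {delta.total_seconds() / 3600:.1f} h")
```

Averaging these per-PR durations over the selected date range yields the headline cycle-time numbers shown on the dashboard.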
AI Insight

Click AI Insight to open a contextual analysis sidebar powered by Optimal’s LLM agent.

- Provides an auto-generated TL;DR summary of weekly or monthly trends.
- Highlights week-over-week changes in Time to Open, In Review, and Time to Merge.
- Surfaces activity trends, rework rates, PR sizes, and reviewer patterns automatically.
- Ideal for leadership stand-ups and sprint retrospectives.

Search & Sorting Controls (Table View)

Below the main metric cards, the PR Table includes additional filtering and sorting tools:

- Search Pull Requests — quickly locate a specific PR or issue ID.
- Sort by columns: Reworks, Check Failure Rate, PR Size, or Comment Count.
- Quick-highlight buttons:
  - Longest review time – helps spot slow or stuck PRs.
  - Most discussions – surfaces PRs with heavy reviewer activity.
  - Most check failures – flags builds needing attention.

Pro Tips

- Use shorter date ranges (7–14 days) to monitor sprint performance.
- Tag PRs consistently (feature, bugfix, refactor) to correlate cycle time by category in Distributions.
- Combine with Allocations and Activity pages to spot bottlenecks between review and merge.
- Check AI Insight weekly for auto-summarized team health reports.

Troubleshooting

- Missing PRs? Ensure the repository is connected and synced.
- No data for ‘Merged to Staging’? You may need to assign which branches count as staging in settings.
- Unexpectedly low ‘In Review’ times? Automated reviews by Optibot are included; filter them out for human-only review times.
- Check Failure Rate = 0%? Confirm CI checks are enabled in your GitHub workflow.

Last updated on Oct 29, 2025

AI Insights | Automatic summaries of engineering team trends

AI Insights provides instant summaries of trends, bottlenecks, and outliers across your Insights dashboards. It’s available via the purple AI Insight button on most views. Each summary includes real metrics, percentage changes, and practical next steps — helping engineering leads, managers, and CTOs see the story behind the numbers in seconds.

Overview

AI Insights analyzes the data on any Insights dashboard you’re viewing — whether it’s PR Cycle Time, Activity, Contributors, or Distributions — and translates it into natural-language insights. It automatically applies all the filters you’ve already set (date range, repositories, teams, and users), so every summary is contextual and accurate to what you’re looking at.

The panel provides five key sections:

- TL;DR — A short, readable summary of overall trends and performance changes since the last time window.
- Trends — Highlights percentage increases or decreases across your main metrics.
- Notable patterns — Points out specific behaviors such as who’s reviewing most often, when work tends to happen (weekends, late nights), or whether large PRs are becoming more common.
- Actionable insights — Plain-English recommendations on what to investigate or improve next (e.g., “Review backlog is rising — redistribute review load across the team.”).
- Context — Shows your selected date range, repositories, and teams, with metric definitions so you always know what’s being measured.

The goal is to help you:

- Quickly brief executives or stakeholders with a one-paragraph summary
- Identify team bottlenecks or process issues without digging through multiple graphs
- Spot outliers early (e.g., unusually large PRs or skipped reviews)
- Give your tech leads actionable context for their next sprint review

Where to find it

- PR Cycle Time → click ⚡ AI Insight (top-right) to open the right-rail analysis.
- Activity (bubble timeline) → click ⚡ AI Insight to summarize per team or person; supports Week / Month view.
- Contributors → click ⚡ AI Insight for an individual’s highlights, efficiency signals, and coaching prompts.
- Allocations / Distributions / Issues / Time in Status → where available, the same ⚡ AI Insight button opens a page-aware analysis.

*Tip: The analysis always respects the page’s date picker, team/user filter, and repository filter.

How it works

1. You set the view
Choose the time window (e.g., last 14 days), team/user, and repositories. PR pages support Average / Median (via the Settings icon) to reduce outlier skew.

2. Click "AI Insight"
We compute deltas vs. the prior comparable window and scan the selected data for patterns: review behavior, PR sizes, merge cadence, comment density, rework hotspots, weekend activity, etc. (A sketch of this delta calculation appears at the end of this article.)

3. We generate the panel
The right rail shows Date Range, TL;DR, Activity/PR Trends, Notable Trends, and Actionable Insights.

4. You drill down
- From the PR table: sort by Longest review time, Most discussions, Most check failures, or Reworks to validate the suggestions.
- From Activity: use the legend (Commit, PR Review, PR Open, Merge, Comment) and Week/Month toggles to see who did what, when.
- From Contributors: use cards like Average PR Cycle Time, Coding Days, Time to First Review, and Lines Added/Deleted to corroborate the narrative.

What AI Insights covers (by page)

PR Cycle Time

AI Insight identifies which parts of your PR process are speeding up or slowing down — and why.

- Tracks Time to Open, In Review, and Time to Merge separately.
- Surfaces likely causes for slowdowns, such as large PR sizes, review backlogs, or CI failures.
- Flags extreme outliers and suggests switching to Median to get a fairer average.
- Calls out Reworks (code churn) and Check Failure Rate, both of which can signal quality issues or unstable branches.

Activity (bubble timeline)

Gives a visual summary of the week’s development activity across commits, merges, reviews, and comments.

- Highlights review balance (who’s reviewing most vs. least).
- Detects weekend or after-hours work patterns.
- Identifies collaboration trends — for instance, if one reviewer consistently handles most PRs.
- Can be viewed weekly or monthly for trend shifts.

Contributors

Provides an AI-generated coaching summary for each engineer:

- Highlights weekly or monthly activity trends.
- Evaluates Coding Days, Average PR Size, Time to First Review, and Lines Added/Deleted.
- Suggests actions like “Establish a code review rotation” or “Encourage smaller PRs to improve velocity.”
- Useful for 1:1s, performance reviews, and spotting early blockers.

Allocations / Distributions

When your team uses labels like feature, tech-debt, security, or bug in GitHub or Jira, AI Insight breaks down your engineering effort by category. This helps leaders understand:

- How engineering time is distributed across different work types.
- Whether your team’s focus aligns with current roadmap goals.
- How consistent label usage affects visibility of priorities.

Why teams use it

AI Insights is most valuable when:

- You want executive-level summaries without manually compiling data.
- You need team-specific coaching points before sprint retros.
- You’re tracking process improvement over time — e.g., faster reviews or fewer reworks.
- You’re running multi-repo organizations and need quick context switching across teams.

It saves hours of manual data analysis each week, allowing leads to focus on making decisions rather than pulling numbers.

Pro tips

- Use Median for fairness — outliers can make averages look worse than they are. Median smooths out one-off spikes.
- Review the outliers — sort PR tables to find which ones drove the metrics up or down.
- High Rework doesn’t always mean bad code. It can indicate a major refactor or architectural cleanup — use it as a discussion point, not a penalty.
- Standardize labels. Consistent GitHub/Jira labeling improves your Distributions view and makes AI summaries more meaningful.
- Show it live in reviews. Use AI Insights during sprint demos or weekly leadership syncs — the TL;DR is built for quick storytelling.

Notes & behavior

- Read-only: AI Insight doesn’t modify or store data beyond your dashboard context.
- Auto-refresh: Summaries update as your repositories sync.
- Supports comparisons: Always compares your selected window to the previous period.
- Privacy-safe: Aggregates review, commit, and AI adoption data without identifying individuals in sensitive metrics.

Troubleshooting

- If the panel shows generic text, make sure your selected period contains activity.
- Switch to a longer date range (e.g., 30 days) if the dataset is too small.
- For missing data, verify your GitHub and Jira integrations are synced and active.
- Distributions showing “Unlabeled”? Add consistent issue/PR labels.
- For unexpected deltas, toggle from Average → Median in Settings.

Cross-links

- Getting Started with Optimal AI
- Engineering Productivity Insights
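As referenced in step 2 of How it works, the heart of the comparison is a percentage delta of each metric against the prior window of equal length. A minimal sketch of that calculation, with made-up numbers:

```python
# Sketch of the "delta vs. prior comparable window" calculation: percentage
# change of each metric against the preceding window of equal length.
# Numbers below are illustrative.
current_window = {"time_to_merge_h": 18.5, "prs_merged": 42, "reworks": 6}
prior_window   = {"time_to_merge_h": 24.0, "prs_merged": 37, "reworks": 9}

for metric, now in current_window.items():
    before = prior_window[metric]
    change = (now - before) / before if before else float("nan")
    print(f"{metric:16s} {before} -> {now}  ({change:+.0%})")
```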

Last updated on Oct 30, 2025

Contributors | A detailed view of every engineer’s contribution

Overview

The Contributors view is your team’s individual performance lens inside Optimal AI Insights. It turns raw GitHub activity into a clear, contextualized report about how each engineer is contributing to your team’s velocity, collaboration, and delivery quality.

Each contributor’s page tells a complete story: how often they code, how fast their PRs move through review, how they collaborate across teams, and how their efficiency trends over time. It’s designed for engineering managers, tech leads, and team leads who want to understand not just “how much” someone is working, but how their workflow affects the team’s rhythm and throughput.

In practice, this page functions as an auto-generated weekly report for each contributor, powered by live GitHub data and Optimal AI’s AI Insight engine.

What You’ll See

1. Contributor Summary

At the top, you’ll see an at-a-glance view of the engineer (name, role, and associated location) along with a date range selector. Changing the range recalculates every metric, so you can zoom into a sprint or expand to a quarter for trend analysis.

The dropdown makes it easy to switch between individuals, enabling quick comparisons or 1:1 preparation across multiple engineers on the same team.

2. Highlights

The Highlights panel summarizes what changed during the selected period. It tracks short-term improvements and areas that might need attention, for example:

- Faster turnaround on PRs (reduced cycle time)
- Increase or dip in coding days
- Efficiency changes over time
- Review speed or collaboration shifts

Each highlight is automatically contextualized: it compares the current week or month to the previous period and frames the result in terms of impact. This gives managers a high-signal summary that can be used directly in weekly check-ins or retrospectives.

3. Efficiency Score

The Efficiency Score is Optimal AI’s composite productivity indicator. It reflects how balanced an engineer’s activity is across the core developer workflow: coding, opening PRs, and reviewing others’ work. Rather than focusing on volume alone, it looks at the mix of contribution types to highlight healthy, well-rounded patterns.

For example, an engineer who both pushes code and consistently reviews others’ PRs will naturally have a stronger efficiency balance than someone focused on one dimension only. This score is normalized within each team, so contributors can see how their current patterns compare to the broader group trend.

4. Team Performance

The Team Performance panel benchmarks each contributor against their team or functional group. It shows ranking or percentile indicators (e.g., “Top Engineer in Engineering Core based on Cycle Time”), giving quick visibility into top performers or areas needing support.

This benchmark helps leaders identify coaching opportunities and distribute review or merge load more fairly, since it shows where each person’s work cadence sits relative to peers with similar roles or repos.

5. AI Insight Integration

Every contributor page includes a ⚡ AI Insight section that can automatically generate an analysis powered by Optimal AI’s Insight engine. It goes beyond metrics to deliver:

- TL;DR summaries that explain what changed and why.
- Actionable insights such as “Encourage diverse collaboration” or “Monitor review load.”
- Trend explanations connecting activity data (like smaller PRs or faster review times) to likely workflow improvements.
- Summary analyses that quantify shifts in coding days, PR size, and review timing.

Instead of manually analyzing trends, you get AI-curated takeaways that help you lead more effective retros, reviews, and 1:1s. The AI Insight layer turns the Contributor page into a live performance narrative, blending quantitative data with qualitative interpretation.

Metrics & Trends

Every metric on this page comes directly from GitHub activity, normalized by the date range you select. They’re designed to give you visibility into not only what’s happening, but how it’s evolving over time.

Core Metrics

- Average PR Cycle Time — how long it takes from opening to merging a pull request. Useful for spotting bottlenecks or reviewing efficiency gains.
- Coding Days — number of unique days with commits in the selected period. Indicates coding consistency and engagement across weeks.
- Total Lines Added / Deleted — code churn measure showing how much work has been written, refactored, or cleaned up.
- Time to First Review — average time before a contributor’s PR gets its first review. A shorter time usually signals good review responsiveness.
- PRs Reviewed / PRs Opened / PRs Assigned — counts of code review engagement and ownership spread.

(A small sketch of how a metric like Coding Days can be derived appears after the Why It Matters section below.)

Trend Visualization

Below the metrics, a time-series chart visualizes changes across days or weeks. You can switch the metric being visualized (average PR size, PR cycle time, commits, or time to first review) to see where patterns emerge. This view helps identify:

- Days with high merge or commit activity
- Times when review load spikes
- Patterns of improvement after process changes or team reorganizations

Activity Timeline

The Activity chart visualizes the contributor’s daily work in bubble form, with each event (commit, PR, review, merge, or comment) represented by a color-coded circle. This gives you a literal picture of when and how consistently a contributor is active throughout the week or month. It’s especially useful for spotting review backlogs or uneven work distribution during a sprint.

Activity types are color-coded to distinguish between coding, reviewing, and commenting actions, giving you both temporal and contextual clarity.

Commit Contribution & Review Collaboration

These two sections surface deeper patterns in how engineers interact across the organization.

- Commit Contribution maps monthly commit volume, highlighting consistency or variability over time. It’s a quick way to see who’s actively coding during certain release cycles.
- Review Collaboration lists who reviews each contributor’s PRs most often. This reveals team dependencies, mentorship relationships, and potential review bottlenecks. A healthy collaboration map shows diverse review activity across several teammates, not just one or two reviewers.

Together, these charts transform raw GitHub relationships into actionable team insight.

Why It Matters

Engineering performance isn’t about counting commits — it’s about context. The Contributors page provides that context in real time. It connects quantitative metrics (like PR cycle time) to qualitative patterns (like collaboration and review balance).

For managers, it means better 1:1s, clearer performance discussions, and faster recognition of improvement. For engineers, it means visibility into their own workflow — what’s improving, where bottlenecks are forming, and how their work stacks up within the team.

Optimal AI’s Contributors view bridges the gap between individual productivity and team-level velocity, using AI-generated insights to keep both aligned.
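As referenced under Core Metrics, here is a minimal sketch of how a metric like Coding Days can be derived from raw commit timestamps. The data is made up, and this is not necessarily the product's exact formula.

```python
# Sketch of the Coding Days metric: the number of unique calendar days with
# at least one commit in the selected period. Timestamps are made up.
from datetime import datetime

commit_times = [
    "2025-10-20T09:15:00", "2025-10-20T17:40:00",  # two commits, one day
    "2025-10-21T11:05:00",
    "2025-10-23T15:30:00",
]

coding_days = {datetime.fromisoformat(t).date() for t in commit_times}
print(f"Coding Days: {len(coding_days)}")  # -> 3
```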
Pro Tips

- Use this page in weekly 1:1s or retros to frame discussions around measurable progress.
- Pair it with PR Cycle Time to understand how individual performance affects overall velocity.
- Watch for review concentration — if one contributor reviews most PRs, consider load-balancing.
- Encourage contributors to reflect on their AI Insight summaries as part of self-assessment.
- Combine this view with Allocations or Distributions to connect effort with engineering investment types (e.g., feature vs. tech debt).

Troubleshooting

- No data shown: Confirm GitHub sync is active and the contributor has commits or PRs within the selected time range.
- Efficiency score missing: Ensure the contributor has both code and review activity; the score needs both inputs.
- Team comparison unavailable: Check if the contributor is assigned to a defined team in GitHub or Optimal AI.
- Activity chart flat: Expand to a larger time window or confirm event filters (commit/review/comment) are enabled.

In Summary

The Contributors page is your team’s heartbeat at the individual level. It combines precision metrics, AI interpretation, and historical context so you can see not just what engineers did, but how their actions shape team outcomes.

Last updated on Nov 03, 2025