AI Insights provides instant summaries of trends, bottlenecks, and outliers across your Insights dashboards. It’s available via the purple ‘AI Insight’ button on most views.
Each summary includes real metrics, percentage changes, and practical next steps — helping engineering leads, managers, and CTOs see the story behind the numbers in seconds.
Overview
AI Insights analyzes the data on any Insights dashboard you’re viewing — whether it’s PR Cycle Time, Activity, Contributors, or Distributions — and translates it into natural-language insights.
It automatically applies all the filters you’ve already set (date range, repositories, teams, and users), so every summary is contextual and accurate to what you’re looking at.
The panel provides five key sections:
- TL;DR — A short, readable summary of overall trends and performance changes since the last time window.
- Trends — Highlights percentage increases or decreases across your main metrics.
- Notable patterns — Points out specific behaviors such as who’s reviewing most often, when work tends to happen (weekends, late nights), or whether large PRs are becoming more common.
- Actionable insights — Plain-English recommendations on what to investigate or improve next (e.g., “Review backlog is rising — redistribute review load across the team.”).
- Context — Shows your selected date range, repositories, and teams, with metric definitions so you always know what’s being measured.
The goal is to help you:
- Quickly brief executives or stakeholders with a one-paragraph summary
- Identify team bottlenecks or process issues without digging through multiple graphs
- Spot outliers early (e.g., unusually large PRs or skipped reviews)
- Give your tech leads actionable context for their next sprint review
Where to find it
- PR Cycle Time → Click ⚡ AI Insight (top-right) to open the right-rail analysis.
- Activity (bubble timeline) → Click ⚡ AI Insight to summarize per team or person; supports the Week / Month view.
- Contributors → Click ⚡ AI Insight for an individual’s highlights, efficiency signals, and coaching prompts.
- Allocations / Distributions / Issues / Time in Status → Where available, the same ⚡ AI Insight button opens a page-aware analysis.
Tip: The analysis always respects the page’s date picker, team/user filter, and repository filter.
How it works
1. You set the view. Choose the time window (e.g., last 14 days), team/user, and repositories. PR pages support Average / Median (via the Settings icon) to reduce outlier skew.
2. Click "AI Insight". We compute deltas vs. the prior comparable window and scan the selected data for patterns: review behavior, PR sizes, merge cadence, comment density, rework hotspots, weekend activity, and more (see the sketch after these steps).
3. We generate the panel. The right-rail shows Date Range, TL;DR, Activity/PR Trends, Notable Trends, and Actionable Insights.
4. You drill down.
   - From the PR table, sort by Longest review time, Most discussions, Most check failures, or Reworks to validate the suggestions.
   - From Activity, use the legend (Commit, PR Review, PR Open, Merge, Comment) and the Week/Month toggle to see who did what, when.
   - From Contributors, use cards like Average PR Cycle Time, Coding Days, Time to First Review, and Lines Added/Deleted to corroborate the narrative.
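For the curious, here is a minimal sketch of the window-over-window comparison described in step 2, assuming a simple percentage-delta calculation. The metric names and numbers are invented for illustration and are not the product’s actual internals.

```python
# Hypothetical sketch: compare the selected window against the prior
# comparable window. Metric names and values are illustrative only.
def pct_delta(current: float, prior: float) -> float | None:
    """Percentage change vs. the prior window; None if there is no baseline."""
    if prior == 0:
        return None
    return (current - prior) / prior * 100

# Example: a 14-day window vs. the 14 days immediately before it.
current_window = {"prs_merged": 42, "avg_review_hours": 18.5}
prior_window = {"prs_merged": 35, "avg_review_hours": 24.0}

for metric, value in current_window.items():
    delta = pct_delta(value, prior_window[metric])
    print(f"{metric}: {value} ({delta:+.1f}% vs. prior window)")
```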
What AI Insights covers (by page)
PR Cycle Time
AI Insight identifies which parts of your PR process are speeding up or slowing down — and why.
- Tracks Time to Open, In Review, and Time to Merge separately.
- Surfaces likely causes for slowdowns, such as large PR sizes, review backlogs, or CI failures.
- Flags extreme outliers and suggests switching to Median to get a fairer average (see the example after this list).
- Calls out Reworks (code churn) and Check Failure Rate, both of which can signal quality issues or unstable branches.
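A quick illustration of why Median can be the fairer choice: a single long-running PR drags the average far above the typical case. The review times below are made up.

```python
# One outlier PR skews the mean; the median stays close to the typical PR.
from statistics import mean, median

review_hours = [2, 3, 4, 5, 120]  # one stale PR sat in review for five days

print(f"Average: {mean(review_hours):.1f} h")  # 26.8 h, dominated by the outlier
print(f"Median:  {median(review_hours):.1f} h")  # 4.0 h, the typical experience
```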

Activity (bubble timeline)
Gives a visual summary of the week’s development activity across commits, merges, reviews, and comments.
- Highlights review balance (who’s reviewing most vs. least).
- Detects weekend or after-hours work patterns (illustrated in the sketch below).
- Identifies collaboration trends — for instance, if one reviewer consistently handles most PRs.
- Can be viewed weekly or monthly to spot trend shifts.
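As a rough sketch of how weekend or after-hours detection might work: the product derives this from commit and review timestamps, but its exact thresholds aren’t documented, so the 8 pm–6 am cutoff and the events below are assumptions.

```python
# Hypothetical after-hours detection over invented activity timestamps.
from datetime import datetime

events = [
    datetime(2024, 6, 7, 15, 30),  # Friday afternoon
    datetime(2024, 6, 8, 22, 10),  # Saturday night
    datetime(2024, 6, 9, 11, 5),   # Sunday morning
]

def is_weekend_or_late(ts: datetime) -> bool:
    # weekday() >= 5 means Saturday/Sunday; hours outside 06:00-20:00 count as late.
    return ts.weekday() >= 5 or ts.hour >= 20 or ts.hour < 6

flagged = [ts for ts in events if is_weekend_or_late(ts)]
print(f"{len(flagged) / len(events):.0%} of activity was on weekends or after hours")
```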

Contributors
Provides an AI-generated coaching summary for each engineer:
- Highlights weekly or monthly activity trends.
- Evaluates Coding Days, Average PR Size, Time to First Review, and Lines Added/Deleted (see the sketch below for one reading of Coding Days).
- Suggests actions like “Establish a code review rotation” or “Encourage smaller PRs to improve velocity.”
- Useful for 1:1s, performance reviews, and spotting early blockers.
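One plausible reading of the Coding Days metric, assuming it counts distinct calendar days with at least one commit; the product’s exact definition may differ, and the timestamps are illustrative.

```python
# Assumed definition: a "coding day" is any calendar day with >= 1 commit.
from datetime import datetime

commit_times = [
    datetime(2024, 6, 3, 9, 0),
    datetime(2024, 6, 3, 16, 45),  # same day as above, counted once
    datetime(2024, 6, 4, 11, 20),
    datetime(2024, 6, 6, 14, 5),
]

coding_days = {ts.date() for ts in commit_times}
print(f"Coding days this week: {len(coding_days)}")  # 3 distinct days
```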

Allocations / Distributions
When your team uses labels like feature, tech-debt, security, or bug in GitHub or Jira, AI Insight breaks down your engineering effort by category.
This helps leaders understand:
- How engineering time is distributed across different work types (the sketch below shows the idea).
- Whether your team’s focus aligns with current roadmap goals.
- How consistent label usage affects the visibility of priorities.
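Conceptually, the breakdown is a simple aggregation over labels. This sketch assumes one label per PR and uses invented data; unlabeled items fall into the same “Unlabeled” bucket mentioned under Troubleshooting.

```python
# Hypothetical label breakdown; labels below are examples, not real data.
from collections import Counter

pr_labels = ["feature", "feature", "bug", "tech-debt", "feature", "security", None]

counts = Counter(label or "Unlabeled" for label in pr_labels)
total = sum(counts.values())

for label, n in counts.most_common():
    print(f"{label:<10} {n / total:.0%}")
```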
Why teams use it
AI Insights is most valuable when:
- You want executive-level summaries without manually compiling data.
- You need team-specific coaching points before sprint retros.
- You’re tracking process improvement over time — e.g., faster reviews or fewer reworks.
- You’re running a multi-repo organization and need quick context switching across teams.
It saves hours of manual data analysis each week, allowing leads to focus on making decisions rather than pulling numbers.
Pro tips
- Use Median for fairness — Outliers can make averages look worse than they are; Median smooths out one-off spikes.
- Review the outliers — Sort the PR tables to find which PRs drove the metrics up or down.
- High Rework doesn’t always mean bad code — It can indicate a major refactor or architectural cleanup; use it as a discussion point, not a penalty.
- Standardize labels — Consistent GitHub/Jira labeling improves your Distributions view and makes AI summaries more meaningful.
- Show it live in reviews — Use AI Insights during sprint demos or weekly leadership syncs; the TL;DR is built for quick storytelling.
Notes & behavior
- Read-only: AI Insight doesn’t modify or store data beyond your dashboard context.
- Auto-refresh: Summaries update as your repositories sync.
- Supports comparisons: Always compares your selected window to the previous period.
- Privacy-safe: Aggregates review, commit, and AI adoption data without identifying individuals in sensitive metrics.
Troubleshooting
- If the panel shows generic text, make sure your selected period contains activity.
- Switch to a longer date range (e.g., 30 days) if the dataset is too small.
- For missing data, verify your GitHub and Jira integrations are synced and active.
- Distributions showing “Unlabeled”? Add consistent issue/PR labels.
- For unexpected deltas, toggle from Average → Median in Settings.
Cross-links
- Getting Started with Optimal AI