
Optibot Config

Optibot's configuration options for controlling context depth, automations, and agentic workflows.
By Optimal Engineering
• 7 articles

Getting started with the Optibot config file

The .optibot configuration file is a JSON file that controls how Optibot behaves in your repository. By customizing this file, you can:

- Enable or disable automatic code reviews for pull/merge requests
- Control which review categories are included or excluded
- Customize summary generation to match your team's needs
- Configure dependency bundling recommendations
- Enable automatic CI/CD pipeline fixes
- Reference custom coding guidelines for context-aware reviews
- Exclude specific users or labels from Optibot's analysis

This guide walks you through each configuration option with detailed explanations and practical examples.

Optibot Configuration Options

1. Summaries - Settings to control the depth, workflow triggers, and exclusions for Optibot's summaries.
2. Reviews - Settings to control code generation, workflow triggers, focus areas, and automations for Optibot's agentic review capabilities.
3. Dependency Bundler - Settings to control Optibot's ability to intelligently manage your incoming Dependabot dependencies.
4. Automated CI Fixer [Beta] - Settings to control Optibot's ability to fix issues in your GitHub Actions check suite.

Creating your .optibot file

Create a file named .optibot (note the dot prefix) in the root directory of your repository. Each repository can have its own Optibot configuration file. You can create the file directly in VSCode, Cursor, JetBrains IDEs, or Windsurf. Alternatively, you can create it from the terminal:

```bash
# In your repository root
touch .optibot
```

You can also view this configuration inside the product at agents.getoptimal.ai under Dashboard → Documentation → Advanced Configuration.

Adding a basic configuration for Optibot

Most Optibot users opt in to a basic configuration that generates summaries and reviews automatically. Please note that .optibot configuration files must be committed and maintained on your main branch. Here's the .optibot file for this setup:

```json
{
  "reviews": {
    "auto": true
  },
  "summary": {
    "auto": true,
    "level": "basic"
  }
}
```

Once you have created the file, commit it and push it to your main branch:

```bash
git add .optibot
git commit -m "Add Optibot configuration"
git push origin main
```

Once the file is committed to your main branch, the configuration takes effect immediately for new pull/merge requests. Below is a more extensive example of a .optibot file:

```json
{
  "reviews": {
    "auto": true,
    "autoApprove": true,
    "codeSuggestions": true,
    "codeSuggestionsSkipFiles": ["*.md", "docs/*", "vendor/*"],
    "excludedLabels": ["wontfix", "low-priority"],
    "excludedUsers": ["bot-account"]
  },
  "summary": {
    "auto": true,
    "level": "detailed",
    "excludedLabels": ["wontfix", "low-priority"],
    "excludedUsers": ["bot-account"]
  },
  "dependencyBundler": {
    "enabled": true
  },
  "enableCIFixer": true,
  "guidelinesUrl": "docs/guidelines/README.md"
}
```
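Before committing, it can be worth confirming that the file parses as valid JSON and only uses keys you recognize. Below is a minimal local sketch in Python; it is not an official Optibot tool, and the list of top-level keys is simply taken from the options documented on this page.

```python
#!/usr/bin/env python3
"""Local sanity check for a .optibot file (not an official Optibot tool)."""
import json
import sys

# Top-level keys documented on this page; adjust if your setup differs.
KNOWN_KEYS = {"reviews", "summary", "dependencyBundler", "enableCIFixer", "guidelinesUrl"}


def main(path: str = ".optibot") -> int:
    try:
        with open(path, encoding="utf-8") as f:
            config = json.load(f)
    except FileNotFoundError:
        print(f"{path} not found in the current directory")
        return 1
    except json.JSONDecodeError as err:
        print(f"{path} is not valid JSON: {err}")
        return 1

    unknown = set(config) - KNOWN_KEYS
    if unknown:
        print(f"Warning: unrecognised top-level keys: {sorted(unknown)}")
    else:
        print(f"{path} parsed successfully with keys: {sorted(config)}")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Run it from your repository root before pushing the configuration to your main branch.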

Last updated on Dec 08, 2025

Optibot configuration for Reviews

Configuration Parameters for Optibot

In the .optibot configuration file, the reviews object controls Optibot's automated code review functionality.

Complete Structure

```json
{
  "reviews": {
    "auto": false,
    "autoOnPush": false,
    "autoOnDraft": false,
    "exclude": [],
    "include": [],
    "autoApprove": false,
    "codeSuggestions": true,
    "codeSuggestionsSkipFiles": [],
    "excludedLabels": [],
    "excludedUsers": []
  }
}
```

Auto Reviews on PR Creation (auto)

Type: boolean
Default: false
Description: Controls whether Optibot automatically reviews pull/merge requests when they are created or updated.

When to use and impact:
- Set to true if you want immediate feedback on all pull requests when they are first opened. Optibot will generate a thoughtful code review of your changes.
- Set to false if you prefer to trigger reviews manually, which you can do in plain text in GitHub and GitLab comments by typing "Optibot can you review this".

Example:

```json
{
  "reviews": {
    "auto": true
  }
}
```

Automatic Reviews on New Commits (autoOnPush)

Type: boolean
Default: false
Description: You can configure Optibot to automatically re-review a pull or merge request every time you push new commits. This gives you continuous feedback as you update the PR, without needing to manually ask Optibot to review again. To enable this, add "autoOnPush": true inside the reviews section of your .optibot configuration file.

When to use and impact:
- Ideal when you push multiple follow-up commits to address feedback.
- Ensures each update is automatically re-reviewed.
- Removes the need to manually trigger additional reviews.
- Helps teams maintain high-quality, iterative review workflows.

Example:

```json
{
  "reviews": {
    "auto": false,
    "autoOnPush": true,
    "autoApprove": false
  }
}
```

Automated PR Approvals (autoApprove)

Type: boolean
Default: false
Description: Automatically approve pull/merge requests that pass Optibot's stringent review criteria. This setting gives Optibot the ability to approve a pull request that it deems mergeable.

When to use and impact:
- For trusted contributors or automated dependency updates.
- When you have strict CI/CD checks and trust Optibot's judgment.
- When you regularly push atomic changes (small to medium PRs/MRs) to branches.
- When you require multiple reviews on a PR/MR to merge; an Optibot approval can act as one of the core approvers on your team.

Example:

```json
{
  "reviews": {
    "auto": true,
    "autoApprove": true
  }
}
```

⚠️ Important Considerations:
- Use carefully: this gives Optibot approval authority. Be especially cautious if your CI/CD pipeline has little test coverage or you merge directly into main.
- Best combined with excludedLabels or excludedUsers for controlled auto-approval.
- Consider your team's review policies before enabling.

Our Recommended Setup:

```json
{
  "reviews": {
    "auto": true,
    "autoApprove": true,
    "excludedLabels": ["needs-review", "breaking-change"],
    "excludedUsers": ["junior-devs"]
  }
}
```

Optibot Code Suggestions (codeSuggestions)

Type: boolean
Default: true
Description: Lets Optibot make inline code suggestions and improvements. These suggestions show up either as committable changes or as code change recommendations. They are turned on by default.

When to use and impact:
- Set to true for actionable code improvement suggestions.
- Set to false if you only want high-level feedback without specific code changes. Disabling this is only useful if you either a) have an internal mandate against AI-generated code suggestions for your repository, or b) are training engineers not to depend on AI-generated code fixes during reviews.

Example:

```json
{
  "reviews": {
    "auto": true,
    "codeSuggestions": true
  }
}
```

Skip Optibot Reviews on Specific Files (codeSuggestionsSkipFiles)

Type: array of strings (glob patterns)
Default: []
Description: Tells Optibot to skip reviewing certain files based on glob patterns. When set, Optibot will not generate code suggestions on those files and will not read them during a review.

When to use:
- Exclude auto-generated files, test files, or documentation.
- Focus suggestions on production code only and ignore compiled code in reviews.
- Skip files that change frequently or follow different conventions.
- Skip files with specific legal requirements against AI-generated code. Some regulated sectors require that specific files be edited and maintained only by humans.

Example 1: Skip test files and documentation

```json
{
  "reviews": {
    "auto": true,
    "codeSuggestions": true,
    "codeSuggestionsSkipFiles": ["*.test.js", "*.test.ts", "*.md", "docs/**/*"]
  }
}
```

Example 2: Skip generated and configuration files

```json
{
  "reviews": {
    "auto": true,
    "codeSuggestions": true,
    "codeSuggestionsSkipFiles": [
      "*.generated.*",
      "dist/**/*",
      "build/**/*",
      "*.config.js",
      "package-lock.json"
    ]
  }
}
```

Example 3: Skip vendor and third-party code

```json
{
  "reviews": {
    "auto": true,
    "codeSuggestions": true,
    "codeSuggestionsSkipFiles": [
      "vendor/**/*",
      "node_modules/**/*",
      "third_party/**/*"
    ]
  }
}
```

Glob Pattern Guide:
- *.ext - All files with the extension in the current directory
- **/*.ext - All files with the extension in any directory
- dir/**/* - All files under a specific directory
- *.test.* - Files with .test. in the name

Exclude Optibot Reviews on Specific Pull Request Labels (excludedLabels)

Type: array of strings
Default: []
Description: Stops Optibot from reviewing code changes in pull/merge requests that carry the labels you specify. Labels are case-sensitive and must match the label name exactly.

When to use:
- Skip reviews for work-in-progress changes.
- Skip reviews on auto-generated PRs, such as those created by Snyk, Dependabot, SonarQube, etc.
- Exclude draft or experimental branches.
- Bypass reviews for hotfixes or emergency changes.

Example 1: Skip WIP and draft PRs

```json
{
  "reviews": {
    "auto": true,
    "excludedLabels": ["WIP", "work-in-progress", "draft"]
  }
}
```

Example 2: Emergency and hotfix bypass

```json
{
  "reviews": {
    "auto": true,
    "excludedLabels": ["hotfix", "emergency", "critical-fix"]
  }
}
```

Example 3: Exclude documentation-only changes

```json
{
  "reviews": {
    "auto": true,
    "excludedLabels": ["documentation", "docs-only", "readme-update"]
  }
}
```

Platform-specific label names:
- GitHub: Use exact label names (case-sensitive)
- GitLab: Use exact label names (case-sensitive)

Exclude Optibot Reviews from Specific Users (excludedUsers)

Type: array of strings
Default: []
Description: Prevents Optibot from reviewing pull/merge requests opened by the users you specify. When one of these users creates a pull or merge request, Optibot will not review their code even when reviews are set to auto.

When to use:
- Exclude bot accounts (Dependabot, Renovate, etc.).
- Skip reviews for trusted senior developers.
- Bypass automated deployment accounts.

Example 1: Exclude dependency bots

```json
{
  "reviews": {
    "auto": true,
    "excludedUsers": ["dependabot", "renovate", "renovate[bot]"]
  }
}
```

Example 2: Exclude CI/CD and release bots

```json
{
  "reviews": {
    "auto": true,
    "excludedUsers": [
      "github-actions[bot]",
      "gitlab-ci",
      "release-bot",
      "deployment-bot"
    ]
  }
}
```

Example 3: Exclude specific team members

```json
{
  "reviews": {
    "auto": true,
    "excludedUsers": ["senior-architect", "tech-lead"]
  }
}
```

Platform-specific usernames:
- GitHub: Use the exact username (e.g., dependabot[bot] for GitHub Apps)
- GitLab: Use the exact username

Review Draft Pull Requests (autoOnDraft)

Type: boolean
Default: false
Description: Controls whether Optibot reviews pull/merge requests that are opened as drafts. By default, Optibot does not review draft pull requests when they are opened; reviews only run once you set the PR's status to "Ready for review" on GitHub. GitLab works the same way: merge requests marked as draft follow the same convention.

When to use:
- When you want Optibot to continually review your work-in-progress pull or merge requests.

Example:

```json
{
  "reviews": {
    "auto": true,
    "autoOnDraft": true,
    "autoApprove": true,
    "codeSuggestions": true
  }
}
```
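If you want a rough preview of which files a codeSuggestionsSkipFiles entry would cover, the sketch below uses Python's standard-library fnmatch. It is not Optibot's matcher: fnmatch lets * cross directory separators, so treat the output as an approximation of the glob semantics described in the guide above. The file paths are hypothetical and only the patterns are taken from this article's examples.

```python
#!/usr/bin/env python3
"""Rough preview of which files a codeSuggestionsSkipFiles pattern might cover.

Not Optibot's matcher: this uses Python's fnmatch, whose `*` also crosses `/`
boundaries, so results are only an approximation of real glob behaviour.
"""
from fnmatch import fnmatch

# Hypothetical repository paths used purely for illustration.
paths = [
    "src/app.ts",
    "src/app.test.ts",
    "src/utils/helpers.ts",
    "CHANGELOG.md",
    "package-lock.json",
]

# Patterns taken from the examples in this article.
skip_patterns = ["*.test.ts", "*.md", "package-lock.json"]

for path in paths:
    skipped = any(fnmatch(path, pattern) for pattern in skip_patterns)
    print(f"{'skip  ' if skipped else 'review'}  {path}")
```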

Last updated on Dec 08, 2025

Optibot configuration for Summaries

Summary Configuration

The Optibot summary object controls automated pull/merge request summary generation. Optibot summaries can be configured to be short and sweet or highly technical. This part of the documentation covers all of the controls for summarization.

Complete Structure

```json
{
  "summary": {
    "auto": true,
    "level": "basic",
    "autoOnDraft": false,
    "excludedLabels": [],
    "excludedUsers": []
  }
}
```

Auto Pull Request and Merge Request Summaries (auto)

Type: boolean
Default: true
Description: Enables or disables automatic summary generation for pull/merge requests. By default, PR/MR summaries are generated when a new PR/MR is opened. Summaries cover the functional aspects of the code changes in detail and include a collapsible section describing the changes file by file.

When to use and impact:
- Set to true for automatic summaries on all PRs. This helps reviewers understand code changes at a glance and can sometimes replace a hand-written PR/MR description entirely.
- Set to false if summaries should be requested manually. We don't recommend this, but summaries can be turned off if you have very specific PR/MR description requirements.

Example:

```json
{
  "summary": {
    "auto": true
  }
}
```

Summary Detail Levels (level)

Type: string
Default: "basic"
Allowed values: "short", "basic", "detailed"
Description: Controls the level of detail Optibot includes in its summaries. The default is basic.

Summary levels explained:
- short - A concise, 2-3 sentence natural-language overview of the pull request. Ideal for quick updates that don't interrupt workflow, or when reviewers only need a high-level summary.
- basic - A brief overview of the changes, key modifications, and an impact summary. Great if you have small PRs that require quick reviews or you habitually create atomic code changes.
- detailed - A comprehensive analysis with a file-by-file breakdown, architectural impact, and testing recommendations. Great for large PRs, complex features, and critical changes.

Example 1: Short summaries for fast updates

```json
{
  "summary": {
    "auto": true,
    "level": "short"
  }
}
```

Example 2: Basic summaries for a fast-paced team

```json
{
  "summary": {
    "auto": true,
    "level": "basic"
  }
}
```

Example 3: Detailed summaries for a complex codebase

```json
{
  "summary": {
    "auto": true,
    "level": "detailed"
  }
}
```

Sample Output Comparison:

Short level:
Summary: Updates login flow to streamline authentication and add basic validation. Cleans up unused logic and prepares the service for upcoming enhancements.

Basic level:
Summary: This PR adds user authentication middleware and updates the login endpoint.
Key Changes: 2 new files, 3 modified files, ~150 lines changed.

Detailed level:
Summary: This PR implements JWT-based authentication middleware with refresh token support.
Files Changed:
- src/middleware/auth.ts (new): JWT verification middleware with role-based access control
- src/routes/auth.ts (modified): Added refresh token endpoint
- src/utils/token.ts (new): Token generation and validation utilities
Impact Analysis:
- Security: Implements industry-standard JWT authentication
- Performance: Adds ~10ms latency per authenticated request
- Testing: Requires new integration tests for auth flows
Dependencies: Added [email protected] and [email protected]

Exclude Summaries by Label (excludedLabels)

Type: array of strings
Default: []
Description: Stops Optibot from generating summaries on pull/merge requests that carry the labels you specify.

When to use and impact:
- Skip summaries for trivial changes.
- Exclude documentation-only PRs.
- Bypass summaries for automated updates.

Example 1: Skip trivial changes

```json
{
  "summary": {
    "auto": true,
    "level": "basic",
    "excludedLabels": ["trivial", "typo-fix", "formatting"]
  }
}
```

Example 2: Exclude docs and dependencies

```json
{
  "summary": {
    "auto": true,
    "level": "detailed",
    "excludedLabels": ["documentation", "dependencies"]
  }
}
```

Exclude Summaries by User (excludedUsers)

Type: array of strings
Default: []
Description: Stops Optibot from generating summaries on pull/merge requests opened by the users you specify. This is especially helpful when you have several bot accounts generating code changes that are trivial in nature.

When to use and impact:
- Exclude bot accounts that create automated PRs.
- Skip summaries for specific team members if needed.

Example:

```json
{
  "summary": {
    "auto": true,
    "level": "basic",
    "excludedUsers": ["dependabot", "renovate[bot]", "l10n-bot"]
  }
}
```

Summaries on Draft Pull Requests (autoOnDraft)

Type: boolean
Default: true
Description: Controls whether Optibot leaves a summary on pull/merge requests that are opened as drafts. By default, Optibot automatically generates a summary of your code changes on draft pull requests when they are opened. When this setting is changed to false, Optibot waits until you set the PR's status to "Ready for review" on GitHub before creating a summary. GitLab works the same way: merge requests marked as draft follow the same convention.

When to use:
- Set to false when your draft PRs are truly work in progress and you expect to push more changes while they remain drafts.

Example:

```json
{
  "summary": {
    "auto": true,
    "autoOnDraft": false,
    "level": "short"
  }
}
```
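Because reviews and summary each have their own excludedUsers and excludedLabels lists, it is easy for the two to drift apart (for example, a bot excluded from reviews but still receiving summaries). The sketch below is a hypothetical local helper, not part of Optibot; it only relies on the key names documented on this page.

```python
#!/usr/bin/env python3
"""Hypothetical helper: flag users/labels excluded from reviews but not summaries.

Not part of Optibot; a local check built on the keys documented above.
"""
import json

with open(".optibot", encoding="utf-8") as f:
    config = json.load(f)

reviews = config.get("reviews", {})
summary = config.get("summary", {})

for key in ("excludedUsers", "excludedLabels"):
    only_in_reviews = set(reviews.get(key, [])) - set(summary.get(key, []))
    if only_in_reviews:
        print(f"{key}: excluded from reviews but still summarized: {sorted(only_in_reviews)}")
    else:
        print(f"{key}: reviews and summary exclusions are consistent")
```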

Last updated on Dec 08, 2025

Optibot configuration for auto CI fixes

Optibot CI Fixer

The enableCIFixer parameter controls automated CI/CD pipeline issue detection and fixes. When this feature is turned on, Optibot monitors your CI/CD pipelines configured in GitHub Actions. If any CI/CD check in a workflow fails, Optibot intercepts the error and analyzes the root cause. Once the root cause is determined, it thinks through the solution and creates a brand new draft PR with a fix, along with a detailed explanation of that fix.

Parameter Details

Type: boolean
Default: false
Platform Support: ⚠️ GitHub only (GitHub Actions integration)
Description: Enables automated analysis and fixing of CI/CD pipeline failures.

When to use:
- Enable for projects with complex CI/CD pipelines.
- Useful for catching common pipeline issues automatically.
- Helps reduce time spent debugging CI failures.

Example:

```json
{
  "enableCIFixer": true
}
```

What it does:
- Monitors GitHub Actions workflow runs
- Analyzes failed CI checks
- Provides suggestions or creates fix PRs for common issues
- Identifies configuration problems in workflow files

Common issues it can help with:
- Missing environment variables
- Incorrect workflow syntax
- Dependency installation failures
- Test configuration issues
- Build script errors

Requirements:
- GitHub Actions must be enabled
- Only works with GitHub repositories
- Requires appropriate permissions to access workflow runs

⚠️ GitLab users: This feature is not available on GitLab. Set it to false or omit it from your configuration.
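Since this feature only applies when GitHub Actions is in use, a simple local check is to warn when enableCIFixer is turned on but no workflow files exist. The sketch below is a convenience script, not something Optibot requires or ships, and it assumes the standard .github/workflows/ layout.

```python
#!/usr/bin/env python3
"""Sketch: warn if enableCIFixer is on but no GitHub Actions workflows exist.

A local convenience check, not an Optibot feature.
"""
import json
from pathlib import Path

config = json.loads(Path(".optibot").read_text(encoding="utf-8"))

if config.get("enableCIFixer"):
    workflow_dir = Path(".github/workflows")
    workflows = list(workflow_dir.glob("*.y*ml")) if workflow_dir.is_dir() else []
    if not workflows:
        print("enableCIFixer is true, but no workflows were found under .github/workflows/")
    else:
        print(f"enableCIFixer is true; found {len(workflows)} workflow file(s)")
else:
    print("enableCIFixer is not enabled")
```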

Last updated on Dec 08, 2025

Crafting effective Optibot guidelines

What are Optibot guideline files?

Optibot guidelines are context-engineering documents that enhance your team's agentic code reviews. Our sweet-spot target for file length is between 1,200 and 2,200 words. We suggest prioritizing high-signal rules over comprehensive coverage, and structuring content for optimal performance of your Optibot agent. Think "high-impact heuristics" rather than a "complete rulebook."

How guideline files contribute to Optibot's context budget

Optibot uses AI models with large context windows for its code reviews. By default, Optibot spends its context budget on collecting code context and reading files. While there's no limit to how long an Optibot guideline file can be, in our evals and research we see a degradation in attention as guideline files get longer. This is typical across models: an overlong file clouds the model's judgment and its effective use of its thinking budget.

The Attention Budget Problem

LLMs operate with an "attention budget" similar to human working memory. To put things into perspective, there's limited context you can hold on to when carrying out a code review. Even when multiple agents are involved, the hand-off of information across expert reviewers can result in the following deficiencies:

- Context rot: accuracy decreases as context length increases.
- n² complexity: transformers create pairwise relationships between tokens, so more tokens means thinner attention spread across all of the information.
- Diminishing returns: every additional token competes for the model's limited attention.

In Optibot, your guidelines compete with patch content, file context, conversation history, system prompts, and tool outputs. Make every token count.

The Goldilocks Zone: Right-Sizing Your Guidelines

Too short (<200 words):
- Vague guidance, for example "Write clean Rails code"
- Lacks project-specific context
- Forces the model to rely on generic patterns
- Misses critical architectural decisions

Too long (>3,000 words):
- Dilutes high-priority rules with noise
- Creates context rot: critical rules get lost
- Repeats and enforces obvious patterns models already know
- Wastes thinking budget on redundant information

Just right (1,200-2,200 words):
- Clear, enforceable heuristics
- Project-specific architectural patterns and highly enforceable shared values among internal engineering teams
- Prioritized by impact (security > style), for example putting linting guidelines in the reviewer versus running the linter in a check suite
- Focuses on what's unique to your codebase
- Highly useful for infrastructure as code, microservices, or external patterns not present in the repo itself

Rule of thumb: if your guideline would apply to 90% of projects, Optibot probably knows it already. Focus on what makes your codebase unique.

What to Include: High-Signal Rules

1. Architecture-Specific Patterns (Highest Priority)

Define patterns unique to your system architecture. Highly specific, enforceable rules like these are proven to prevent actual bugs in your architecture, based on our evals and testing. Here are some commonly seen examples across our customer base.
Example for a microservices architecture:

```markdown
### Service Communication
- All inter-service calls MUST use the message queue, not direct HTTP
- Service responses must include correlation IDs for tracing
- Timeout all external service calls at 5 seconds with a circuit breaker pattern
```

Example for a monolithic application:

```markdown
### Database Transaction Pattern
- Always use transaction wrappers for multi-table operations
- Never instantiate database connections directly; use the connection pool
- All data access must go through the repository layer
```

2. Security & Compliance Requirements

Non-negotiable rules with concrete criteria that pertain to internal security practices. Optibot already enforces its own high security standards based on the OWASP Top 10 out of the box. We suggest adding any standards beyond that in your guidelines file.

```markdown
### Multi-Tenant Data Isolation
- All database queries MUST include tenant_id in the WHERE clause
- Cross-tenant data access requires explicit audit logging
- Admin operations must validate tenant context before execution
- Data exports must be scoped to a single tenant with a verification step

### PCI Compliance for Payment Data
- Payment card data is never stored in the application database
- Tokenized references only; tokens must expire after 15 minutes
- All payment processing logs must exclude card details (mask to last 4 digits)
- Failed payment attempts trigger a security review after 3 failures
```

3. Technology-Stack Conventions

Any rules that are specific to your tools and frameworks. We highly recommend setting this up if you use internal packages, forked or unsupported libraries that you maintain yourself, or large utilities and helpers that are a core part of your tech stack.

Example for a typed language (TypeScript, Java, C#):

```markdown
### Types Management
- Place shared type definitions in `/types` or `/models` folder
- Local types stay in the same file where used
- Avoid using `any` or equivalent; prefer strongly typed alternatives
- Enable strict type checking in your configuration
```

Example for error handling:

```markdown
### Error Handling Pattern
- All exceptions must extend our custom ApplicationError base class
- Include error codes for client categorization (AUTH_001, VAL_002, etc.)
- Never expose internal error details to clients; use sanitized messages
- Log full error context server-side for debugging
```

4. Domain-Specific Logic

These are business rules or domain constraints unique to your application that are not clearly defined in code. In other words, your repo contains no data structure or readable logic that Optibot can access for these practices. This can include requirements of how you conduct business, standardized compliance in your industry, or a unique enforcement of processes that needs to be represented in code.
Example for a fintech application:

```markdown
### Transaction Processing Rules
- All financial transactions must be idempotent with unique transaction IDs
- Account balance updates require pessimistic locking to prevent race conditions
- Transactions over $10,000 must trigger the AML (Anti-Money Laundering) review workflow
- Failed transactions must maintain a full audit trail for regulatory compliance
- Currency conversions must use rates locked at transaction initiation time

### Ledger Integrity
- Double-entry bookkeeping: every debit must have a corresponding credit
- Account balances must be calculated from ledger entries, never cached
- Ledger entries are immutable; corrections require new offsetting entries
```

Example for a healthcare tech system:

```markdown
### HIPAA Compliance for PHI (Protected Health Information)
- Patient data access must be logged with user ID, timestamp, and reason
- PHI cannot be transmitted without end-to-end encryption (TLS 1.3+)
- All patient records require explicit consent flags before sharing
- Data retention: medical records must be retained for a minimum of 7 years
- De-identification must remove the 18 HIPAA identifiers before using data in analytics

### Clinical Data Validation
- Lab results outside normal ranges must be flagged for physician review
- Prescription dosages must be validated against patient weight and age
- Drug interaction checks are required before prescription confirmation
- Allergy checks are mandatory before any medication-related operation
```

What to Exclude: Low-Signal Noise

Optibot already knows a lot about generic best practices across different languages and thousands of libraries and frameworks. It's best to avoid reiterating these in your guideline files. Here are some clear don'ts.

Skip generic best practices and linting rules:
- "Use descriptive variable names"
- "Add comments to complex code"
- "Write unit tests for business logic"
- "Use 2 spaces for indentation"
- "Maximum line length 100 characters"
- "Add trailing commas in multi-line objects"

Skip language basics:
- "Use async/await for asynchronous code"
- "Avoid deprecated language features"
- "Close database connections after use"
- "Handle exceptions properly"

Avoid meta-instructions for the AI. Don't try to direct the AI's behavior:
- "Pay special attention to..."
- "Always check for..."
- "Remember to consider..."

Structure for Optibot Guidelines: Optimizing for Best Results

Structure your guidelines with the most critical rules first. This signals priority and ensures reviewers (both human and AI) immediately understand what matters most to your team. Leading with high-impact rules sets the tone for the entire document.

Recommended document structure:

```markdown
# [Your Project Name] Coding Guidelines

## Critical Architectural Rules
[Top 5-10 most important, non-negotiable patterns]

## Security & Compliance
[Security requirements, auth patterns, data protection]

## Technology Stack Patterns
[How you use your specific frameworks and tools]

## API Design Conventions
[REST patterns, error handling, response formats]

## Domain-Specific Logic
[Business rules unique to your domain]
```

Writing Style: Be Specific and Actionable

Your guidelines should be concrete enough to catch real issues, but flexible enough to handle legitimate edge cases. This means avoiding both vague language and rigid language. In our tests, Optibot caught more production issues when its guidelines used action-based verbiage.
❌ Language that degrades results and produces noisier reviews

Vague language (covering what models already know):

```markdown
### Code Quality
- Write clean, maintainable code
- Follow best practices
- Ensure proper error handling
```

Rigid language (brittle rules with endless exceptions; Optibot will struggle with edge cases you didn't anticipate):

```markdown
- If a service needs database access, inject DatabaseService in the constructor
- If it needs HTTP calls, inject HttpClient in the constructor
- If it needs both, inject both in the constructor
- Never inject more than 5 dependencies
- If you need more than 5, create a facade service
- Unless it's a utility service, then inject directly
- Except for background jobs, which should use service locator pattern...
```

✅ Language that results in higher code quality and code hygiene

Best practice is to be specific and flexible: back your guidelines with code examples or mention internal patterns.

```markdown
### Service Dependencies (Dependency Injection Required)
- All external dependencies (database, HTTP, cache) must be injected via constructor
- Avoid direct instantiation of services; it makes testing impossible
- More than 4-5 dependencies suggests the service has too many responsibilities

### Multi-Tenant Database Queries
- Every query touching user/customer data MUST filter by tenant_id
- Cross-tenant operations require explicit admin authorization check
- Example violation: `SELECT * FROM orders WHERE user_id = ?` (missing tenant_id)
```
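As a quick way to stay inside the 1,200-2,200 word sweet spot described above, you can count words in your guidelines file before committing changes to it. The sketch below is a local convenience script, not an Optibot feature, and the filename is only an example.

```python
#!/usr/bin/env python3
"""Check a guidelines file against the 1,200-2,200 word sweet spot described above.

A local convenience script, not part of Optibot; the filename is just an example.
"""
from pathlib import Path

GUIDELINES_PATH = Path("optibot-guidelines.md")  # hypothetical filename
LOW, HIGH = 1200, 2200

words = len(GUIDELINES_PATH.read_text(encoding="utf-8").split())

if words < LOW:
    print(f"{words} words: likely too short; consider adding project-specific rules")
elif words > HIGH:
    print(f"{words} words: likely too long; trim generic or low-signal rules")
else:
    print(f"{words} words: within the recommended {LOW}-{HIGH} word range")
```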

Last updated on Dec 09, 2025

Getting started with Optibot Guidelines

Getting Started

1. Start small: Begin with your top 10 most critical rules.
2. Iterate: Add rules based on recurring PR feedback over time.
3. Guidelines size: Keep guidelines files between 1,200 and 2,200 words.
4. Language: Use flexible but specific language. Only cover gaps in Optibot's knowledge rather than common patterns.
5. Configure Optibot: Point to your guidelines file in your Optibot configuration.
6. Test & refine: Monitor review quality and adjust guidelines accordingly.

Remember: your guidelines are context engineering, not documentation. Focus on what makes your codebase unique, and let Optibot handle the rest.

How do I link to my guidelines file in the .optibot config?

For Optibot to incorporate your guideline file into your code review process, it must be present as Markdown within your repository on GitHub or GitLab. Guideline files have a 1:1 relationship with your repositories: each repository should have its own guidelines file. To use an Optibot guidelines file, add it to your .optibot config as shown below. In this example our file is called optibot-guidelines.md and it lives in our optimal-monorepo repo.

Example config:

```json
{
  "guidelinesUrl": "https://github.com/OptimalRepo/optimal-monorepo/blob/development/optibot-guidelines.md"
}
```

Measuring guidelines success

Your guidelines are effective when:

1. Optibot catches issues you care about (architectural violations, security gaps)
2. The false positive rate is low (reviews are relevant, not nitpicky)
3. Engineers reference the guidelines in PR discussions (they're actually useful)
4. New team members onboard faster (guidelines encode tribal knowledge)

Here are some metrics you can track over 2-4 weeks:

- Number of guideline-violation comments per PR
- Number of comments and discussions with Optibot per PR
- Engineer agreement rate with Optibot feedback
- Time to resolve violations

P.S. You can track some of these metrics in the Optimal Insights platform.

Advanced Documentation on Optibot Guidelines

- The science behind Optibot guidelines
- How to structure your Optibot guidelines file
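The docs on this page show guidelinesUrl in two forms: a repository-relative path (docs/guidelines/README.md in the extensive example earlier) and a full URL as shown above. The sketch below is a hypothetical local check, not an Optibot feature, that simply confirms whichever form you configured points at something reachable before you rely on it in reviews.

```python
#!/usr/bin/env python3
"""Sketch: confirm the guidelinesUrl in .optibot points at something reachable.

A hypothetical local check, not an Optibot feature. Handles the two forms shown
in these docs: a repository-relative path and a full https:// URL.
"""
import json
import urllib.request
from pathlib import Path

config = json.loads(Path(".optibot").read_text(encoding="utf-8"))
guidelines = config.get("guidelinesUrl")

if not guidelines:
    print("No guidelinesUrl configured")
elif guidelines.startswith(("http://", "https://")):
    request = urllib.request.Request(guidelines, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            print(f"Reachable ({response.status}): {guidelines}")
    except Exception as err:  # network errors, 404s, private repos, etc.
        print(f"Could not reach {guidelines}: {err}")
else:
    path = Path(guidelines)
    print(f"{'Found' if path.is_file() else 'Missing'} local guidelines file: {path}")
```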

Last updated on Dec 09, 2025