Review Generation
Writing Reviews Developers Actually Read
Actionable Over Observational
The difference between a useful code review tool and an ignored one is the quality of its output. Consider two review comments for the same issue:
Bad: "This might have a security issue."
Good: "Line 42: SQL injection risk -- user input interpolated directly into query string. Replace with parameterized query: db.query('SELECT * FROM payments WHERE user_id = $1', [userId])"
The first is observational -- it describes what might be wrong without helping fix it. The second is actionable -- it points to the exact line, explains the problem, and shows the fix. Developers can accept the suggestion in one click.
Comment Anatomy
Every review comment has five components:
| Component | Purpose | Example |
|---|---|---|
| Location | Exact file and line | `src/api/payments/route.ts:42` |
| Severity badge | Visual priority signal | `[CRITICAL]`, `[HIGH]`, `[MEDIUM]`, `[LOW]` |
| Explanation | What's wrong and why it matters | "User input interpolated into SQL. Attackers can inject arbitrary queries." |
| Code suggestion | Diff-formatted fix | Shows before/after as a unified diff |
| Category tag | Filterable concern type | `security`, `bug`, `anti-pattern` |
The code suggestion is the most impactful component. GitHub, GitLab, and other platforms render suggestion diffs as one-click "apply" buttons. If your tool outputs fixes in the right format, developers accept suggestions without manual editing.
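The five components above can be assembled into a single comment body. The sketch below is illustrative (the `ReviewComment` shape and `formatGitHubComment` name are assumptions, not a real API); it targets GitHub's convention that a fenced block tagged `suggestion` renders as a one-click "apply" button:

```typescript
// Minimal sketch: assemble location, severity badge, category tag,
// explanation, and a GitHub-style suggestion block into one comment.
interface ReviewComment {
  path: string;
  line: number;
  severity: "CRITICAL" | "HIGH" | "MEDIUM" | "LOW";
  category: string;       // e.g. "security", "bug", "anti-pattern"
  explanation: string;    // what's wrong and why it matters
  replacement: string;    // code that should replace the flagged line
}

function formatGitHubComment(c: ReviewComment): string {
  return [
    `[${c.severity}] (${c.category}) ${c.path}:${c.line}`,
    c.explanation,
    "```suggestion",      // GitHub renders this fence as an "apply" button
    c.replacement,
    "```",
  ].join("\n");
}
```

The same structure maps onto GitLab's suggestion syntax with only the fence label changed.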
Severity Classification
Not all issues are equal. The three-tier priority system tells developers where to focus:
Must-fix -- Blocks merge. Critical severity issues and high-confidence high severity issues. These are security vulnerabilities, data corruption risks, and crash bugs. The PR should not be merged until these are resolved.
Should-fix -- Recommended changes. High severity issues with lower confidence, and medium severity issues. These improve the code meaningfully but aren't blocking. Most PRs have a few should-fix items.
Nice-to-have -- Optional improvements. Low severity issues and style concerns. Including these shows thoroughness but they should never dominate the review. A PR with 2 must-fix items and 20 nice-to-have items should lead with the must-fix items, not bury them.
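The tiering rules above reduce to a small decision function. This is a sketch under one assumption: the confidence cutoff separating must-fix from should-fix for high-severity issues (0.8 here) is illustrative, not a value from the source:

```typescript
type Severity = "critical" | "high" | "medium" | "low";
type Tier = "must-fix" | "should-fix" | "nice-to-have";

// Map severity + confidence to a priority tier.
// The 0.8 confidence threshold is an assumed tuning value.
function triage(severity: Severity, confidence: number): Tier {
  if (severity === "critical") return "must-fix";
  if (severity === "high") {
    return confidence >= 0.8 ? "must-fix" : "should-fix";
  }
  if (severity === "medium") return "should-fix";
  return "nice-to-have";
}
```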
Comment Limits
Review fatigue is real. A PR with 50 inline comments is overwhelming and counterproductive. The comment generator caps at 15 inline comments per PR. Beyond that, issues are mentioned in the summary without inline placement.
The 15-comment limit is reached by priority: all must-fix comments first, then should-fix, then nice-to-have until the cap. This ensures critical issues always get inline placement.
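A sketch of that cap-by-priority selection (the `Issue` shape and function name are hypothetical): sort by tier, take the first 15 for inline placement, and route the rest to the summary.

```typescript
type Tier = "must-fix" | "should-fix" | "nice-to-have";

interface Issue {
  tier: Tier;
  message: string;
}

// Fill inline slots by priority -- all must-fix first, then should-fix,
// then nice-to-have -- until the cap. Overflow is summary-only.
function selectInlineComments(
  issues: Issue[],
  cap = 15,
): { inline: Issue[]; summaryOnly: Issue[] } {
  const rank: Record<Tier, number> = {
    "must-fix": 0,
    "should-fix": 1,
    "nice-to-have": 2,
  };
  const sorted = [...issues].sort((a, b) => rank[a.tier] - rank[b.tier]);
  return { inline: sorted.slice(0, cap), summaryOnly: sorted.slice(cap) };
}
```

Because the sort is by tier, a critical issue can never be pushed out of inline placement by a pile of style nits.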
Fix Suggestions
The fix suggester generates concrete code alternatives for each issue type:
For example, a possible null dereference becomes optional chaining (`?.`) or an explicit guard, and a hardcoded secret becomes an environment variable reference (`process.env.YOUR_SECRET`). Each fix is formatted as a unified diff that platforms can render as an "apply suggestion" button.
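For a single-line fix, the unified diff is mechanical to produce. A minimal sketch (the function name is hypothetical, and the hunk header assumes a one-line-for-one-line replacement):

```typescript
// Build a minimal unified diff replacing one line of a file.
// Hunk header format: @@ -oldStart,oldCount +newStart,newCount @@
function unifiedDiffForLine(
  path: string,
  line: number,
  oldLine: string,
  newLine: string,
): string {
  return [
    `--- a/${path}`,
    `+++ b/${path}`,
    `@@ -${line},1 +${line},1 @@`,
    `-${oldLine}`,
    `+${newLine}`,
  ].join("\n");
}
```

Multi-line fixes need real hunk counts and a few lines of surrounding context, but the structure is the same.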
The PR Summary
The summary is what the PR author reads first. It answers three questions.
The summary tone adapts to severity. Critical security issues get urgent language. Minor improvements get encouraging language. A review that says "Great refactor! Two small suggestions..." creates a very different developer experience than "CRITICAL: SQL injection must be fixed."
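Tone adaptation can be as simple as branching on the issue counts. This sketch is illustrative (the function name and wording are assumptions), showing urgent language when blockers exist and encouraging language otherwise:

```typescript
// Choose the summary's opening line based on issue counts:
// urgent when anything blocks the merge, encouraging otherwise.
function summaryOpening(mustFix: number, shouldFix: number): string {
  if (mustFix > 0) {
    return `This PR has ${mustFix} blocking issue(s) that must be fixed before merge.`;
  }
  if (shouldFix > 0) {
    return `Looks solid overall -- ${shouldFix} recommended improvement(s) below.`;
  }
  return "Great work! No blocking issues found.";
}
```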
This is chapter 4 of AI Code Review Agent.