Review App
Building the Interface Teams Actually Use
From CLI to Web App
A command-line review tool gets used by the engineers who built it. A web app with good UX gets adopted by the whole team -- including managers who want to see quality trends and leads who want to understand review coverage.
The review app has two primary views: the dashboard (bird's-eye view of all PRs) and the review detail (deep dive into one PR's issues).
The Dashboard
The dashboard answers: "How is our codebase doing?"
PR List -- Recent pull requests displayed as cards. Each card shows the title, author, quality score (color-coded badge), issue count, and time since submission. Green badges (>= 80) are safe. Yellow (60-79) need attention. Red (< 60) need changes.
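The badge thresholds above can be captured in a small pure function. This is a sketch; the name `badgeColor` is illustrative, not part of the app's actual API.

```typescript
// Hypothetical helper mapping a quality score to a badge color,
// using the thresholds described above.
type BadgeColor = "green" | "yellow" | "red";

function badgeColor(score: number): BadgeColor {
  if (score >= 80) return "green"; // safe
  if (score >= 60) return "yellow"; // needs attention
  return "red"; // needs changes
}
```

Keeping the thresholds in one function means the dashboard cards, the detail gauge, and any Slack summary all agree on what "green" means.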
Quality Trends -- A line chart showing the rolling average score over the last 20 PRs. Per-dimension lines reveal whether security, correctness, maintainability, or performance is driving the trend. A declining security line demands immediate attention.
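The rolling average behind the trend chart is simple to state precisely: average the most recent `window` scores (20 in the chart above). A minimal sketch, with `rollingAverage` as an assumed name:

```typescript
// Average the most recent `window` scores; applied per dimension
// (security, correctness, etc.) to draw the per-dimension lines.
function rollingAverage(scores: number[], window = 20): number {
  const recent = scores.slice(-window);
  if (recent.length === 0) return 0;
  return recent.reduce((sum, s) => sum + s, 0) / recent.length;
}
```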
Quick Stats -- Aggregate numbers that executives care about: average score, PRs needing changes, total issues, and trend direction. These numbers fit in a Slack update or standup summary.
The dashboard is a server component -- it reads review data at render time with no client-side state management. In production, review results would be cached in a database and refreshed when new PRs are reviewed.
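The shape of that render-time load can be sketched as a plain async function; in a Next.js server component this runs on the server per request. The names `loadReviews`, `ReviewSummary`, and `loadDashboardData` are illustrative, not the app's actual identifiers.

```typescript
// Hypothetical render-time data load for the dashboard. The reviews
// source is injected so the same logic works against a database or cache.
interface ReviewSummary {
  prId: string;
  score: number;
  issueCount: number;
}

async function loadDashboardData(
  loadReviews: () => Promise<ReviewSummary[]>
): Promise<{ reviews: ReviewSummary[]; averageScore: number }> {
  const reviews = await loadReviews();
  const averageScore =
    reviews.length === 0
      ? 0
      : reviews.reduce((sum, r) => sum + r.score, 0) / reviews.length;
  return { reviews, averageScore };
}
```

Because no client-side state is involved, there is nothing to hydrate or synchronize; the page is just a function of the stored review data.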
The Review Detail View
When you click a PR card, you see the full review. Three sections:
Review Summary
At the top: PR title, author, branch, overall score in a colored gauge, recommendation (APPROVE / REQUEST CHANGES), and the summary paragraph. This is the first thing the developer reads.
Diff Viewer
The center panel shows the unified diff with syntax highlighting, with each file in its own collapsible section.
Issue Sidebar
The right panel lists all issues with severity and category filters. Click an issue to scroll to its location in the diff. The sidebar also shows per-dimension quality scores as colored bars, giving a quick visual of where the PR struggles.
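The sidebar's severity and category filters reduce to a pure function over the issue list. A sketch, where the `Issue` shape is an assumption:

```typescript
// Filter issues by optional severity and category; omitting a filter
// leaves that dimension unrestricted, matching typical sidebar behavior.
interface Issue {
  severity: "high" | "medium" | "low";
  category: string;
  line: number; // used to scroll to the issue's location in the diff
}

function filterIssues(
  issues: Issue[],
  severity?: Issue["severity"],
  category?: string
): Issue[] {
  return issues.filter(
    (i) =>
      (severity === undefined || i.severity === severity) &&
      (category === undefined || i.category === category)
  );
}
```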
The API Route
The /api/review endpoint orchestrates the full pipeline:
POST /api/review
Body: { diff: string } or { prId: string }
Pipeline: Parse -> Detect -> Score -> Review
Response: { review, score, trend }

The route streams progress via Server-Sent Events so the UI can show phase indicators -- "Parsing... Detecting... Scoring... Generating review..." -- as each stage completes. This prevents the user from staring at a blank screen while the pipeline runs.
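Each phase indicator can be emitted as one Server-Sent Event. The sketch below shows the SSE wire format (each frame is `event:`/`data:` lines terminated by a blank line); the event name `progress` and the `Phase` type are assumptions, not the route's actual contract.

```typescript
// Format one SSE frame announcing a pipeline phase. The client's
// EventSource listener for "progress" receives the JSON in `data:`.
type Phase = "parsing" | "detecting" | "scoring" | "reviewing";

function sseEvent(phase: Phase, done = false): string {
  return `event: progress\ndata: ${JSON.stringify({ phase, done })}\n\n`;
}
```

In the route handler, these strings would be written to the response stream as each pipeline stage starts and finishes.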
Error handling is partial-success friendly: if detection crashes on one file, the route returns results for the files that succeeded. A partial review is more useful than no review.
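That partial-success behavior amounts to catching per-file failures instead of failing the whole request. A minimal sketch, where `detect` stands in for the real detector and the result types are illustrative:

```typescript
// Run detection per file; collect failures rather than aborting.
interface FileResult {
  file: string;
  issues: string[];
}
interface PartialReview {
  results: FileResult[];
  failedFiles: string[];
}

function detectAll(
  files: string[],
  detect: (file: string) => string[]
): PartialReview {
  const results: FileResult[] = [];
  const failedFiles: string[] = [];
  for (const file of files) {
    try {
      results.push({ file, issues: detect(file) });
    } catch {
      failedFiles.push(file); // keep going: a partial review beats none
    }
  }
  return { results, failedFiles };
}
```

Returning `failedFiles` explicitly lets the UI flag which files were skipped rather than silently omitting them.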
State Management
The review page keeps client state minimal. No global state library (Redux, Zustand) is needed: the review data comes from the server via the API route, and client state only tracks UI interactions such as filters and the selected issue.
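The entire client state can be modeled as one small object with pure updaters; in React this would be a few useState hooks. The field names below are illustrative, not the app's actual state shape.

```typescript
// All the interactive state the review page needs.
interface ReviewUiState {
  selectedIssueId: string | null; // issue highlighted in the diff
  severityFilter: string | null; // sidebar severity filter
  collapsedFiles: Set<string>; // collapsed diff sections
}

// Toggle a file's collapsed state immutably, as a React setter would.
function toggleFile(state: ReviewUiState, file: string): ReviewUiState {
  const collapsedFiles = new Set(state.collapsedFiles);
  if (collapsedFiles.has(file)) collapsedFiles.delete(file);
  else collapsedFiles.add(file);
  return { ...state, collapsedFiles };
}
```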
Production Considerations
This is chapter 5 of AI Code Review Agent.