Code Review

AI-powered code review with Claude, Codex, and more

Overview

Agentastic includes built-in code review powered by AI agents. Review your changes before committing, or get suggestions on existing code. Multiple agents can run in parallel, each offering a different perspective.

Supported Agents

Claude Code

Anthropic's Claude provides intelligent code review with deep understanding of code patterns and best practices.

Command: claude "$(cat 'prompt_file')"

Requirements:

  • Claude Code CLI installed (npm install -g @anthropic-ai/claude-code)
  • Anthropic API key configured

Codex

OpenAI's Codex agent for code analysis and suggestions.

Command: codex review "$(cat 'prompt_file')"

Requirements:

  • Codex CLI installed
  • OpenAI API key configured

CodeRabbit

CodeRabbit integration for automated PR reviews and code quality checks.

Command: coderabbit review --plain

Requirements:

  • CodeRabbit CLI installed
  • CodeRabbit account connected

Cursor Agent

Cursor's AI agent for code review.

Command: cursor-agent --model auto --print "$(cat 'prompt_file')"

Using Code Review

Review Current Changes

  1. Click the Code Review button in the toolbar
  2. Select which agent(s) to use (or use your defaults from Settings)
  3. The agent runs in a new terminal tab
  4. Review the feedback in the terminal output

From the Toolbar

  • Click - Run review with enabled agents from Settings
  • Hold/Menu - Select specific agent(s) to use

What Gets Reviewed

The review prompt includes:

  • Current branch vs target branch comparison
  • Complete unified diff of all changes
  • Commit history between branches
  • Review criteria prioritization
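Agentastic's exact prompt format is internal, but an equivalent prompt can be sketched with plain git commands. The function below is illustrative only: the function name and section headers are assumptions, and it assumes you pass the target branch name (e.g. main).

```shell
# Hypothetical sketch of assembling a review prompt like the one described
# above; not Agentastic's actual implementation.
build_review_prompt() {
  target="$1"
  # Current branch vs target branch comparison
  printf 'Reviewing %s vs %s\n\n' "$(git rev-parse --abbrev-ref HEAD)" "$target"
  # Commit history between branches
  printf '## Commit history\n'
  git log --oneline "$target..HEAD"
  # Complete unified diff of all changes (three-dot: since the merge base)
  printf '\n## Unified diff\n'
  git diff "$target...HEAD"
}
```

Running `build_review_prompt main` on a feature branch prints a branch comparison header, the commits unique to the branch, and the full diff, which is the same information the built-in review prompt carries.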

Review Criteria

Agents are prompted to prioritize:

  1. Bugs - Logic errors, edge cases, null handling
  2. Security - Vulnerabilities, input validation, secrets
  3. Performance - Inefficiencies, memory leaks, N+1 queries
  4. Maintainability - Code clarity, documentation, patterns
  5. Testing - Test coverage, test quality

Multi-Agent Reviews

Enable multiple agents to get diverse perspectives:

  1. Go to Settings > Code Review
  2. Enable the agents you want to use
  3. Click Review - all enabled agents run in parallel
  4. Each agent opens in its own terminal tab

Different agents catch different issues:

  • Claude excels at understanding intent and architecture
  • Codex focuses on code patterns and best practices
  • CodeRabbit specializes in PR-specific feedback

Adding Custom Agents

Add your own review agents to integrate team tools or alternative AI services.

Configuration

  1. Go to Settings > Code Review
  2. Scroll to Custom Agents
  3. Click Add Agent
  4. Enter:
    • Name: Display name for the agent
    • Command: Shell command to execute

Command Template

Use {prompt} as a placeholder for the review prompt:

my-agent review --prompt "{prompt}"

If {prompt} is not in your command, the prompt is passed via a temporary file:

my-agent "$(cat 'prompt_file')"
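The substitution rule above can be illustrated with a small shell sketch. This is not Agentastic's actual implementation; the function name is invented, and the simple sed replacement assumes a prompt without slashes, ampersands, or newlines.

```shell
# Illustrative model of the {prompt} substitution rule (not the real code).
expand_review_command() {
  cmd="$1"
  prompt="$2"
  case "$cmd" in
    *"{prompt}"*)
      # Placeholder present: substitute the prompt inline.
      # (sed is a simplification; it breaks on /, &, or newlines in the prompt.)
      printf '%s\n' "$cmd" | sed "s/{prompt}/$prompt/"
      ;;
    *)
      # No placeholder: write the prompt to a temp file and append
      # a "$(cat 'file')" argument, as in the fallback form above.
      f=$(mktemp)
      printf '%s' "$prompt" > "$f"
      printf '%s "$(cat '\''%s'\'')"\n' "$cmd" "$f"
      ;;
  esac
}
```

For example, `expand_review_command 'my-agent review --prompt "{prompt}"' 'check nulls'` yields an inline substitution, while `expand_review_command 'my-agent' 'check nulls'` falls back to the temp-file form.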

Examples

Local LLM with Ollama:

ollama run codellama "$(cat 'prompt_file')"

Custom script:

./scripts/review.sh "{prompt}"

Team review tool:

team-reviewer --format=terminal "{prompt}"
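As a concrete (hypothetical) shape for the ./scripts/review.sh example above, the function below prepends team guidelines and pipes the prompt to a reviewer command. REVIEWER_CMD is an assumed environment variable, not an Agentastic setting; it defaults to cat so the sketch runs without any AI CLI installed.

```shell
# Hypothetical custom review script. In practice this would be a standalone
# file (e.g. scripts/review.sh) receiving the prompt via "{prompt}" as $1.
review_sh() {
  prompt="$1"
  # Prepend house guidelines, then forward everything to the reviewer.
  # REVIEWER_CMD is an assumption for illustration (e.g. a local model CLI).
  printf 'Team guidelines: flag TODOs and missing tests.\n\n%s\n' "$prompt" \
    | ${REVIEWER_CMD:-cat}
}
```

Setting REVIEWER_CMD to, say, a local-model invocation turns this into a working custom agent while keeping team guidelines in one place.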

Configuration

Access code review settings in Settings > Code Review:

  • Enabled Agents - Toggle which built-in agents to use
  • Custom Agents - Add your own review commands

Review Workflow

A typical review workflow:

  1. Make your code changes
  2. View changes in the Diff Viewer
  3. Click Review to get AI feedback
  4. Address suggestions
  5. Commit when satisfied

Tips

  • Review smaller changes for more focused feedback
  • Use multiple agents for diverse perspectives
  • Custom agents can integrate with your team's tools
  • Combine AI review with human review for best results
  • Check the terminal for detailed feedback and suggestions