How to Run Ralph Loops with Claude Code — Step-by-Step Tutorial
What you’ll build
By the end of this tutorial, you’ll have Claude Code running autonomous Ralph loops — structured plan-implement-test-verify-PR cycles that ship features without manual intervention. You’ll go from an empty terminal to a merged pull request in three commands.
Prerequisites:
- Claude Code installed (`npm install -g @anthropic-ai/claude-code`)
- An Anthropic API key (or a Claude Max subscription)
- A git repository with at least one commit
Step 1: Install Wiggum CLI
npm install -g wiggum-cli

Wiggum is a free, open-source CLI that orchestrates the Ralph loop. It generates specs, manages phase execution, and monitors progress — Claude Code handles the actual coding.
Verify the installation:
wiggum --version
Step 2: Initialize your project with wiggum init
Navigate to your project directory and run:
wiggum init
This launches a TUI that walks you through six phases:
- Scanning — Wiggum auto-detects your tech stack: framework, package manager, test setup (unit and E2E), styling, database, ORM, API layer, auth, CI/CD, and more. Each detection includes a confidence score. Zero config files needed.
- Provider selection — Choose your AI provider: Anthropic, OpenAI, or OpenRouter.
- API key — Enter your API key. Wiggum offers to save it to `.ralph/.env.local` so you don’t need to re-enter it.
- Model selection — Pick models for planning (e.g. Opus) and implementation (e.g. Sonnet). The planning model handles spec generation and architectural decisions; the implementation model handles coding.
- AI analysis — Wiggum runs an agentic analysis of your codebase, producing structured context: entry points, key directories, build/test/dev commands, naming conventions, and project-specific implementation guidelines.
- File generation — Creates the `.ralph/` directory with everything the Ralph loop needs.
After init, your project has a .ralph/ directory containing:
.ralph/
ralph.config.cjs # Loop settings, models, CLI preferences
.env.local # API keys (gitignored)
.context.json # Codebase analysis results
prompts/ # Phase-specific prompt templates
guides/ # Architecture docs for the AI
specs/ # Where your feature specs live
scripts/ # feature-loop.sh (the Ralph loop engine)
Step 3: Generate a spec with wiggum new
wiggum new my-feature
This starts a structured AI interview in the TUI. The interview has five phases:
Context phase
Add reference material — URLs, files, or GitHub issues that provide context for your feature. If you have a GitHub issue:
wiggum new my-feature --issue #42
The --issue flag pulls the issue title, body, and labels directly into the interview context. You can pass multiple issues and additional reference URLs:
wiggum new my-feature --issue #42 --issue #38 --context https://docs.example.com/api
Goals phase
Describe what you want to build in plain English. Be as specific or general as you like — the interview will fill in gaps. For example: “Add a REST endpoint for user notifications with WebSocket push, rate limiting, and integration tests.”
Interview phase
The AI asks up to 10 clarifying questions, one at a time. Each question comes with structured answer options you can select from, or you can type a custom answer. The AI uses your codebase context — it reads files, searches for patterns, and explores your directory structure — to ask questions specific to your project, not generic ones.
Example questions you might see:
- “I see you’re using Express with middleware in `src/middleware/`. Should the rate limiter be a new middleware or integrated into the existing auth middleware?”
- “Your tests use Vitest with fixtures in `src/__tests__/fixtures/`. Should I include fixture data for the notification payloads?”
- “The existing WebSocket setup in `src/ws/server.ts` uses Socket.IO. Should notifications use the same connection or a separate channel?”
Say “done” at any point to skip remaining questions and move to generation.
Generation phase
The AI synthesizes everything — your goals, interview answers, codebase context, GitHub issue details — into a detailed markdown spec saved to .ralph/specs/my-feature.md. The spec includes:
- Feature overview and goals
- Implementation steps with file paths
- Edge cases and error handling
- Testing strategy (unit + E2E)
- Acceptance criteria
Headless mode
For scripting and CI pipelines, skip the interactive interview entirely:
wiggum new my-feature --auto --goals "Add JWT auth middleware with refresh tokens"
The --auto flag generates a spec non-interactively using the goals and any issue context you provide.
Step 4: Execute the Ralph loop with wiggum run
wiggum run my-feature
This is where Claude Code takes over. Wiggum spawns feature-loop.sh — the Ralph loop engine — which orchestrates Claude Code through five distinct phases:
Phase 1: Planning
Claude Code reads your spec at .ralph/specs/my-feature.md along with the codebase context from init. It produces my-feature-implementation-plan.md — a step-by-step checklist of tasks. The plan accounts for your project’s file structure, conventions, and existing patterns.
Phase 2: Implementation
Claude Code works through the implementation plan task by task. Each iteration:
- Reads the checklist and picks the next pending task
- Writes code, tests, and configuration changes
- Commits the work
- Loops until all tasks are complete or `maxIterations` is reached (default: 10)
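The checklist-driven iteration above can be sketched in shell. This is a simplified, hypothetical illustration, not the real feature-loop.sh: it treats a markdown checkbox list as the task queue, at the point where the real engine would invoke Claude Code and commit after each task. (Uses GNU sed's `0,/re/` address form.)

```shell
# Hypothetical sketch of checklist-driven iteration; the real loop
# invokes Claude Code for each task instead of just echoing it.
PLAN=$(mktemp)
printf -- '- [ ] add endpoint\n- [ ] add tests\n- [ ] wire websocket push\n' > "$PLAN"

MAX_ITERATIONS=10
i=0
while [ "$i" -lt "$MAX_ITERATIONS" ]; do
  # Pick the next pending task from the checklist
  task=$(grep -m1 '^- \[ \]' "$PLAN" | sed 's/^- \[ \] //')
  [ -z "$task" ] && break              # all tasks complete
  echo "iteration $((i + 1)): $task"   # <- invoke Claude Code + commit here
  # Mark that task done (flip the first pending checkbox)
  sed -i '0,/^- \[ \]/s//- [x]/' "$PLAN"
  i=$((i + 1))
done
echo "completed after $i iterations"
```

The loop terminates either when no pending checkbox remains or when the iteration budget runs out, which is the same stopping rule `maxIterations` enforces.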
Phase 3: E2E testing
Claude Code runs end-to-end tests against the implementation. If tests fail, it iterates — fixing either the code or the tests — up to `maxE2eAttempts` (default: 5).
Phase 4: Verification
Claude Code re-reads the original spec and checks every requirement against the actual implementation. Did it add the endpoint? Does the error handling match? Are edge cases covered? This isn’t a rubber stamp — it’s an explicit confirmation that catches the subtle cases where code “works” but doesn’t match what was specified.
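One way to picture this phase: treat the spec's acceptance criteria as a checklist and fail verification while any box remains unchecked. A hypothetical shell sketch (the spec contents and criteria here are invented for illustration):

```shell
# Hypothetical sketch: verification as an explicit spec-vs-implementation check.
SPEC=$(mktemp)
cat > "$SPEC" <<'EOF'
## Acceptance criteria
- [x] GET /notifications endpoint added
- [x] Rate limited at 100 requests/minute
- [ ] WebSocket push wired up
EOF

# Count criteria not yet confirmed against the actual code
unmet=$(grep -c '^- \[ \]' "$SPEC")
if [ "$unmet" -gt 0 ]; then
  echo "verification failed: $unmet unmet criteria"
else
  echo "verification passed"
fi
```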
Phase 5: PR review
The behavior depends on your review mode:
- `manual` (default) — Creates a PR and waits for human review
- `auto` — Auto-reviews the diff against the spec, then creates the PR
- `merge` — Auto-reviews, creates the PR, and auto-merges when checks pass
Set the review mode per run:
wiggum run my-feature --review-mode merge
Run flags
| Flag | Description |
|---|---|
| `--worktree` | Run in an isolated git worktree (recommended for parallel work) |
| `--resume` | Resume from last checkpoint instead of starting fresh |
| `--review-mode <mode>` | `manual`, `auto`, or `merge` |
| `--max-iterations <n>` | Override max implementation iterations (default: 10) |
| `--max-e2e-attempts <n>` | Override max E2E test attempts (default: 5) |
| `--model <model>` | Override the AI model for this run |
Monitoring the loop
The TUI RunScreen shows real-time progress while the loop executes:
- Phase progress bars — Each phase (planning, implementation, e2e_testing, verification, pr_review) shows status (pending, active, completed, failed) with duration
- Iteration counter — Current iteration out of max (e.g. “3/10”)
- Activity feed — Live stream of Claude Code’s actions: file edits, test runs, commits
- Token tracking — Real-time input/output token counts and cache metrics
- Task checklist — Implementation plan tasks with completion status
Background mode
Press Esc to background the loop and return to the Wiggum shell. The loop continues running. Re-attach anytime:
wiggum monitor my-feature
The monitor reads live status from the loop process — same TUI, no interruption to execution. You can also use `--stream` for headless line-by-line output instead of the TUI.
Completion summary
When the loop finishes, the RunScreen shows a completion summary:
- Phase durations (how long each phase took)
- Final iteration count
- File changes summary
- Commit history
- PR link (if created)
Configuring Claude Code behavior
Wiggum lets you control how Claude Code runs within the loop via ralph.config.cjs:
loop: {
maxIterations: 10, // max implementation iterations
maxE2eAttempts: 5, // max E2E test retries
defaultModel: 'sonnet', // model for implementation
planningModel: 'opus', // model for planning phase
codingCli: 'claude', // agent for implementation
reviewCli: 'claude', // agent for review phase
reviewMode: 'auto', // manual | auto | merge
claudePermissionMode: 'acceptEdits', // Claude Code permission level
}
The claudePermissionMode controls how much autonomy Claude Code has during loop execution. Options range from default (asks for approval on each action) to bypassPermissions (fully autonomous, no prompts).
Agent mode: autonomous backlog processing
Skip the manual workflow entirely. If you have GitHub issues labeled and ready:
wiggum agent
Agent mode runs a full autonomous pipeline:
- Reads your GitHub backlog — Fetches issues and ranks by priority labels (P0 > P1 > P2) and dependency order
- Selects the highest-priority unblocked issue — Skips shipped work, resumes partial branches
- Assesses feature state — Determines whether to start fresh, generate a plan, resume an existing implementation, or skip
- Generates a spec — Uses the issue context to produce an implementation-ready spec (same quality as `wiggum new`)
- Runs the full Ralph loop — Plan, implement, test, verify, PR
- Reviews the diff — Checks changes against the spec
- Auto-merges (if review mode is `merge`) — Then moves to the next issue
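The issue-selection step (rank by priority label, skip blocked work) can be illustrated with a small shell sketch. The backlog data and its three fields below are invented for the example; the real agent pulls this information from GitHub:

```shell
# Hypothetical sketch of priority selection: drop non-open work,
# sort so P0 beats P1 beats P2, take the first remaining issue.
backlog='42 P1 open
51 P0 blocked
38 P0 open
7 P2 open'

next=$(printf '%s\n' "$backlog" | awk '$3 == "open"' | sort -k2,2 | head -n1 | awk '{print $1}')
echo "next issue: #$next"
```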
Agent flags
| Flag | Description |
|---|---|
| `--labels bug,P0` | Filter to specific GitHub labels |
| `--issues 42,38` | Work only on specific issue numbers |
| `--max-items <n>` | Stop after N completed issues |
| `--max-steps <n>` | Stop after N orchestrator steps |
| `--review-mode merge` | Auto-merge when all checks pass |
| `--dry-run` | Simulate without executing |
| `--stream` | Headless streaming output (for CI) |
| `--diagnose-gh` | Run GitHub connectivity checks |
Running in CI
For fully headless execution in CI pipelines:
wiggum agent --stream --review-mode merge --labels P0
This processes all P0-labeled issues, streams output to stdout, and auto-merges completed PRs.
Tips for effective Ralph loops with Claude Code
Write specific specs. The better your spec, the better Claude Code’s output. “Add user authentication” is vague. “Add JWT-based authentication middleware with refresh token rotation, rate limiting at 100 requests per minute, and integration tests using the existing test fixtures in src/__tests__/” gives Claude Code exactly what it needs.
Use --worktree for isolation. When running a Ralph loop while you’re still working on the same repo, --worktree creates an isolated git worktree so the loop doesn’t interfere with your working directory:
wiggum run my-feature --worktree
Use --resume after interruptions. If a loop gets interrupted (crash, network issue, machine restart), don’t start over:
wiggum run my-feature --resume
This picks up from the last checkpoint instead of re-running completed phases.
Keep your context fresh. If your codebase has changed significantly since init, resync the context:
wiggum sync
This re-scans your tech stack and re-runs the AI analysis without repeating the full init flow.
Why not just use a bash script?
You might have seen bash scripts that wrap Claude Code in a while true loop and call it a “Ralph loop.” Those capture the spirit of autonomous execution but miss the structure that makes it effective.
The difference is phase isolation. A bash script runs Claude Code in a single undifferentiated pass — if something goes wrong, you restart from scratch. The Ralph loop separates planning from implementation from testing from verification. Each phase can succeed or fail independently, with its own retry logic and error handling.
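The contrast can be sketched in shell: instead of one undifferentiated while-true pass, each phase gets its own attempt budget and retry loop. A simplified, hypothetical illustration (the simulated one-off failure stands in for a failing E2E run):

```shell
# Hypothetical sketch of phase isolation: each phase retries independently.
retried=0
do_work() {
  # Simulated work: e2e_testing fails once then passes; others pass first try.
  if [ "$1" = "e2e_testing" ] && [ "$retried" -eq 0 ]; then
    retried=1
    return 1
  fi
  return 0
}

run_phase() {
  name=$1 max=$2 attempt=1
  until do_work "$name"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -gt "$max" ]; then
      echo "[$name] failed after $max attempts"
      return 1
    fi
  done
  echo "[$name] done in $attempt attempt(s)"
}

for phase in planning implementation e2e_testing verification pr_review; do
  run_phase "$phase" 5 || exit 1
done
echo "all phases complete"
```

A failure in one phase is retried within that phase's budget; only that phase restarts, not the whole run.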
Full comparison: Wiggum CLI vs Ralph Wiggum bash scripts
Quick reference
| Command | What it does |
|---|---|
| `wiggum init` | Scan codebase, select provider/model, generate `.ralph/` config |
| `wiggum new <name>` | AI interview → implementation-ready spec |
| `wiggum new <name> --issue #42` | Generate spec from GitHub issue context |
| `wiggum new <name> --auto --goals "..."` | Headless spec generation (no interview) |
| `wiggum run <name>` | Execute the Ralph loop with Claude Code |
| `wiggum run <name> --worktree` | Execute in isolated git worktree |
| `wiggum run <name> --resume` | Resume from last checkpoint |
| `wiggum run <name> --review-mode merge` | Auto-merge when complete |
| `wiggum monitor <name>` | Re-attach to a backgrounded loop |
| `wiggum agent` | Autonomous backlog-to-PR pipeline |
| `wiggum agent --stream` | Headless agent mode for CI |
| `wiggum sync` | Refresh codebase context |
| `wiggum config set cli claude` | Set Claude Code as default agent |
Getting started
npm install -g wiggum-cli
wiggum init            # scan your codebase, pick provider and model
wiggum new my-feature # generate a spec through AI interview
wiggum run my-feature # execute the Ralph loop with Claude Code
The CLI is free and open source. You bring your own Anthropic API key. See pricing for Pro plans with managed keys and a web dashboard, or check the GitHub repository for full documentation.
Founder of Wiggum CLI, an open-source AI agent for autonomous coding loops. Previously scaled a B2C SaaS to €1.5M ARR and 5,000+ paying subscribers.