An open-source autonomous coding agent that plugs into any codebase. Scan your stack, generate specs through AI interviews, and ship via Ralph loops — from backlog to merged PR.
Demo of Wiggum CLI: running wiggum init to scan a codebase, wiggum new to generate a spec through an AI interview, and wiggum run to execute an autonomous Ralph loop that plans, implements, tests, verifies, and creates a pull request.
Wiggum CLI is an open-source command-line tool that plugs into any codebase and runs autonomous coding loops — also known as Ralph loops. It scans your tech stack, generates detailed feature specifications through AI-powered interviews, and executes complete plan-implement-test-review-merge cycles using Claude Code, Codex, or any CLI-based coding agent. Free and self-hosted — install with npm i -g wiggum-cli.
Wiggum reads your GitHub issues, generates specs, runs Ralph loops, reviews the output, and auto-merges — then moves to the next issue.
wiggum agent
Point Wiggum at your GitHub backlog and walk away. It prioritizes issues by label and dependency order, assesses feature state, generates specs, runs the full Ralph loop, reviews diffs against the spec, and auto-merges when all checks pass.
wiggum agent --headless
Scan your codebase, generate a spec through an AI interview, and execute the Ralph loop — one feature at a time.
/init
Point Wiggum at any project and it maps your tech stack, structure, and conventions. Zero config — it generates the context that makes every spec and loop reliable.
/new
Describe what you want in plain English. Wiggum uses your full codebase context to interview you, then generates a detailed spec any CLI-based coding agent can execute — consistent results, every time.
/run
Hand the spec to your coding agent and walk away. The Ralph loop technique runs autonomous iterations with progress checkpoints, delivering working code without constant intervention.
Ctrl+C to stop the loop
Built for real codebases. No wrappers, no lock-in, no config files.
Use /issue to browse your backlog in a navigable table picker. Select an issue and Wiggum takes you straight into the AI interview with full issue context pre-loaded. Or pass --issue #42 directly.

Everything you need to know about Wiggum CLI.
Wiggum is an open-source CLI tool that plugs into any codebase and runs autonomous coding loops. It scans your tech stack, generates detailed feature specifications through AI-powered interviews, and executes implementation loops using Claude Code, Codex, or any CLI-based coding agent.
Wiggum works in three steps: (1) /init scans your project to map frameworks, languages, and conventions — zero config required. (2) /new interviews you about what you want to build, using your codebase context to generate a detailed, implementation-ready spec. (3) /run hands the spec to your coding agent and runs autonomous iteration loops with progress checkpoints until the feature is complete.
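The three steps above map directly onto three subcommands (shown in the demo at the top of the page). A typical first session, run from your project root:

```shell
# Step 1: scan the project and map frameworks, languages, and conventions
# (zero config -- run once per codebase)
wiggum init

# Step 2: answer the AI interview; Wiggum uses the codebase context
# to write a detailed, implementation-ready spec
wiggum new

# Step 3: hand the spec to your coding agent and run autonomous
# iteration loops with progress checkpoints (Ctrl+C stops the loop)
wiggum run
```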
Wiggum works with any CLI-based coding agent. It has been tested with Claude Code (Anthropic), OpenAI Codex CLI, and other terminal-based AI assistants. The specs it generates are agent-agnostic — they work with any tool that can read a markdown file and execute code.
Yes. The CLI is open source and free forever. You bring your own API keys (e.g. Anthropic, OpenAI) and run everything locally. Pro plans ($19/mo+) add managed API keys, a web dashboard, push notifications, and priority support.
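Since you bring your own keys, setup is just exporting the variables your chosen agent reads. A sketch, assuming the underlying agents use their providers' standard environment variables (this is how Claude Code and Codex CLI conventionally pick up keys; Wiggum itself stores nothing):

```shell
# Bring-your-own-key setup: the underlying coding agent reads these.
# Variable names are the providers' standard ones; key values are placeholders.
export ANTHROPIC_API_KEY="sk-ant-..."   # for Claude Code
export OPENAI_API_KEY="sk-..."          # for Codex CLI
```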
The Ralph loop is an autonomous coding methodology pioneered by Geoffrey Huntley. It breaks feature development into phases — planning, implementation, E2E testing, verification, and PR review — running each as an autonomous agent loop with checkpoints. Wiggum implements this technique as a CLI tool.
A Ralph loop in Claude Code is an autonomous coding cycle where Claude Code executes structured phases — plan, implement, test, verify, and PR — without manual intervention. Wiggum CLI automates this by generating specs and managing the loop lifecycle.
Yes. "Ralph loop," "Ralph Wiggum loop," and "Ralph Wiggum technique" all refer to the same autonomous coding methodology pioneered by Geoffrey Huntley. The technique breaks feature development into structured agent phases with checkpoints.
Yes. The Ralph loop technique is agent-agnostic. Wiggum CLI generates specs in markdown that work with Claude Code, OpenAI Codex, Gemini CLI, OpenCode, or any CLI-based coding agent.
Yes. Wiggum's /init command auto-detects your tech stack regardless of language or framework. It has been used with TypeScript, Python, Go, Rust, React, Next.js, Astro, Django, Rails, and many others. The generated specs adapt to your project's patterns and conventions.
Agent mode (wiggum agent) reads your GitHub backlog, picks issues by priority label (P0 > P1 > P2) and dependency order, generates implementation specs from issue context, runs the full Ralph loop per issue, reviews diffs against the spec, and auto-merges the PR when all checks pass — then moves to the next issue. It can also run headless in CI pipelines.
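The two ways to launch agent mode, as described above (the --headless flag appears earlier on this page; the CI comment is an illustrative use, not a prescribed setup):

```shell
# Interactive: work through the backlog issue by issue,
# in priority-label and dependency order
wiggum agent

# Headless: no prompts, suitable for a CI pipeline job
wiggum agent --headless
```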
Run /issue in Wiggum to browse your GitHub issues in a navigable table picker. Select an issue and Wiggum takes you straight into the AI interview with full issue context (title, body, labels) pre-loaded. You can also pass --issue #42 directly to wiggum new. In agent mode, issues are read from your backlog automatically.
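Both entry points from the answer above, side by side (the issue number is illustrative; launching the interactive session with a bare `wiggum` is an assumption):

```shell
# Inside an interactive Wiggum session, open the issue picker:
wiggum
# ...then type /issue at the prompt and select an issue

# Or skip the picker and preload a specific issue into the interview:
wiggum new --issue "#42"   # quoted so the shell doesn't treat '#' as a comment
```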
Copilot and Cursor assist with in-editor code completion. Wiggum operates at the feature level — it generates full specifications, then runs autonomous multi-step coding loops that implement, test, and create PRs without manual intervention. It complements editor tools rather than replacing them.
Absolutely. Wiggum is designed to plug into existing codebases. The /init command scans your project structure, detects conventions, and generates context that ensures specs and loops respect your existing patterns. No migration or restructuring needed.
The full public roadmap is available at wiggum.app/roadmap. It shows what's currently being built, what's coming next, and milestone progress toward each release. All roadmap items link back to their GitHub issues for full transparency.
Web dashboard, push notifications, and managed API keys are planned but not yet shipped. They are tracked on the public roadmap. The free CLI — including stack scanning, spec generation, and autonomous loops — is fully functional today. You can join the waitlist on the pricing page to be notified when Pro features launch.