Orca - IDE for coding agents
Stably's next-gen IDE that's built around running coding agents in parallel. First-class support for Claude Code, Codex, Cursor agent, OpenCode, Ghostty, and worktree-based orchestration.
Most "AI IDEs" are an editor with an LLM tab bolted on the side. Orca is the inverse: an IDE designed from the ground up around running coding agents in parallel, with the editor functions arranged to serve that workflow. Stably ships it for macOS, Windows, and Linux, and the install model is "bring your own subscription" - no Orca-side login, no Orca-side key.
The thing it gets right that side-tab IDEs don't: every feature is a worktree. Spinning up a new agent task creates a new worktree automatically; switching between tasks is switching worktrees. No stashing, no branch juggling, no "who edited what."
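The mechanism underneath is plain `git worktree`: one checkout per task, each on its own branch. A minimal sketch of the manual workflow Orca automates (repo path and branch names here are illustrative, not Orca's conventions):

```shell
# Throwaway repo just to demonstrate the pattern
mkdir -p /tmp/worktree-demo/repo && cd /tmp/worktree-demo/repo
git init -q
git -c user.email=demo@example.com -c user.name=demo commit --allow-empty -qm "init"

# One checkout per task: each agent works in its own directory on its own
# branch, so parallel edits never collide in a shared working tree.
git worktree add ../task-login-fix -b task-login-fix
git worktree add ../task-dark-mode -b task-dark-mode

# "Switching tasks" is just switching directories - no stashing,
# no branch juggling in the main checkout.
git worktree list
```

Orca wraps this in the UI (create task = `worktree add`, switch task = change directory), which is why five parallel agents never step on each other's edits.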
Supported agents
Orca supports any CLI agent. The first-class list (with packaged integrations) is unusually long:
Claude Code, Codex, Gemini, Pi, Hermes Agent, OpenCode, Goose, Amp, Auggie, Charm, Cline, Codebuff, Continue, Cursor, Droid (Factory), GitHub Copilot, Kilocode, Kimi, Kiro, Mistral Vibe, Qwen Code, Rovo Dev. If your agent isn't on the list and exposes a CLI, you can still wire it up.
Install
There's no npm or cargo install for the IDE itself - it's a desktop app:
- Download from Orca.dev
- Or grab a binary from the GitHub Releases page
The Orca CLI - which lets agents drive the IDE itself (add projects, spin up worktrees, update progress comments) - installs via the skills system:
npx skills add https://github.com/stablyai/orca --skill orca-cli
The CLI also ships bundled with the Orca IDE and can be set up from Settings.
The features that change how you work
- Multi-agent terminals - run several agents side by side in tabs and panes; a status indicator shows which are active.
- Built-in source control - review AI-generated diffs, make quick edits, commit, all without leaving Orca.
- GitHub integration - PRs, issues, and Actions checks linked to each worktree automatically.
- SSH support - connect to remote machines and run agents on them directly from the IDE.
- Notifications - mark threads unread to come back to later.
The piece worth highlighting: Annotate AI Diff. You can comment directly on lines of an AI-generated diff and send the annotated review back to the agent for revision. No copying line numbers, no context-switching to a chat window. The review loop stays inside the diff viewer.
Hot-swap Codex accounts
This one is for the people running multiple Codex subscriptions to chase the best token deal. Orca lets you switch accounts in one click - no re-login, no editing config files. Pick an account, keep building. If that workflow describes you, this is the kind of feature that pays back the install cost in a single afternoon.
Per-worktree browser and Design Mode
Each worktree gets its own embedded browser. Preview your app as you build, then flip into Design Mode and click any UI element - it lands in the chat as context for the agent. The agent gets a stable selector or component reference; you skip the screenshot-and-explain cycle.
This is the right primitive for frontend work specifically. Most "AI in the browser" tools are bookmarklets or extensions; building it into the worktree means it's scoped correctly when you have five tasks open at once.
When to reach for it
- You've outgrown a single agent inside a single editor and the parallel-worktree dance has become its own task.
- You manage multiple accounts (Codex especially) and the hot-swap is genuinely useful.
- You do a lot of frontend work where the click-an-element-into-chat pattern saves real time.
When not to
- You're already deeply invested in another IDE (Neovim, Emacs, JetBrains) and won't migrate. Orca is the IDE; it's not a plugin.
- You want a fully open-source stack. The IDE is closed-source; the CLI piece is on GitHub.
- Headless / server workflows. Orca is a desktop application - the right tool for that job is something like Agent of Empires running over SSH.
What's not in the README
There's no documented self-hosted or air-gapped mode. The IDE is a download; the agent CLIs you bring are the ones that matter for data flow. If you have an enterprise setup that needs a gateway in front of all model traffic, that's a separate problem (ThinkWatch is a clean fit) - Orca won't enforce it for you, but it also won't fight it, since "bring your own CLI" means the CLI's config is the only place provider URLs live.
Featured in
Claude Code tools, plugins, and integrations
The best tools, MCP servers, and harnesses for getting more out of Claude Code - orchestration, observability, telemetry, and remote control.
Multi-agent frameworks and orchestration
Frameworks, harnesses, and DSLs for coordinating multiple AI agents across handoffs, parallel waves, and tool use.
Tools for OpenAI Codex CLI
The Codex-aware slice of the directory: orchestration, observability, sandboxes, and bridges built specifically for the OpenAI Codex runtime.
Related entries
wanman - worktree-isolated multi-agent runtime for Claude Code and Codex
Multi-agent runtime that spawns each Claude Code or Codex agent in its own git worktree and home directory. JSON-RPC subprocess control, task pooling, artifact storage. Solves the share-a-directory failure mode that breaks most multi-agent harnesses.
agent-hub - one chat surface for every local and remote agent
Open-source hub that connects to Claude Code, Codex, Hermes, OpenClaw, and other agent runtimes - local or on remote machines - through a single chat UI. Less workflow-tied than Conductor.
codex-mcp-server - call Codex from Claude Code
MCP wrapper around the OpenAI Codex CLI that lets Claude Code (or Cursor) hand specific tasks to Codex as a sub-agent over MCP.
maestro-orchestrate - multi-agent orchestration platform
Orchestrates Gemini CLI, Claude Code, Codex, and Qwen Code with 39 specialist subagents, parallel execution, and built-in review/debug/security/SEO/accessibility passes.