
    subagent-orchestration

    by Rapa Canola

    Intelligently delegate tasks to Claude, Codex, or Gemini based on cost, model strengths, and rate limits.

    Updated May 2026
    Security scanned
    One-time purchase

    $15


    ⚡ Also available via Agensi MCP — your AI agent can load this skill on demand via MCP.

    Included in download

    • Route tasks to cheaper models to save on token spend
    • Cross-check security-critical code across three different LLM providers
    • Terminal automation included
    • Includes example output and usage patterns
    • Instant install

    See it in action

    [Claude]: Logic looks sound, but check the overflow on line 42.
    [Codex]: LGTM overall, though I'd suggest a try-catch block here.
    [Gemini]: Optimization tip: use a map instead of a nested loop for O(n) complexity.
    Analysis: 2/3 models suggest error handling improvements.

    About This Skill

    Smart Multi-Model Routing & Delegation

    Subagent Orchestration is a high-level routing skill designed for developers who use multiple AI models (Claude, GPT/Codex, and Gemini) and want to optimize for cost, performance, and context window limitations. Instead of manually switching browser tabs or CLI tools, this skill allows your primary agent to intelligently delegate sub-tasks to the best-fitting model for the job.

    How it works

    The skill acts as an intelligent traffic controller. It analyzes your request and routes it to specific vendor CLIs based on task strengths: Claude for precise code refactoring, Gemini for massive context ingestion (up to 1M+ tokens), or Codex for web-integrated research. It handles the syntax differences between each model's CLI, enabling seamless cross-model execution.
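    The routing decision described above can be sketched roughly as follows. This is an illustrative outline, not the skill's actual implementation: the task categories, token threshold, and default model are assumptions made for the example.

```python
# Hypothetical sketch of the traffic-controller routing decision.
# Thresholds, task categories, and the default model are illustrative
# assumptions, not the skill's real configuration.

LARGE_CONTEXT_LIMIT = 200_000  # approx. tokens before preferring a 1M+ window model

def route(task_type: str, prompt_tokens: int) -> str:
    """Pick a vendor CLI for a sub-task based on the strengths noted above."""
    if prompt_tokens > LARGE_CONTEXT_LIMIT:
        return "gemini"          # massive context ingestion
    if task_type == "refactor":
        return "claude"          # precise code refactoring
    if task_type == "research":
        return "codex"           # web-integrated research
    return "gemini"              # assumed default: cheaper lightweight tier

print(route("refactor", 3_000))   # → claude
print(route("summary", 500_000))  # → gemini
```

    In the real skill, the chosen model name would map to an invocation of that vendor's CLI rather than a returned string.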

    Why use this skill?

    • Cost Management: Automatically route lightweight formatting or summary tasks to cheaper models like Gemini Flash.
    • Rate Limit Resilience: If Claude is rate-limited, the skill can automatically failover to Gemini or Codex to keep your workflow moving.
    • Cross-Model Verification: Run the same security or logic check across all three models simultaneously to find "blind spots" through a side-by-side comparison table.
    • Context Optimization: Automatically detects when a prompt exceeds standard context limits and routes it to the large-window models.
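    The rate-limit failover behavior can be sketched like this. The runner functions here are stand-ins for the vendor CLI calls; a real run would shell out to each tool, and the exception type and ordering policy are assumptions for illustration.

```python
# Minimal failover sketch: try each model in order, moving on when one
# is rate-limited. Runners are simulated stand-ins for the vendor CLIs.

class RateLimited(Exception):
    pass

def run_with_failover(prompt, runners):
    """Try each (name, runner) pair in order; return the first success."""
    failures = []
    for name, runner in runners:
        try:
            return name, runner(prompt)
        except RateLimited as exc:
            failures.append((name, str(exc)))
    raise RuntimeError(f"all models rate-limited: {failures}")

# Simulated behavior: Claude is rate-limited, so Gemini picks up the task.
def claude(prompt):
    raise RateLimited("HTTP 429")

def gemini(prompt):
    return f"gemini says: {prompt}"

name, answer = run_with_failover("summarize this diff",
                                 [("claude", claude), ("gemini", gemini)])
print(name)  # → gemini
```
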

    Supported Integration

    The skill leverages your existing local environment, supporting official CLIs for Claude, Codex/ChatGPT, and Gemini. It outputs clean, prefixed text for single runs or structured Markdown tables for cross-model comparisons.
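    For the cross-model comparison output, per-model responses could be folded into a Markdown table along these lines. The column names and table shape are assumptions; the skill's actual table layout may differ.

```python
# Illustrative sketch of assembling the Markdown comparison table from
# per-model outputs. Column names are assumptions, not the skill's spec.

def comparison_table(results: dict) -> str:
    """Render {model: response} pairs as a two-column Markdown table."""
    lines = ["| Model | Response |", "| --- | --- |"]
    for model, response in results.items():
        lines.append(f"| {model} | {response} |")
    return "\n".join(lines)

table = comparison_table({
    "Claude": "Logic looks sound, but check the overflow on line 42.",
    "Gemini": "Use a map instead of a nested loop.",
})
print(table)
```
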


    Use Cases

    • Route tasks to cheaper models to save on token spend
    • Cross-check security-critical code across three different LLM providers
    • Automatically failover to a different model when hitting rate limits
    • Process massive files by routing to Gemini's 1M+ context window


    Security Scanned

    Passed automated security review

    Permissions

    Terminal / Shell

    Allowed Hosts

    0x67108864.github.io
    claude.com
    developers.openai.com
    geminicli.com

    File Scopes

    subagent-orchestration/**

