● Community · Kong · ⚡ Instant

volcano-agent-sdk

by Kong · Kong/volcano-agent-sdk

Build TypeScript AI agents that chain LLM reasoning with MCP tools — 100+ models, parallel execution, built-in OTel tracing.

Volcano Agent SDK by Kong is a TypeScript SDK (not an MCP server) for building multi-provider AI agents that consume MCP tools. It supports OpenAI, Anthropic, Mistral, Bedrock, Vertex, and Azure; auto-selects tools from configured MCP endpoints; streams tokens; retries on failure; and ships OpenTelemetry traces.

Why use it

Key features

Live Demo

What it looks like in practice

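As a rough illustration of the loop the SDK automates (pick a tool from configured MCPs, invoke it, return the result), here is a toy, self-contained sketch. Every name below is invented for illustration; this is not the real volcano-agent-sdk API, which also streams tokens and retries failures at the invocation step.

```typescript
// Illustrative only: a toy agent loop showing the shape of what the SDK
// automates (tool auto-selection + invocation). All names are hypothetical.
type Tool = { name: string; description: string; call: (input: string) => string };

// Stand-in "model": picks the tool whose name appears in the task text.
function pickTool(task: string, tools: Tool[]): Tool | undefined {
  return tools.find((t) => task.toLowerCase().includes(t.name));
}

function runAgent(task: string, tools: Tool[]): string {
  const tool = pickTool(task, tools);
  if (!tool) return "no matching tool";
  return tool.call(task); // the real SDK streams and retries here
}

const tools: Tool[] = [
  { name: "search", description: "web search", call: (q) => `results for: ${q}` },
];

console.log(runAgent("search for MCP news", tools));
// prints "results for: search for MCP news"
```

The real SDK replaces the keyword match with an LLM's tool-choice step and the local functions with MCP endpoint calls.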

Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "volcano-agent-sdk": {
      "command": "npx",
      "args": [
        "-y",
        "volcano-agent-sdk"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "volcano-agent-sdk": {
      "command": "npx",
      "args": [
        "-y",
        "volcano-agent-sdk"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "volcano-agent-sdk": {
      "command": "npx",
      "args": [
        "-y",
        "volcano-agent-sdk"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "volcano-agent-sdk": {
      "command": "npx",
      "args": [
        "-y",
        "volcano-agent-sdk"
      ]
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "volcano-agent-sdk",
      "command": "npx",
      "args": [
        "-y",
        "volcano-agent-sdk"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.
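The array form can be derived mechanically from the map form the other clients use. A small hypothetical helper showing the transformation (the function name is invented; only the two config shapes come from the examples above):

```typescript
// Convert map-style mcpServers config (Claude Desktop / Cursor / Windsurf)
// into Continue's array-of-objects form, carrying the key over as "name".
type ServerEntry = { command: string; args: string[] };

function toContinueFormat(
  mcpServers: Record<string, ServerEntry>
): Array<{ name: string } & ServerEntry> {
  return Object.entries(mcpServers).map(([name, entry]) => ({ name, ...entry }));
}

const mapForm: Record<string, ServerEntry> = {
  "volcano-agent-sdk": { command: "npx", args: ["-y", "volcano-agent-sdk"] },
};

console.log(JSON.stringify(toContinueFormat(mapForm), null, 2));
```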

~/.config/zed/settings.json
{
  "context_servers": {
    "volcano-agent-sdk": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "volcano-agent-sdk"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add volcano-agent-sdk -- npx -y volcano-agent-sdk

One-liner. Verify with claude mcp list. Remove with claude mcp remove.

Use Cases

Real-world ways to use volcano-agent-sdk

Build a coding agent that uses GitHub + Sentry MCPs

👤 TS devs building internal automation · ⏱ ~60 min · advanced

When to use: You want a programmable agent, not a chat session.

Prerequisites
  • Node 20+ — Standard
  • MCP endpoints for the tools you want (github, sentry, etc.) — Either existing public or your own deploys
Flow
  1. Install + scaffold
    npm install @volcano.dev/agent and write a minimal agent that connects to github + sentry MCPs with Anthropic as the model.
    → Running TS project
  2. Write the task
    The agent's task: every 15 min, find new Sentry errors, correlate to GitHub commits via MCP, draft revert PRs for the obvious ones.
    → Agent executes task autonomously
  3. Instrument
    Enable OTel traces and pipe to Honeycomb/Grafana.
    → Spans visible

Outcome: A production-ready automation agent with observability.
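Step 2's triage loop can be sketched as follows; `triage`, `runOnce`, and the heuristic are invented stand-ins for what would really be an agent run against the Sentry and GitHub MCPs:

```typescript
// Toy sketch of the 15-minute triage cycle. The stub heuristic ("obvious"
// means the error already resolves to a commit) replaces a real agent run.
type SentryError = { id: string; commit: string };

function triage(errors: SentryError[]): string[] {
  return errors
    .filter((e) => e.commit !== "")
    .map((e) => `Revert ${e.commit} (fixes ${e.id})`);
}

// One cycle; in production, schedule with e.g. setInterval(fn, 15 * 60 * 1000).
function runOnce(fetchErrors: () => SentryError[]): string[] {
  return triage(fetchErrors());
}

console.log(runOnce(() => [{ id: "ERR-1", commit: "abc123" }]));
// prints ["Revert abc123 (fixes ERR-1)"] (as "[ 'Revert abc123 (fixes ERR-1)' ]")
```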

Pitfalls
  • Agent hallucinates tool calls that don't exist — Restrict the MCP set passed to the agent; fewer, well-documented tools > more
  • Retries amplify transient upstream issues — Tune retry policy and add exponential backoff
Combine with: github · sentry
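The second pitfall's mitigation, retry with exponential backoff, looks roughly like this generic helper (a sketch, not an SDK API; the real SDK's retry policy is configured, not hand-rolled):

```typescript
// Retry an async operation with exponentially growing delays.
// Delay doubles each attempt, capped at maxMs; the last error is rethrown.
async function withBackoff<T>(
  fn: () => Promise<T>,
  attempts = 4,
  baseMs = 200,
  maxMs = 5_000
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      const delay = Math.min(baseMs * 2 ** i, maxMs);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastErr;
}
```

In production, add random jitter to the delay so many agents retrying the same failing upstream do not synchronize.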

Compose a multi-agent crew for research tasks

👤 Developers exploring agent patterns · ⏱ ~45 min · advanced

When to use: A task benefits from specialization (researcher + writer + reviewer).

Flow
  1. Define agents
    Create agents: Researcher (web search MCP), Writer (drafts), Reviewer (fact-checks Researcher's sources).
    → Three typed agent instances
  2. Delegate
    Run the crew: topic 'state of MCP in 2026'. Have Researcher gather, Writer draft, Reviewer verify claims.
    → Coordinated output

Outcome: Higher-quality output than a single-pass agent on complex tasks.

Combine with: omnisearch
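The crew above reduces to a sequential hand-off. A toy sketch with stub stages in place of real LLM-backed agents (stage names come from the flow; everything else is invented):

```typescript
// Each stage takes the previous stage's output as input.
// Real stages would each wrap an LLM call plus MCP tools
// (e.g. a web-search MCP for the researcher).
type Stage = (input: string) => string;

const researcher: Stage = (topic) => `notes on ${topic}`;
const writer: Stage = (notes) => `draft based on ${notes}`;
const reviewer: Stage = (draft) => `${draft} [verified]`;

// Compose the crew as a simple sequential pipeline.
function runCrew(topic: string, stages: Stage[]): string {
  return stages.reduce((acc, stage) => stage(acc), topic);
}

console.log(runCrew("state of MCP in 2026", [researcher, writer, reviewer]));
// prints "draft based on notes on state of MCP in 2026 [verified]"
```

A real crew would also loop (Reviewer sending the draft back to Writer), which this linear reduce deliberately omits.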

Build a streaming chatbot with tool access

👤 Product devs integrating AI into a TS app · ⏱ ~40 min · advanced

When to use: User-facing feature that needs real-time streaming + MCP tool calls.

Flow
  1. Wire streaming
    Build a chat endpoint that streams tokens to the client and calls tools mid-stream as the model requests.
    → Working streaming endpoint
  2. Explainability
    After each response, expose agent.summary() so the UI can show which tools were used.
    → Tool trail visible

Outcome: A production chat UI with transparent tool use.
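Mid-stream tool calling from step 1 can be modeled as a generator that swaps tool-call markers for tool output as tokens flow by. The "@tool:" marker format and all names here are invented for illustration; a real model emits structured tool-call events, not inline text markers:

```typescript
// Yields plain tokens as-is; on a tool-call marker, runs the tool and
// yields its result inline, so the client sees one continuous stream.
type ToolMap = Record<string, (arg: string) => string>;

async function* streamWithTools(
  tokens: string[],
  tools: ToolMap
): AsyncGenerator<string> {
  for (const token of tokens) {
    if (token.startsWith("@tool:")) {
      const name = token.slice("@tool:".length);
      yield tools[name]?.(name) ?? `[unknown tool ${name}]`;
    } else {
      yield token;
    }
  }
}

(async () => {
  const tools: ToolMap = { time: () => "12:00" };
  const out: string[] = [];
  for await (const chunk of streamWithTools(["It", "is", "@tool:time", "now"], tools)) {
    out.push(chunk);
  }
  console.log(out.join(" ")); // prints "It is 12:00 now"
})();
```

In a real endpoint each yielded chunk would be flushed to the client (e.g. as a server-sent event) rather than collected.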

Combinations

Pair with other MCPs for 10x leverage

volcano-agent-sdk + github + sentry

Autonomous triage agent across incident + code

Build an agent that, given a Sentry alert, fetches the stack, finds the offending commit via GitHub, and opens a PR with a minimal fix.

volcano-agent-sdk + vurb-ts

Vurb builds the server side; Volcano builds the agent side

Expose my business data via a Vurb MCP; build a Volcano agent that uses it to answer user questions.

Tools

What this MCP exposes

Tool: (SDK) You write TS; the SDK picks tools from configured MCPs automatically
Inputs / When to call: N/A — Volcano Agent SDK is a library you build with, not an MCP server you call
Cost: n/a — depends on model + tools

Cost & Limits

What this costs to run

API quota
LLM provider limits apply; MCP upstream limits apply
Tokens per call
Depends on model + conversation length
Monetary
SDK free; LLM usage billed by provider
Tip
Use cheaper models (Haiku/GPT-4o-mini) for routing and reserve expensive models for reasoning; Volcano supports per-step model choice.
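The tip above can be sketched as a per-step model router. The model names, the step shape, and the token threshold below are illustrative, not SDK configuration:

```typescript
// Route cheap steps (tool routing, short prompts) to a small model and
// reserve the expensive model for long-context reasoning.
type Step = { kind: "route" | "reason"; promptTokens: number };

function pickModel(step: Step): string {
  if (step.kind === "route" || step.promptTokens < 500) {
    return "haiku"; // cheap and fast for routing / short prompts
  }
  return "sonnet"; // reserve for heavy reasoning
}

console.log(pickModel({ kind: "route", promptTokens: 2000 }));  // prints "haiku"
console.log(pickModel({ kind: "reason", promptTokens: 2000 })); // prints "sonnet"
```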

Security

Permissions, secrets, blast radius

Credential storage: LLM provider keys + MCP credentials in env; SDK injects them
Data egress: LLM provider APIs + configured MCP endpoints
Never grant: LLM keys to untrusted code paths in the same process

Troubleshooting

Common errors and fixes

MCP connection fails on startup

Verify the MCP endpoint URL and auth. SDK logs full error when --debug is set.

Verify: curl the MCP endpoint directly

Model refuses to use available tools

Tool descriptions may be poorly phrased; rewrite for clarity or force via agent config.

Verify: inspect tools via agent.listTools()

High token cost on simple tasks

Check that system prompt isn't dragging MCP tool defs into every call; use lazy tool-load mode.

Verify: agent.summary() shows token breakdown

Alternatives

volcano-agent-sdk vs others

  • LangChain / LangGraph (TS): when you want the largest ecosystem of integrations. Tradeoff: heavier abstraction; slower cold path.
  • Vercel AI SDK: when you want tight Next.js integration. Tradeoff: less focus on multi-agent patterns.
  • Anthropic SDK raw: when you only need Anthropic and minimal abstraction. Tradeoff: you reimplement tool routing, retries, and multi-provider support.

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues

🔍 Browse all 400+ MCP servers and Skills