
mcp-use

by mcp-use · mcp-use/mcp-use

Python library that wires many MCP servers into one LangChain agent — or runs them headless without an LLM.

mcp-use is a client-side Python framework. Point it at any number of MCP server configs (stdio or HTTP), wrap them in an MCPAgent with any LangChain-compatible LLM, and you have a working multi-server agent. It also supports direct MCPClient tool calls without an LLM — useful for scripted automations.
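A minimal end-to-end sketch of that flow, based on the import path and constructor names documented in the project README (`MCPClient.from_dict`, `MCPAgent`, `agent.run`); the server packages, model choice, and the `RUN_MCP_DEMO` opt-in guard are placeholders, and names may differ across versions:

```python
# Minimal multi-server agent sketch. Server packages, the model name, and
# the RUN_MCP_DEMO guard are placeholders; the mcp-use calls follow the
# README and may differ across versions.
import asyncio
import os

CONFIG = {
    "mcpServers": {
        # Two stdio servers: one launched via npx, one via uvx.
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
        },
        "fetch": {"command": "uvx", "args": ["mcp-server-fetch"]},
    }
}

async def main() -> str:
    # Deferred imports so the config above is importable without agent deps.
    from langchain_openai import ChatOpenAI
    from mcp_use import MCPAgent, MCPClient

    client = MCPClient.from_dict(CONFIG)
    agent = MCPAgent(llm=ChatOpenAI(model="gpt-4o"), client=client, max_steps=15)
    try:
        return await agent.run("List the files in /tmp and summarize them.")
    finally:
        await client.close_all_sessions()

if __name__ == "__main__" and os.environ.get("RUN_MCP_DEMO"):
    print(asyncio.run(main()))
```

The same CONFIG dict shape works for the headless MCPClient path described below.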


Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "mcp-use": {
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "mcp-use": {
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "mcp-use": {
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "mcp-use": {
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "mcp-use",
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "mcp-use": {
      "command": {
        "path": "uvx",
        "args": [
          "mcp-use"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add mcp-use -- uvx mcp-use

One-liner. Verify with claude mcp list. Remove with claude mcp remove mcp-use.

Use Cases

Real-world ways to use mcp-use

Build a custom agent that uses playwright + filesystem + postgres

👤 Python devs building vertical agents ⏱ ~45 min intermediate

When to use: You need a repeatable automation (not Claude Desktop) that chains browser + files + DB.

Prerequisites
  • Python 3.10+, uv or pip — Standard setup
  • An LLM API key (OpenAI / Anthropic) — Set as env var your LangChain model expects
Flow
  1. Define the server configs
    Write an mcp-use config that connects to playwright (stdio via npx), postgres (stdio via uvx), and filesystem (local path scoped).
    → JSON/dict config matching the schema
  2. Wire the agent
    Create an MCPAgent using ChatAnthropic (claude-sonnet-4) and the config above. Max iterations = 15.
    → Agent instance ready to .run()
  3. Run a task
    Run: 'Crawl docs.example.com, save each page to ./knowledge/, then index titles into the postgres docs table.' Observe tool calls in logs.
    → Task completes, data lands where expected

Outcome: A scriptable agent you can schedule, deploy, or embed — not tied to a desktop client.
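Concretely, the step-1 config might look like the dict below — a sketch in which the package names and connection string are placeholders for whichever playwright/filesystem/postgres servers you actually run:

```python
# Hypothetical three-server config for this use case; swap the package
# names and the postgres DSN for the servers you actually use.
SERVER_CONFIG = {
    "mcpServers": {
        "playwright": {
            "command": "npx",
            "args": ["-y", "@playwright/mcp@latest"],
        },
        "filesystem": {
            "command": "npx",
            # Scope the server to the one directory the agent may touch.
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "./knowledge"],
        },
        "postgres": {
            "command": "uvx",
            "args": ["mcp-server-postgres", "postgresql://localhost/docs"],
        },
    }
}
```

Pass this dict to `MCPClient.from_dict` and hand the client to `MCPAgent` as in step 2.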

Pitfalls
  • Agent loops between servers, burning tokens — Set a strict max_steps and use an LLM that follows instructions well; GPT-4o-mini often loops on complex chains, so use a stronger model
  • stdio servers zombied after crash — Always use the async context manager pattern — it handles cleanup; don't manage the process yourself
Combine with: fastmcp · mcp-agent

Call MCP tools from Python without an LLM

👤 Engineers automating ops tasks ⏱ ~20 min intermediate

When to use: You want to invoke an MCP tool as part of a larger Python pipeline, deterministically.

Flow
  1. Connect the client directly
    Use MCPClient to connect to my filesystem MCP. List available tools.
    → Tool names + schemas printed
  2. Call a tool with typed args
    Call write_file with path='./out.txt' and content='hello'. Confirm the return value.
    → File written, no LLM involved
  3. Chain into your business logic
    Wrap this in a function save_report(df) that calls the MCP tool — integrate into my existing Python ETL.
    → Reusable function

Outcome: MCP-as-library: same servers used by Claude Desktop also callable from plain Python.
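A sketch of that pipeline function. The client/session method names (`create_all_sessions`, `get_session`, `call_tool`, `close_all_sessions`) are assumptions taken from the mcp-use README and should be checked against your installed version; `write_file` assumes the standard filesystem server:

```python
# LLM-free tool invocation. Client/session method names are assumptions
# from the mcp-use README — verify against your installed version.
CONFIG = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
        }
    }
}

def raise_on_error(result):
    # MCP failures come back as result objects flagged isError, not as
    # raised exceptions — check explicitly instead of assuming success.
    if getattr(result, "isError", False):
        raise RuntimeError(f"tool call failed: {result.content}")
    return result

async def save_report(path: str, content: str):
    from mcp_use import MCPClient  # deferred: keeps the helpers importable

    client = MCPClient.from_dict(CONFIG)
    try:
        await client.create_all_sessions()
        session = client.get_session("filesystem")
        result = await session.call_tool(
            "write_file", {"path": path, "content": content}
        )
        return raise_on_error(result)
    finally:
        await client.close_all_sessions()
```

The explicit `raise_on_error` check is what turns silent `isError: true` results into normal Python exceptions your pipeline can handle.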

Pitfalls
  • Errors don't bubble up naturally — MCP errors come as result objects with isError: true — Check result.isError after every call; don't assume success

Build a router agent that picks the right MCP for each request

👤 Teams shipping agent products ⏱ ~60 min advanced

When to use: Users send mixed requests (code, data, web) — one monolithic prompt with 50 tools degrades; you want routing.

Flow
  1. Define server groups
    Split my MCP servers into 3 groups: 'code' (git, github), 'data' (postgres, bigquery), 'web' (firecrawl, playwright).
    → 3 separate agent configs
  2. Add a router layer
    Write a classifier prompt that picks one group based on the user's intent. Use it to instantiate the matching MCPAgent on demand.
    → Classifier returns one of {code, data, web}
  3. Test with mixed traffic
    Run 10 varied requests through the router. Log which group handled each and whether the answer was correct.
    → Accuracy table + latency stats

Outcome: A modular agent system where each request only sees the tools relevant to it — better accuracy, lower token cost.
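The router in step 2 calls an LLM classifier; as a testable stand-in, here is a toy keyword-scoring router with the same contract (request in, one group name out) — the keyword lists are invented:

```python
# Toy router with the same interface as the LLM classifier in step 2:
# take a request, return one of {code, data, web}. Keywords are invented.
GROUPS = {
    "code": ("git", "commit", "pull request", "branch", "repo"),
    "data": ("sql", "query", "table", "postgres", "bigquery"),
    "web": ("scrape", "crawl", "browser", "url", "page"),
}

def route(request: str) -> str:
    text = request.lower()
    scores = {g: sum(kw in text for kw in kws) for g, kws in GROUPS.items()}
    best = max(scores, key=scores.get)
    # No keyword hit: fall back to a default group (or a 'cross' group,
    # per the pitfall below).
    return best if scores[best] else "web"
```

Instantiate (or fetch a cached) MCPAgent for the returned group and dispatch the request to it.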

Pitfalls
  • Edge cases where the request needs two groups (e.g., 'scrape + save to DB') — Define a fourth 'cross' group or fall back to the Orchestrator pattern via mcp-agent
Combine with: mcp-agent

Combinations

Pair with other MCPs for 10x leverage

mcp-use + mcp-agent

Use mcp-use for connecting servers, mcp-agent for workflow patterns (orchestrator/evaluator)

Build an evaluator-optimizer loop where a writer agent uses mcp-use to access filesystem + git, and a critic agent reviews output using the same servers.
mcp-use + fastmcp

Write a server with FastMCP, then script against it with mcp-use — end-to-end Python agent stack

Server: expose our pricing API via FastMCP. Client: use mcp-use to call it in a pricing-simulation script.
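The server half of that pairing can be sketched with FastMCP. The pricing rule below is an invented placeholder, and the registration call assumes FastMCP's standard tool-decorator API:

```python
# Sketch of the FastMCP server side; the pricing formula is a placeholder.
def quote(units: int, unit_price: float = 9.99) -> float:
    """Total price with a 10% bulk discount above 100 units."""
    total = units * unit_price
    return round(total * 0.9 if units > 100 else total, 2)

def build_server():
    # Deferred import so the pricing logic is testable without fastmcp.
    from fastmcp import FastMCP

    mcp = FastMCP("pricing")
    mcp.tool(quote)  # register the plain function as an MCP tool
    return mcp
```

Run `build_server().run()` to serve over stdio, then point an mcp-use config's `command`/`args` at that script for the client half.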

Tools

What this MCP exposes

Tool | Inputs | When to call | Cost
--- | --- | --- | ---
MCPClient(config) | server config dict/path | Entry point for any mcp-use script | free
MCPAgent(llm, client, max_steps) | LangChain chat model + MCPClient | When you want LLM-driven tool selection | LLM calls only
client.list_tools() | server_name? | Introspect what's available before calling | free
client.call_tool(name, args) | tool_name, dict | Direct deterministic invocation — no LLM | depends on tool
MCPServer decorator API | @server.tool() on functions | Less common; FastMCP is usually cleaner for server-building | free

Cost & Limits

What this costs to run

API quota
None from mcp-use itself; depends on LLM and downstream MCPs
Tokens per call
LLM-driven calls burn tokens — the usual LangChain agent cost model
Monetary
Library is free, LLM usage is not
Tip
For deterministic flows, use client.call_tool directly — skip the LLM. Reserve MCPAgent for genuinely ambiguous tasks.

Security

Permissions, secrets, blast radius

Credential storage: Per underlying MCP — mcp-use doesn't add a layer
Data egress: LLM provider + every connected MCP

Troubleshooting

Common errors and fixes

ConnectionError when starting stdio server

The command in your config isn't on PATH or the package isn't installed. Test manually: run the same npx -y ... in a terminal first.

Verify: which npx && npx -y @modelcontextprotocol/server-filesystem --help
Agent calls tools correctly but answer is wrong

Usually an LLM issue — try a stronger model. GPT-4o-mini and open-source 7B models often misinterpret tool results.

Event loop already running error

You're calling a sync API from within an async context. Use await and the async client methods throughout.
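A minimal sketch of the pattern (the `list_tools` call in the comment is illustrative):

```python
# Keep everything async end-to-end and start exactly one event loop at the
# top level; calling asyncio.run() from inside an already-running loop
# raises "RuntimeError: ... event loop is already running".
import asyncio

async def fetch_tools():
    # In real code: await the async client methods here, e.g.
    #   tools = await client.list_tools()
    return ["write_file"]

tools = asyncio.run(fetch_tools())  # fine at top level / in a script only
```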

Tools from server A shadow names in server B

Prefix tool names per server in your config, or rely on the library's built-in namespace handling (set namespace=True).

Alternatives

mcp-use vs others

Alternative | When to use it instead | Tradeoff
--- | --- | ---
mcp-agent | You want workflow patterns (orchestrator, router, evaluator) baked in | More opinionated; less flexible if you want raw LangChain
Official Python MCP SDK | You want the lowest-level client — no LangChain, no abstractions | More plumbing code
LangGraph + MCP | You need stateful multi-turn graphs with checkpoints | Steeper learning curve; overkill for simple agents

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues

🔍 Browse all 400+ MCP servers and Skills