● Community mcp-use ⚡ Instant

mcp-use

by mcp-use · mcp-use/mcp-use

Python library that wires many MCP servers into one LangChain agent — or runs them headless without an LLM.

mcp-use is a client-side Python framework. Point it at N MCP server configs (stdio or HTTP), wrap them in an MCPAgent with any LangChain-compatible LLM, and you have a working multi-server agent. Also supports direct MCPClient tool calls without an LLM — useful for scripted automations.
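The wiring described above can be sketched in a few lines. This assumes `mcp-use` and `langchain-anthropic` are installed and `ANTHROPIC_API_KEY` is set; the filesystem server and model id are illustrative, while `MCPClient.from_dict`, `MCPAgent(llm, client, max_steps)`, and `close_all_sessions` follow the project's documented surface:

```python
# Minimal mcp-use wiring: one stdio MCP server plus one LangChain LLM.
# The config schema mirrors the "mcpServers" format used by Claude Desktop.
import asyncio
import os

CONFIG = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "./workspace"],
        }
    }
}

async def main() -> None:
    # Local imports so the config above stays importable without these deps.
    from langchain_anthropic import ChatAnthropic
    from mcp_use import MCPAgent, MCPClient

    client = MCPClient.from_dict(CONFIG)
    llm = ChatAnthropic(model="claude-sonnet-4-20250514")  # illustrative model id
    agent = MCPAgent(llm=llm, client=client, max_steps=15)
    try:
        print(await agent.run("List the files in ./workspace and summarize them."))
    finally:
        await client.close_all_sessions()  # avoid zombied stdio subprocesses

if __name__ == "__main__" and os.environ.get("ANTHROPIC_API_KEY"):
    asyncio.run(main())
```

The `finally` block matters: without an explicit close (or the async context manager), crashed stdio servers can linger as zombie processes.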

Why use it

Key features

Live demo

What it looks like in practice

Installation

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "mcp-use": {
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "mcp-use": {
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project config takes precedence over the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "mcp-use": {
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "mcp-use": {
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "mcp-use",
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "mcp-use": {
      "command": {
        "path": "uvx",
        "args": [
          "mcp-use"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads automatically.

claude mcp add mcp-use -- uvx mcp-use

One-line command. Verify: claude mcp list. Remove: claude mcp remove mcp-use.

Use cases

Real-world scenarios: mcp-use

Build a custom agent that uses playwright + filesystem + postgres

👤 Python devs building vertical agents · ⏱ ~45 min · intermediate

When to use: You need a repeatable automation (not Claude Desktop) that chains browser + files + DB.

Prerequisites
  • Python 3.10+, uv or pip — Standard setup
  • An LLM API key (OpenAI / Anthropic) — Set as env var your LangChain model expects
Flow
  1. Define the server configs
    Write a mcp-use config that connects to playwright (stdio via npx), postgres (stdio via uvx), and filesystem (local path scoped).
    → JSON/dict config matching the schema
  2. Wire the agent
    Create an MCPAgent using ChatAnthropic (claude-sonnet-4) and the config above. Max steps = 15.
    → Agent instance ready to .run()
  3. Run a task
    Run: 'Crawl docs.example.com, save each page to ./knowledge/, then index titles into the postgres docs table.' Observe tool calls in logs.
    → Task completes, data lands where expected

Outcome: A scriptable agent you can schedule, deploy, or embed — not tied to a desktop client.
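Step 1's config might look like the sketch below. The playwright and filesystem packages are real, but the postgres server package name and connection string are assumptions; check each server's own README:

```python
# Hypothetical three-server config for this use case. The postgres
# package name and DATABASE_URL are placeholders, not canonical.
AGENT_CONFIG = {
    "mcpServers": {
        "playwright": {
            "command": "npx",
            "args": ["-y", "@playwright/mcp@latest"],
        },
        "postgres": {
            "command": "uvx",
            "args": ["mcp-server-postgres"],  # assumed package name
            "env": {"DATABASE_URL": "postgresql://localhost/docs"},
        },
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "./knowledge"],
        },
    }
}
```

Pass this dict to `MCPClient.from_dict` and hand the resulting client to an `MCPAgent`, as in step 2.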

Pitfalls
  • Agent loops between servers, burning tokens — Set a strict max_steps and use an LLM that follows instructions well; GPT-4o-mini often loops on complex chains, so use a stronger model
  • stdio servers zombied after crash — Always use the async context manager pattern — it handles cleanup; don't manage the process yourself
Pairs with: fastmcp · mcp-agent

Call MCP tools from Python without an LLM

👤 Engineers automating ops tasks · ⏱ ~20 min · intermediate

When to use: You want to invoke an MCP tool as part of a larger Python pipeline, deterministically.

Flow
  1. Connect the client directly
    Use MCPClient to connect to my filesystem MCP. List available tools.
    → Tool names + schemas printed
  2. Call a tool with typed args
    Call write_file with path='./out.txt' and content='hello'. Confirm the return value.
    → File written, no LLM involved
  3. Chain into your business logic
    Wrap this in a function save_report(df) that calls the MCP tool — integrate into my existing Python ETL.
    → Reusable function

Outcome: MCP-as-library: the same servers used by Claude Desktop are also callable from plain Python.

Pitfalls
  • Errors don't bubble up naturally — MCP errors come back as result objects with isError: true; check result.isError after every call and don't assume success
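A sketch of this flow, including the isError check from the pitfall above. Method names follow the tools table on this page (`client.list_tools()`, `client.call_tool()`); in some mcp-use versions the equivalents live on per-server session objects, so verify against your installed release:

```python
# Deterministic MCP tool calls from plain Python: no LLM in the loop.
import asyncio
import os

CONFIG = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
        }
    }
}

def check(result):
    # MCP tool failures arrive as result objects, not raised exceptions.
    if getattr(result, "isError", False):
        raise RuntimeError(f"tool failed: {result.content}")
    return result

async def save_report(text: str) -> None:
    from mcp_use import MCPClient  # local import: see the caveat above

    client = MCPClient.from_dict(CONFIG)
    try:
        await client.create_all_sessions()
        print(await client.list_tools())  # introspect before calling
        result = await client.call_tool(
            "write_file", {"path": "./out.txt", "content": text}
        )
        check(result)
    finally:
        await client.close_all_sessions()

if __name__ == "__main__" and os.environ.get("RUN_MCP_DEMO"):
    asyncio.run(save_report("hello"))
```

Wrapping every call in `check` keeps failures loud; the default result-object behavior silently "succeeds" otherwise.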

Build a router agent that picks the right MCP for each request

👤 Teams shipping agent products · ⏱ ~60 min · advanced

When to use: Users send mixed requests (code, data, web) — one monolithic prompt with 50 tools degrades; you want routing.

Flow
  1. Define server groups
    Split my MCP servers into 3 groups: 'code' (git, github), 'data' (postgres, bigquery), 'web' (firecrawl, playwright).
    → 3 separate agent configs
  2. Add a router layer
    Write a classifier prompt that picks one group based on the user's intent. Use it to instantiate the matching MCPAgent on demand.
    → Classifier returns one of {code, data, web}
  3. Test with mixed traffic
    Run 10 varied requests through the router. Log which group handled each and whether the answer was correct.
    → Accuracy table + latency stats

Outcome: A modular agent system where each request only sees the tools relevant to it — better accuracy, lower token cost.

Pitfalls
  • Edge cases where the request needs two groups (e.g., 'scrape + save to DB') — Define a fourth 'cross' group or fall back to the Orchestrator pattern via mcp-agent
Pairs with: mcp-agent
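The routing shape can be sketched without any LLM at all. A keyword heuristic stands in here for the classifier prompt from step 2, the group contents mirror step 1, and the `CONFIGS` lookup in the trailing comment is hypothetical:

```python
# Router sketch: pick one server group per request so each agent only
# ever sees the tools relevant to it.
KEYWORDS = {
    "code": ("commit", "branch", "pull request", "repo"),
    "data": ("sql", "table", "query", "rows"),
    "web": ("scrape", "crawl", "browser", "url"),
}

def route(request: str) -> str:
    """Return the group ('code' | 'data' | 'web') whose keywords match best."""
    text = request.lower()
    scores = {group: sum(k in text for k in kws) for group, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "web"  # arbitrary default group

# Each group then gets its own config and a lazily built agent, e.g.:
#   client = MCPClient.from_dict(CONFIGS[route(request)])   # CONFIGS: hypothetical
#   agent = MCPAgent(llm=llm, client=client, max_steps=15)
```

In production the heuristic would be replaced by the classifier prompt; a cheap deterministic fallback like this is still useful when the classifier is uncertain.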

Combos

Pair with other MCPs for a 10x effect

mcp-use + mcp-agent

Use mcp-use for connecting servers, mcp-agent for workflow patterns (orchestrator/evaluator)

Build an evaluator-optimizer loop where a writer agent uses mcp-use to access filesystem + git, and a critic agent reviews output using the same servers.
mcp-use + fastmcp

Write a server with FastMCP, then script against it with mcp-use — end-to-end Python agent stack

Server: expose our pricing API via FastMCP. Client: use mcp-use to call it in a pricing-simulation script.
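A sketch of the FastMCP half of this pairing, assuming FastMCP 2.x (where `mcp.tool` accepts a bare function) and a made-up pricing rule:

```python
# Hypothetical pricing server: FastMCP exposes price() as an MCP tool
# that an mcp-use client can then call over stdio.
import importlib.util

def price(sku: str, qty: int) -> float:
    # Placeholder rule: flat 9.99/unit with a 10% break at 100 units.
    unit = 9.99
    return round(unit * qty * (0.9 if qty >= 100 else 1.0), 2)

def build_server():
    from fastmcp import FastMCP  # local import keeps price() testable alone

    mcp = FastMCP("pricing")
    mcp.tool(price)  # register the plain function as an MCP tool
    return mcp

if __name__ == "__main__" and importlib.util.find_spec("fastmcp"):
    build_server().run()  # stdio transport by default
```

On the mcp-use side, the matching config entry would be something like {"command": "python", "args": ["pricing_server.py"]} (filename assumed).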

Tools

What this MCP provides

Tool | Inputs | When to call | Cost
MCPClient(config) | server config dict/path | Entry point for any mcp-use script | free
MCPAgent(llm, client, max_steps) | LangChain chat model + MCPClient | When you want LLM-driven tool selection | LLM calls only
client.list_tools() | server_name? | Introspect what's available before calling | free
client.call_tool(name, args) | tool_name, dict | Direct deterministic invocation, no LLM | depends on tool
MCPServer decorator API | @server.tool() on functions | Less common; FastMCP is usually cleaner for server-building | free

Cost and limits

What it costs

API quota
None from mcp-use itself; depends on the LLM and downstream MCPs
Tokens per call
LLM-driven calls burn tokens — the usual LangChain agent cost model
Money
The library is free; LLM usage is not
Tip
For deterministic flows, use client.call_tool directly and skip the LLM. Reserve MCPAgent for genuinely ambiguous tasks.

Security

Permissions, secrets, blast radius

Credential storage: Per underlying MCP — mcp-use doesn't add a layer of its own
Outbound traffic: LLM provider + every connected MCP

Troubleshooting

Common errors and fixes

ConnectionError when starting stdio server

The command in your config isn't on PATH or the package isn't installed. Test manually: run the same npx -y ... in a terminal first.

Check: which npx && npx -y @modelcontextprotocol/server-filesystem --help
Agent calls tools correctly but answer is wrong

Usually an LLM issue — try a stronger model. GPT-4o-mini and open-source 7B models often misinterpret tool results.

Event loop already running error

You're calling a sync API from within an async context. Use await and the async client methods throughout.

Tools from server A shadow names in server B

Prefix tool names per server in your config, or rely on the library's built-in namespace handling (set namespace=True).

Alternatives

mcp-use compared

Alternative | When to use | Trade-off
mcp-agent | You want workflow patterns (orchestrator, router, evaluator) baked in | More opinionated; less flexible if you want raw LangChain
Official Python MCP SDK | You want the lowest-level client, no LangChain, no abstractions | More plumbing code
LangGraph + MCP | You need stateful multi-turn graphs with checkpoints | Steeper learning curve; overkill for simple agents

More

Resources

📖 Read the official README on GitHub

🐙 Open issues

🔍 All 400+ MCP servers and Skills