
mcp-use

by mcp-use · mcp-use/mcp-use

Python library that wires many MCP servers into one LangChain agent — or runs them headless without an LLM.

mcp-use is a client-side Python framework. Point it at N MCP server configs (stdio or HTTP), wrap them in an MCPAgent with any LangChain-compatible LLM, and you have a working multi-server agent. Also supports direct MCPClient tool calls without an LLM — useful for scripted automations.
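The wiring described above can be sketched in a few lines. The filesystem server entry, model name, and task string below are illustrative, and the third-party imports live inside the coroutine so the config dict stays usable on its own.

```python
import asyncio

# Server config: the same "mcpServers" schema the desktop clients use.
# The filesystem server entry is illustrative.
CONFIG = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
        }
    }
}

async def main() -> str:
    # Local imports: the config above stays importable even without
    # mcp-use / langchain-openai installed.
    from langchain_openai import ChatOpenAI  # needs OPENAI_API_KEY
    from mcp_use import MCPAgent, MCPClient

    client = MCPClient.from_dict(CONFIG)
    agent = MCPAgent(llm=ChatOpenAI(model="gpt-4o"), client=client, max_steps=15)
    try:
        return await agent.run("List the files in the current directory.")
    finally:
        # Closes every stdio subprocess the client spawned.
        await client.close_all_sessions()

# Run with: asyncio.run(main())
# (requires mcp-use, langchain-openai, and an API key)
```

Swap in any LangChain-compatible chat model; the agent loop and tool plumbing stay the same.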

Why use it

Key features

Live demo

Preview in practice

Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "mcp-use": {
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "mcp-use": {
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project config overrides the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "mcp-use": {
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "mcp-use": {
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "mcp-use",
      "command": "uvx",
      "args": [
        "mcp-use"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "mcp-use": {
      "command": {
        "path": "uvx",
        "args": [
          "mcp-use"
        ]
      }
    }
  }
}

Add it under context_servers. Zed hot-reloads on save.

claude mcp add mcp-use -- uvx mcp-use

A single line. Verify with claude mcp list. Remove with claude mcp remove.

Use cases

Practical uses: mcp-use

Build a custom agent that uses playwright + filesystem + postgres

👤 Python devs building vertical agents ⏱ ~45 min intermediate

When to use it: You need a repeatable automation (not Claude Desktop) that chains browser + files + DB.

Prerequisites
  • Python 3.10+, uv or pip — Standard setup
  • An LLM API key (OpenAI / Anthropic) — Set as env var your LangChain model expects
Steps
  1. Define the server configs
    Write an mcp-use config that connects to playwright (stdio via npx), postgres (stdio via uvx), and filesystem (scoped to a local path).
    → JSON/dict config matching the schema
  2. Wire the agent
    Create an MCPAgent using ChatAnthropic (claude-sonnet-4) and the config above. Cap max_steps at 15.
    → Agent instance ready to .run()
  3. Run a task
    Run: 'Crawl docs.example.com, save each page to ./knowledge/, then index titles into the postgres docs table.' Observe the tool calls in the logs.
    → Task completes, data lands where expected
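Step 1's config might look like the sketch below. The package names, args, and DATABASE_URI value are assumptions; check each server's own docs for the exact invocation.

```python
import json

# Three stdio servers in one config. Package names and args are
# illustrative, not canonical.
config = {
    "mcpServers": {
        "playwright": {
            "command": "npx",
            "args": ["-y", "@playwright/mcp@latest"],
        },
        "postgres": {
            "command": "uvx",
            "args": ["postgres-mcp"],
            "env": {"DATABASE_URI": "postgresql://localhost/docs"},
        },
        "filesystem": {
            "command": "npx",
            # The trailing path scopes the server to ./knowledge only.
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "./knowledge"],
        },
    }
}

print(json.dumps(config, indent=2))
```

From there, step 2 is MCPClient.from_dict(config) plus MCPAgent(llm=..., client=client, max_steps=15).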

Result: A scriptable agent you can schedule, deploy, or embed — not tied to a desktop client.

Pitfalls
  • Agent loops between servers, burning tokens — Set a strict max_steps and use an LLM that follows instructions well; GPT-4o-mini often loops on complex chains, so use a stronger model
  • stdio servers left as zombies after a crash — Always use the async context manager pattern; it handles cleanup, so don't manage the process yourself
Combine with: fastmcp · mcp-agent

Call MCP tools from Python without an LLM

👤 Engineers automating ops tasks ⏱ ~20 min intermediate

When to use it: You want to invoke an MCP tool as part of a larger Python pipeline, deterministically.

Steps
  1. Connect the client directly
    Use MCPClient to connect to my filesystem MCP. List the available tools.
    → Tool names + schemas printed
  2. Call a tool with typed args
    Call write_file with path='./out.txt' and content='hello'. Confirm the return value.
    → File written, no LLM involved
  3. Chain into your business logic
    Wrap this in a function save_report(df) that calls the MCP tool — integrate it into my existing Python ETL.
    → Reusable function
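The three steps condense into one helper. This sketch follows the client surface listed in the Tools section (list_tools / call_tool); exact method names can vary between versions, and a plain string payload stands in for the DataFrame mentioned in step 3.

```python
import asyncio

# Deterministic MCP tool call, no LLM involved.
FS_CONFIG = {
    "mcpServers": {
        "filesystem": {  # illustrative server entry
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
        }
    }
}

async def save_report(text: str, path: str = "./out.txt"):
    from mcp_use import MCPClient  # local import: optional dependency

    client = MCPClient.from_dict(FS_CONFIG)
    try:
        print(await client.list_tools())  # introspect before calling
        result = await client.call_tool("write_file",
                                        {"path": path, "content": text})
        # MCP errors arrive as result objects, not exceptions (see Pitfalls).
        if getattr(result, "isError", False):
            raise RuntimeError(f"write_file failed: {result}")
        return result
    finally:
        await client.close_all_sessions()

# Run with: asyncio.run(save_report("hello"))
```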

Result: MCP-as-library: the same servers Claude Desktop uses are also callable from plain Python.

Pitfalls
  • Errors don't bubble up naturally — MCP errors come as result objects with isError: true — Check result.isError after every call; don't assume success

Build a router agent that picks the right MCP for each request

👤 Teams shipping agent products ⏱ ~60 min advanced

When to use it: Users send mixed requests (code, data, web) — one monolithic prompt with 50 tools degrades; you want routing.

Steps
  1. Define server groups
    Split my MCP servers into 3 groups: 'code' (git, github), 'data' (postgres, bigquery), 'web' (firecrawl, playwright).
    → 3 separate agent configs
  2. Add a router layer
    Write a classifier prompt that picks one group based on the user's intent. Use it to instantiate the matching MCPAgent on demand.
    → Classifier returns one of {code, data, web}
  3. Test with mixed traffic
    Run 10 varied requests through the router. Log which group handled each and whether the answer was correct.
    → Accuracy table + latency stats
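As a baseline for step 2, the router can start as a deterministic keyword classifier (group names from step 1; the keyword lists are made up) and be swapped for an LLM prompt once the plumbing works.

```python
# Deterministic baseline router: keyword heuristics pick a server group.
GROUPS = {
    "code": {"git", "commit", "branch", "pull request", "repo"},
    "data": {"sql", "query", "table", "postgres", "bigquery"},
    "web":  {"scrape", "crawl", "browser", "url", "website"},
}

def classify(request: str) -> str:
    text = request.lower()
    scores = {g: sum(kw in text for kw in kws) for g, kws in GROUPS.items()}
    best = max(scores, key=scores.get)
    # No keyword hit at all -> fall back to a default group.
    return best if scores[best] > 0 else "web"

print(classify("scrape the pricing page and summarize"))  # → web
print(classify("open a pull request on the repo"))        # → code
```

Instantiating the matching MCPAgent lazily, keyed on classify(request), keeps each request's tool surface small.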

Result: A modular agent system where each request only sees the tools relevant to it — better accuracy, lower token cost.

Pitfalls
  • Edge cases where the request needs two groups (e.g., 'scrape + save to DB') — Define a fourth 'cross' group or fall back to the Orchestrator pattern via mcp-agent
Combine with: mcp-agent

Combinations

Pair it with other MCPs for a 10x effect

mcp-use + mcp-agent

Use mcp-use for connecting servers, mcp-agent for workflow patterns (orchestrator/evaluator)

Build an evaluator-optimizer loop where a writer agent uses mcp-use to access filesystem + git, and a critic agent reviews output using the same servers.
mcp-use + fastmcp

Write a server with FastMCP, then script against it with mcp-use — end-to-end Python agent stack

Server: expose our pricing API via FastMCP. Client: use mcp-use to call it in a pricing-simulation script.

Tools

What this MCP exposes

Tool | Inputs | When to call | Cost
MCPClient(config) | server config dict/path | Entry point for any mcp-use script | free
MCPAgent(llm, client, max_steps) | LangChain chat model + MCPClient | When you want LLM-driven tool selection | LLM calls only
client.list_tools() | server_name? | Introspect what's available before calling | free
client.call_tool(name, args) | tool_name, dict | Direct deterministic invocation — no LLM | depends on tool
MCPServer decorator API | @server.tool() on functions | Less common; FastMCP is usually cleaner for server-building | free

Cost and limits

Execution cost

API quota
None from mcp-use itself; depends on the LLM and downstream MCPs
Tokens per call
LLM-driven calls burn tokens — the usual LangChain agent cost model
Monetary
The library is free; LLM usage is not
Tip
For deterministic flows, use client.call_tool directly — skip the LLM. Reserve MCPAgent for genuinely ambiguous tasks.

Security

Permissions, secrets, scope

Credential storage: Per underlying MCP — mcp-use doesn't add a layer
Data egress: LLM provider + every connected MCP

Troubleshooting

Common errors and fixes

ConnectionError when starting stdio server

The command in your config isn't on PATH or the package isn't installed. Test manually: run the same npx -y ... in a terminal first.

Check: which npx && npx -y @modelcontextprotocol/server-filesystem --help
Agent calls tools correctly but answer is wrong

Usually an LLM issue — try a stronger model. GPT-4o-mini and open-source 7B models often misinterpret tool results.

Event loop already running error

You're calling a sync API from within an async context. Use await and the async client methods throughout.

Tools from server A shadow names in server B

Prefix tool names per server in your config, or rely on the library's built-in namespace handling (set namespace=True).
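If you go the manual-prefix route, a small helper (hypothetical; not part of mcp-use) makes the collision handling explicit:

```python
# Hypothetical helper: disambiguate tools from multiple servers by
# prefixing each tool name with its server name.
def namespaced(server_tools: dict[str, list[str]]) -> dict[str, tuple[str, str]]:
    """Map 'server.tool' -> (server, tool) for every connected server."""
    out: dict[str, tuple[str, str]] = {}
    for server, tools in server_tools.items():
        for tool in tools:
            out[f"{server}.{tool}"] = (server, tool)
    return out

tools = namespaced({
    "fs-a": ["read_file", "write_file"],
    "fs-b": ["read_file"],  # would shadow fs-a's read_file unprefixed
})
print(sorted(tools))  # → ['fs-a.read_file', 'fs-a.write_file', 'fs-b.read_file']
```

The router then splits the prefixed name back into (server, tool) before dispatching the call.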

Alternatives

mcp-use vs others

Alternative | When to use it | Trade-off
mcp-agent | You want workflow patterns (orchestrator, router, evaluator) baked in | More opinionated; less flexible if you want raw LangChain
Official Python MCP SDK | You want the lowest-level client — no LangChain, no abstractions | More plumbing code
LangGraph + MCP | You need stateful multi-turn graphs with checkpoints | Steeper learning curve; overkill for simple agents

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse 400+ MCP servers and Skills