● Community · SecretiveShell · ⚡ Instant

MCP-Bridge

by SecretiveShell · SecretiveShell/MCP-Bridge

Use MCP tools from any OpenAI-compatible client — LibreChat, Open WebUI, your custom app — without native MCP support. Middleware that translates.

MCP-Bridge sits between your OpenAI-compatible client and inference backend. It advertises MCP server tools as OpenAI function-calling tools, dispatches calls, and returns results to complete the loop. Useful when your favorite chat UI doesn't speak MCP but speaks OpenAI.
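
Concretely, each MCP tool's name, description, and input schema map onto an OpenAI function-calling tool. A sketch of what the bridge would advertise for a hypothetical fetch tool (the field values are illustrative, not read from a real server):

{
  "type": "function",
  "function": {
    "name": "fetch",
    "description": "Fetch a URL and return its contents",
    "parameters": {
      "type": "object",
      "properties": {
        "url": { "type": "string", "description": "The URL to fetch" }
      },
      "required": ["url"]
    }
  }
}

When the model replies with a tool call for fetch, the bridge forwards the arguments to the matching MCP server, feeds the result back to the model, and the client only ever sees the final answer.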

Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "bridge": {
      "command": "uvx",
      "args": [
        "MCP-Bridge"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "bridge": {
      "command": "uvx",
      "args": [
        "MCP-Bridge"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project config takes precedence over the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "bridge": {
      "command": "uvx",
      "args": [
        "MCP-Bridge"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "bridge": {
      "command": "uvx",
      "args": [
        "MCP-Bridge"
      ]
    }
  }
}

Same structure as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "bridge",
      "command": "uvx",
      "args": [
        "MCP-Bridge"
      ]
    }
  ]
}

Continue uses an array of server objects instead of a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "bridge": {
      "command": {
        "path": "uvx",
        "args": [
          "MCP-Bridge"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads on save.

claude mcp add bridge -- uvx MCP-Bridge

One-liner. Verify with claude mcp list. Remove with claude mcp remove bridge.

Use cases

MCP-Bridge in practice

Add MCP tools to LibreChat / any OpenAI-compatible chat UI

👤 Self-hosters of OSS chat frontends · ⏱ ~30 min · intermediate

When to use: You're running LibreChat, Big-AGI, or a custom app that calls /v1/chat/completions and wants tool use, but it doesn't speak MCP.

Prerequisites
  • An OpenAI-compatible inference backend — OpenAI, Anthropic-via-proxy, vLLM, Ollama, etc.
  • At least one MCP server you want to expose — filesystem, fetch, postgres — whatever you've got
Steps
  1. Write config.json (see the config sketch after these steps)
    Write me an MCP-Bridge config.json that proxies OpenAI and exposes filesystem MCP (rooted at /data) and fetch MCP.
    → Valid config with inference_server and mcp_servers sections
  2. Run via Docker
    Give me the docker run command to start MCP-Bridge using this config on port 8000.
    → Working docker command with volume mounts
  3. Point the chat UI at the bridge
    Show me what API base URL to set in LibreChat to use the bridge instead of OpenAI directly.
    → Config pointing to http://localhost:8000/v1
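
A minimal config.json sketch for step 1, following the inference_server plus mcp_servers shape from the MCP-Bridge README (exact keys can differ between versions; the filesystem and fetch commands are the usual reference-server invocations, and the API key is a placeholder):

{
  "inference_server": {
    "base_url": "https://api.openai.com/v1",
    "api_key": "sk-..."
  },
  "mcp_servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/data"]
    },
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}

For step 2, the docker run typically mounts this file into the container and publishes port 8000 (image name and in-container config path vary; check the repo's compose file). For step 3, set LibreChat's OpenAI base URL to http://localhost:8000/v1.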

Result: LibreChat conversations can now call filesystem and fetch tools, transparently.

Pitfalls
  • Not all OpenAI-compatible clients support tool calls — verify your UI handles tool calls in responses before wiring it up; check its docs for "tool calling" support
  • Streaming responses not yet implemented — disable streaming in the client; use non-streaming endpoints
Combine with: filesystem · fetch

Give your own Python/JS agent framework MCP tool access

👤 Devs building custom agents on the OpenAI SDK · ⏱ ~25 min · intermediate

When to use: You're building with the raw OpenAI SDK (or LangChain's OpenAI client) and want to plug into the MCP ecosystem without rewriting the agent.

Steps
  1. Start MCP-Bridge locally
    Run MCP-Bridge with upstream set to OpenAI and these MCP servers: [list].
    → Bridge listening on :8000
  2. Point the OpenAI client's base_url at the bridge (see the sketch after these steps)
    Show me Python SDK init: client = OpenAI(base_url='http://localhost:8000/v1', api_key=...). Then call chat completions.
    → Code snippet that works unchanged
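
A minimal sketch of step 2, assuming the bridge runs locally without bearer auth (with auth enabled, api_key must match a key configured on the bridge):

from openai import OpenAI

# Same SDK, different base_url: the bridge injects the MCP tool
# definitions and runs any tool calls before the final answer returns.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused-without-auth")

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "List the files under /data."}],
)
print(resp.choices[0].message.content)

The agent code stays unchanged; only the base_url moves.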

Result: Zero-touch tool access for your existing agent code.

Pitfalls
  • Bridge is a single point of failure — for prod, run it under supervisord/systemd and monitor its healthcheck endpoint

Combinations

With other MCPs for 10x impact

bridge + filesystem + fetch

Budget self-hosted ChatGPT replacement with real tool use

Expose filesystem (rooted at ~/Notes) and fetch via MCP-Bridge, then use LibreChat to browse + summarize.

Tools

What this MCP provides

Tool | Inputs | When to call | Cost
POST /v1/chat/completions | OpenAI-compatible messages; tools omitted (auto-injected) | Main entrypoint — drop-in for OpenAI | 1 LLM call + N tool calls
GET /tools | (none) | Discover what's available | free
SSE /bridge | (none) | Attach an external MCP client to the bridge over SSE | free
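
A quick check of what the bridge exposes, using the GET /tools endpoint from the table (a sketch against a local bridge on port 8000; the response shape may vary by version):

import requests

# Lists the MCP tools the bridge has discovered and will auto-inject
# into /v1/chat/completions requests.
print(requests.get("http://localhost:8000/tools").json())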

Costs & limits

What it costs to run

API quota
Pass-through — whatever your upstream inference provider charges
Tokens per call
Bridge adds ~100-500 tokens of tool definitions per request
Cost
Free (MIT). You pay for your LLM plus wherever you host it.
Tip
Only attach MCP servers you need — every attached tool bloats the system prompt.

Security

Permissions, secrets, scope

Credential storage: Upstream API key and MCP server creds live in config.json; lock down file permissions
Data egress: Requests go to your configured upstream (e.g. OpenAI) plus whichever MCP servers you attach
Never: Expose the bridge to the internet without enabling bearer auth

Troubleshooting

Common errors and fixes

Client says 'tool_use not supported'

Upstream model or client UI doesn't support function calling. Use a model that does (gpt-4o, claude, llama 3.1+).

MCP server connection refused

Check that the command in config.json actually runs. The bridge launches it as a subprocess; test it manually, e.g. npx -y <package>.

401 from bridge when auth enabled

Send an Authorization: Bearer <key> header; the key must be listed in the config under security.auth.keys.
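
A hedged sketch of the config fragment implied by the security.auth.keys path above (only that path comes from this page; the surrounding structure is an assumption, so check the README):

{
  "security": {
    "auth": {
      "enabled": true,
      "keys": ["my-bridge-key"]
    }
  }
}

The client then sends Authorization: Bearer my-bridge-key.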

Alternatives

MCP-Bridge vs. others

Alternative | When instead | Trade-off
Open WebUI native MCP | You specifically use Open WebUI 0.6.31+ | Built-in — no bridge needed, but Open WebUI only
LiteLLM with custom callbacks | You want multi-provider routing + tool injection | More complex; LiteLLM doesn't natively speak MCP either
mcpo | You want to expose MCP tools as plain OpenAPI for non-LLM clients too | Different shape — OpenAPI-first rather than chat-completions-first

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and skills