
MCP-Bridge

By SecretiveShell · SecretiveShell/MCP-Bridge

Use MCP tools from any OpenAI-compatible client — LibreChat, Open WebUI, your custom app — without native MCP support. Middleware that translates.

MCP-Bridge sits between your OpenAI-compatible client and inference backend. It advertises MCP server tools as OpenAI function-calling tools, dispatches calls, and returns results to complete the loop. Useful when your favorite chat UI doesn't speak MCP but speaks OpenAI.
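Concretely, the bridge is driven by a single config.json that names the upstream backend and the MCP servers to expose. A minimal sketch: the top-level sections (inference_server, mcp_servers) follow the project's config, but the exact field names and values here are illustrative assumptions, so check the README for the authoritative schema.

```json
{
  "inference_server": {
    "base_url": "https://api.openai.com/v1",
    "api_key": "sk-..."
  },
  "mcp_servers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```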


Installation

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "bridge": {
      "command": "uvx",
      "args": [
        "MCP-Bridge"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart the app after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "bridge": {
      "command": "uvx",
      "args": [
        "MCP-Bridge"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project-level config takes precedence over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "bridge": {
      "command": "uvx",
      "args": [
        "MCP-Bridge"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then select "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "bridge": {
      "command": "uvx",
      "args": [
        "MCP-Bridge"
      ]
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "bridge",
      "command": "uvx",
      "args": [
        "MCP-Bridge"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "bridge": {
      "command": {
        "path": "uvx",
        "args": [
          "MCP-Bridge"
        ]
      }
    }
  }
}

Add it under context_servers. Zed hot-reloads on save.

claude mcp add bridge -- uvx MCP-Bridge

A one-line command. Verify with claude mcp list; remove with claude mcp remove.

Use cases

MCP-Bridge in practice

Add MCP tools to LibreChat / any OpenAI-compatible chat UI

👤 Self-hosters of OSS chat frontends · ⏱ ~30 min · intermediate

When to use: You're running LibreChat, Big-AGI, or a custom app that calls /v1/chat/completions and wants tool use, but it doesn't speak MCP.

Prerequisites
  • An OpenAI-compatible inference backend — OpenAI, Anthropic-via-proxy, vLLM, Ollama, etc.
  • At least one MCP server you want to expose — filesystem, fetch, postgres — whatever you've got
Steps
  1. Write config.json
    Write me an MCP-Bridge config.json that proxies OpenAI and exposes filesystem MCP (rooted at /data) and fetch MCP.
    → Valid config with inference_server and mcp_servers sections
  2. Run via Docker
    Give me the docker run command to start MCP-Bridge using this config on port 8000.
    → Working docker command with volume mounts
  3. Point the chat UI at the bridge
    Show me what API base URL to set in LibreChat to use the bridge instead of OpenAI directly.
    → Config pointing to http://localhost:8000/v1

Result: LibreChat conversations can now call filesystem and fetch tools, transparently.
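For step 3, in LibreChat specifically, the bridge becomes a custom endpoint in librechat.yaml. A sketch under the assumption that the bridge listens on localhost:8000; the key names follow LibreChat's custom-endpoint config, and the endpoint name, key variable, and model list are placeholders:

```yaml
# librechat.yaml: route an endpoint through MCP-Bridge instead of OpenAI.
endpoints:
  custom:
    - name: "MCP-Bridge"
      apiKey: "${OPENAI_API_KEY}"
      baseURL: "http://localhost:8000/v1"
      models:
        default: ["gpt-4o"]
```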

Pitfalls
  • Not all OpenAI-compatible clients support tool calls — verify your UI supports functions in responses before wiring it up; check its docs for "tool calling" support
  • Streaming responses are not yet implemented — disable streaming in the client and use non-streaming endpoints
Pairs with: filesystem · fetch

Give your own Python/JS agent framework MCP tool access

👤 Devs building custom agents on OpenAI SDK ⏱ ~25 min intermediate

When to use: You're building with the raw OpenAI SDK (or LangChain's OpenAI client) and want to plug into the MCP ecosystem without rewriting the agent.

Steps
  1. Start MCP-Bridge locally
    Run MCP-Bridge with upstream set to OpenAI and these MCP servers: [list].
    → Bridge listening on :8000
  2. Point OpenAI client base_url at the bridge
    Show me Python SDK init: client = OpenAI(base_url='http://localhost:8000/v1', api_key=...). Then call chat completions.
    → Code snippet that works unchanged

Result: Zero-touch tool access for your existing agent code.
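The "works unchanged" claim holds because the bridge speaks the plain chat-completions wire format, so no SDK is even required. A standard-library sketch; the bridge URL and model name are assumptions, and the payload deliberately omits a tools field since the bridge injects tool definitions itself:

```python
import json
from urllib import request

BRIDGE_URL = "http://localhost:8000/v1/chat/completions"  # assumed bridge address

def build_payload(messages, model="gpt-4o"):
    """Plain chat-completions body. No 'tools' field: MCP-Bridge
    injects the tool definitions from its attached MCP servers."""
    return {"model": model, "messages": messages}

def call_bridge(messages, model="gpt-4o", api_key="unused"):
    """POST to the bridge exactly as you would to OpenAI."""
    req = request.Request(
        BRIDGE_URL,
        data=json.dumps(build_payload(messages, model)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Swapping this for an OpenAI SDK client is the same one-line change: only the base URL moves.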

Pitfalls
  • The bridge is a single point of failure — for production, run it under supervisord/systemd with a healthcheck endpoint

Combinations

Combine with other MCPs for 10x efficiency

bridge + filesystem + fetch

Budget self-hosted ChatGPT replacement with real tool use

Expose filesystem (rooted at ~/Notes) and fetch via MCP-Bridge, then use LibreChat to browse + summarize.

Tools

What this MCP exposes

Tool · Input · When called · Cost
POST /v1/chat/completions · OpenAI-compatible messages; tools omitted (auto-injected) · Main entrypoint — drop-in for OpenAI · 1 LLM call + N tool calls
GET /tools · — · Discover what's available · free
SSE /bridge · — · Attach an external MCP client to the bridge over SSE · free

Costs and limits

Operating costs

API quota
Pass-through — whatever your upstream inference provider charges
Tokens per call
The bridge adds ~100-500 tokens of tool definitions per request
Price
Free (MIT). You pay for your LLM plus wherever you host it.
Only attach the MCP servers you need — every attached tool bloats the system prompt.
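The two figures above compound: the overhead is roughly (number of exposed tools) times (tokens per tool definition), paid on every request. A back-of-envelope sketch; the per-tool figure is an assumption chosen to sit inside the ~100-500 range quoted above, not a measurement:

```python
# Every attached MCP tool adds its JSON schema to every request the
# bridge forwards upstream. Illustrative estimate only.
AVG_TOKENS_PER_TOOL = 125  # assumed average size of one tool definition

def request_overhead(num_tools: int, tokens_per_tool: int = AVG_TOKENS_PER_TOOL) -> int:
    """Extra prompt tokens added to each upstream call."""
    return num_tools * tokens_per_tool

# Three servers exposing four tools each:
print(request_overhead(12))  # 1500 extra tokens on every single call
```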

Security

Permissions, secrets, and blast radius

Credential storage: The upstream API key and MCP server creds live in config.json; lock down its file permissions
Data egress: Requests go to your configured upstream (e.g. OpenAI) plus whichever MCP servers you attach
Never: Expose the bridge to the internet without enabling bearer auth

Troubleshooting

Common errors and fixes

Client says 'tool_use not supported'

Upstream model or client UI doesn't support function calling. Use a model that does (gpt-4o, claude, llama 3.1+).

MCP server connection refused

Check that the command in config.json actually runs. The bridge launches it as a subprocess; test it manually: npx -y the-mcp.

401 from bridge when auth enabled

Set Authorization: Bearer <key> header; the key must be in config under security.auth.keys.

Alternatives

How MCP-Bridge compares

Alternative · When to use · Trade-offs
Open WebUI native MCP · You specifically use Open WebUI 0.6.31+ · Built-in — no bridge needed, but Open WebUI only
LiteLLM with custom callbacks · You want multi-provider routing + tool injection · More complex; LiteLLM doesn't natively speak MCP either
mcpo · You want to expose MCP tools as plain OpenAPI for non-LLM clients too · Different shape — OpenAPI-first rather than chat-completions-first

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and Skills