● Official IBM 🔑 Bring your own key

IBM Context Forge

by IBM · IBM/mcp-context-forge

IBM's AI gateway for MCP fleets — federate servers, add auth, rate limit, observe, and translate REST/gRPC into MCP at scale.

ContextForge is an open-source gateway, registry, and proxy that sits in front of many MCP / A2A / REST / gRPC backends. It exposes one unified MCP endpoint with centralized auth, rate limiting, OpenTelemetry tracing, and an admin UI. It is built for enterprises that need to govern dozens of MCP servers, not just run one.

Why use it

Key features

Live demo

In practice

Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "mcp-context-forge": {
      "command": "uvx",
      "args": [
        "mcp-context-forge"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "mcp-context-forge": {
      "command": "uvx",
      "args": [
        "mcp-context-forge"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project-level config overrides the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "mcp-context-forge": {
      "command": "uvx",
      "args": [
        "mcp-context-forge"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "mcp-context-forge": {
      "command": "uvx",
      "args": [
        "mcp-context-forge"
      ]
    }
  }
}

Same structure as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "mcp-context-forge",
      "command": "uvx",
      "args": [
        "mcp-context-forge"
      ]
    }
  ]
}

Continue uses an array of server objects instead of a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "mcp-context-forge": {
      "command": {
        "path": "uvx",
        "args": [
          "mcp-context-forge"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads on save.

claude mcp add mcp-context-forge -- uvx mcp-context-forge

One-liner. Verify with claude mcp list; remove with claude mcp remove.

Use cases

Hands-on usage: IBM Context Forge

Centralize 10+ MCP servers behind one gateway

👤 Platform engineers at mid/large orgs · ⏱ ~120 min · advanced

When to use: Different teams run different MCPs. You need one URL for clients, one audit log, one auth story.

Prerequisites
  • Docker/Kubernetes environment — Official images at ghcr.io; Helm chart available
  • An auth provider (or use built-in JWT) — Existing SSO / OIDC / static JWT signer
Steps
  1. Deploy the gateway
    Deploy mcp-contextforge-gateway via Helm with Redis for federation state. Point it at our OIDC provider.
    → Admin UI loads, auth works
  2. Register backends
    Register 3 backend MCPs (github, postgres, our-custom) in the admin UI. Apply rate limits: github=100/min, postgres=30/min.
    → Backends appear as healthy in registry
  3. Repoint clients
    Update teammate Claude Desktop configs to use a single mcp-remote https://mcp-gw.company.com/mcp with their JWT.
    → All backend tools available through one connection
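
The repoint step might look like this in a teammate's claude_desktop_config.json. This is a sketch that assumes mcp-remote as the stdio-to-HTTP bridge; the gateway URL is the example hostname from the step above, and MCP_JWT is a placeholder for the user's token (whether the bridge expands environment variables in header values depends on its version, so check its README):

```json
{
  "mcpServers": {
    "company-gateway": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp-gw.company.com/mcp",
        "--header",
        "Authorization: Bearer ${MCP_JWT}"
      ]
    }
  }
}
```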

Result: One place to manage MCP access across the org — centralized like any other API gateway.

Pitfalls
  • Rate limits applied globally but teams have different needs — Use per-user or per-JWT-claim rate limits via the policy engine — don't apply one limit to all
  • Gateway becomes single point of failure — Run at least 2 replicas with Redis-backed session state; health-check the /health endpoint
Combine with: cloud-run

Virtualize a REST API as MCP without writing a server

👤 Platform engineers without Python/TS bandwidth · ⏱ ~60 min · intermediate

When to use: You have an internal REST API with an OpenAPI spec. You want MCP access without writing fastapi-mcp or FastMCP code.

Prerequisites
  • OpenAPI / Swagger spec for the API — Usually /openapi.json or /swagger.json
Steps
  1. Upload the OpenAPI spec
    In ContextForge admin, register a new REST backend. Upload the OpenAPI spec. Confirm tool auto-generation picked up all endpoints.
    → Tool list matches route list
  2. Configure auth passthrough
    Set up header forwarding so the Authorization header flows from the MCP client to the upstream REST API.
    → Authenticated routes work end-to-end
  3. Filter exposed surface
    Exclude internal/admin routes via path patterns. Add a description override on the 3 most-used tools.
    → Clean, agent-friendly tool list

Result: REST-as-MCP with zero new service code — an OpenAPI spec is enough.

Pitfalls
  • Auto-generated tool names are awful — Set explicit operationIds in your OpenAPI spec or override names in ContextForge per route
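
The operationId fix from the pitfall above, sketched on a hypothetical route (the path and summary are illustrative, not taken from any real API):

```yaml
paths:
  /v1/orders/{id}:
    get:
      # Without an explicit operationId, generators fall back to names
      # like get_v1_orders__id_; the operationId becomes the tool name.
      operationId: get_order
      summary: Fetch a single order by its ID
```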

Add tracing and analytics to all MCP calls across your org

👤 SRE / platform observability leads · ⏱ ~90 min · advanced

When to use: You want to answer 'what did the agents do today?' across every team using MCP.

Prerequisites
  • An OTel backend (Phoenix, Jaeger, Grafana Tempo) — Running endpoint that accepts OTLP
Steps
  1. Enable OTel export
    Configure the gateway's otel.endpoint to point at our Phoenix instance. Include tool name, latency, user, outcome in spans.
    → Spans appear in Phoenix within seconds of calls
  2. Build dashboards
    Create dashboards: top 10 tools by call volume, p95 latency per backend, error rates per user.
    → Dashboards populated
  3. Alert on anomalies
    Alert on: error rate >5% for any backend, or a single user burning >10k calls/hour.
    → Test alerts fire in staging

Result: Org-wide MCP visibility — you know who uses what and when it breaks.

Pitfalls
  • OTel span cardinality explodes with per-request IDs as span names — Keep span names to tool names; put request IDs in attributes, not names
Combine with: sentry
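
The cardinality pitfall above comes down to where each value goes. A minimal Python sketch, using plain dicts to stand in for real OTel spans; the field names are illustrative, not the gateway's actual schema:

```python
def make_span(tool_name: str, request_id: str, user: str,
              latency_ms: float, ok: bool) -> dict:
    """Build a low-cardinality span record: the NAME is just the tool,
    so dashboards aggregate cleanly; high-cardinality values (request
    IDs, user IDs) live in attributes instead."""
    return {
        "name": f"tool/{tool_name}",       # bounded set of values
        "attributes": {                    # unbounded values go here
            "mcp.request_id": request_id,
            "enduser.id": user,
            "mcp.latency_ms": latency_ms,
            "mcp.outcome": "ok" if ok else "error",
        },
    }
```

Grouping by span name then yields one series per tool, while request IDs remain queryable as attributes.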

Combinations

With other MCPs for 10× impact

mcp-context-forge + cloud-run

Deploy ContextForge on Cloud Run, federate GCP-hosted MCPs behind it

Deploy ContextForge to Cloud Run with IAM auth. Register our 3 internal MCPs (also on Cloud Run) as backends.
mcp-context-forge + sentry

Ship gateway traces + errors to Sentry for ops visibility

Configure the gateway's OTel export to also push errors into Sentry for on-call visibility.

Tools

What this MCP provides

Tool | Inputs | When to call | Cost
Gateway federation | N registered backends | Infra-level; not a per-request tool | free
REST → MCP virtualization | OpenAPI spec + target URL | Onboarding a REST service to MCP | passthrough of target API costs
gRPC → MCP translation | gRPC service descriptor | Same as above, for gRPC backends | passthrough
Prompt registry | Jinja2 templates + variables | Share prompts across teams with versioning | free
Resource registry | URI-based resources | Expose static/dynamic org content | free
Admin API / UI | HTTP + web UI | Ops/config tasks | free

Costs & limits

What it costs to run

API quota
Self-hosted — whatever your infra supports
Overhead per call
Gateway adds ~50ms + minimal schema overhead
Cost in €
Open source (Apache 2.0); you pay for infra + backends
Tip
Start with the SQLite backend for <10 servers; move to Redis federation only when you need multi-node HA

Security

Permissions, secrets, scope

Credential storage: Keep JWT signing keys in a secret manager; never in env vars baked into container images
Data egress: Gateway → all configured backends; OTel → tracing backend

Troubleshooting

Common errors and fixes

Backend marked unhealthy but works when tested directly

Health checks use HEAD or GET /; your backend may only respond to POST. Configure health_check.path per backend.
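
A per-backend override might look like the fragment below. The key names are an assumption extrapolated from the health_check.path setting mentioned above; check the ContextForge docs for the exact schema before copying:

```json
{
  "health_check": {
    "path": "/mcp",
    "method": "POST"
  }
}
```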

JWT validation fails

Check iss and aud claims match gateway config. Also verify the JWKS endpoint is reachable from the gateway pod.
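
The iss/aud check can be reproduced locally when debugging a rejected token. A minimal Python sketch over an already-decoded claims dict; signature verification via the JWKS endpoint is assumed to happen separately:

```python
def check_claims(claims: dict, expected_iss: str, expected_aud: str) -> list[str]:
    """Return a list of problems with the token's iss/aud claims;
    an empty list means both claims match the gateway config."""
    problems = []
    if claims.get("iss") != expected_iss:
        problems.append(
            f"iss mismatch: got {claims.get('iss')!r}, want {expected_iss!r}")
    aud = claims.get("aud")
    # Per RFC 7519, 'aud' may be a single string or a list of strings.
    auds = aud if isinstance(aud, list) else [aud]
    if expected_aud not in auds:
        problems.append(f"aud mismatch: got {aud!r}, want {expected_aud!r}")
    return problems
```

Run it on the decoded payload of a failing token to see which claim the gateway is rejecting.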

Rate limit too aggressive during spikes

Switch from fixed-window to token-bucket policy; set burst=5× average.

Admin UI login loops

Redirect URI in your OIDC provider must match /auth/callback on the gateway's external URL — verify it's set for the exact public hostname.

Alternatives

IBM Context Forge vs. others

Alternative | When instead | Trade-off
Kong / Apigee + custom plugins | You already run these and want to extend rather than add a new gateway | Needs plugin development; MCP not first-class
mcp-use server namespace | Single-developer use case — just wire multiple MCPs client-side | No central governance; fine for individuals, not orgs
Cloudflare AI Gateway | You want a hosted SaaS gateway, not self-hosted | Less MCP-specific functionality; primarily LLM-traffic focus

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and skills