● Community comet-ml ⚡ Instant

opik-mcp

by comet-ml · comet-ml/opik-mcp

Comet's official Opik MCP — manage prompts, projects, traces, and metrics of your LLM apps from Claude or Cursor without switching tabs.

Opik is an LLM observability platform (prompts, traces, evals, datasets). This official MCP gives your IDE/agent access to those primitives: list traces, pull prompts, create datasets, inspect metrics. Works with Opik Cloud or self-hosted.


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.
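
On Opik Cloud the server also needs credentials. MCP clients can pass them via an `env` block; a sketch assuming the `OPIK_API_KEY` variable the server documents (some versions read an `--apiKey` server argument instead, so check the README):

```json
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": ["-y", "opik-mcp"],
      "env": {
        "OPIK_API_KEY": "<your-api-key>"
      }
    }
  }
}
```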

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. A project-level config overrides the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  }
}

Same structure as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "opik",
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  ]
}

Continue uses an array of server objects instead of a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "opik": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "opik-mcp"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads on save.

claude mcp add opik -- npx -y opik-mcp

One-liner. Verify with claude mcp list; remove with claude mcp remove opik.
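
The CLI can pass credentials at registration time too. A sketch assuming the `OPIK_API_KEY` environment variable (`-e` is standard `claude mcp add` syntax; whether this build reads the variable or wants an `--apiKey` server argument may vary by version):

```shell
# Register opik-mcp with its API key injected as an environment variable.
claude mcp add opik -e OPIK_API_KEY=<your-api-key> -- npx -y opik-mcp
```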

Use cases

Hands-on usage: opik-mcp

Pull a production trace into your IDE to debug a bad LLM response

👤 LLM app developers · ⏱ ~15 min · intermediate

When to use: A user reports a wrong answer; the trace is in Opik; you want to inspect it without leaving Cursor.

Prerequisites
  • Opik API key — comet.com/site > API Keys (or self-hosted admin)
Steps
  1. Find the trace
    Search traces in project 'prod-chatbot' where output contains 'I cannot help with that'. Last 24h.
    → Matching trace IDs + timestamps
  2. Inspect
    Open trace ID abc123. Show me the full message chain, tools called, and intermediate reasoning.
    → Full trace object
  3. Form hypothesis
    Why might the model have refused? Compare this trace to a successful one on the same prompt template.
    → Diff + hypothesis

Outcome: Faster trace-driven debugging without app-switching.

Pitfalls
  • PII in traces — Configure Opik's redaction before enabling MCP access broadly
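
The PII caveat can also be mitigated at the application layer. An illustrative sketch (an assumption, not Opik's built-in redaction): scrub obvious patterns before a trace payload is logged anywhere.

```python
import re

# Hypothetical pre-logging scrubber: mask obvious PII patterns
# (emails, phone-like numbers) before a trace payload leaves the app.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "<phone>"),
]

def redact(text: str) -> str:
    """Replace each matched PII pattern with a placeholder token."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact jane.doe@example.com or +1 (555) 123-4567."))
# → Contact <email> or <phone>.
```

Real redaction belongs server-side in Opik's own configuration; this only illustrates the shape of the rule.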

Iterate on a prompt template with version tracking

👤 Prompt engineers · ⏱ ~25 min · advanced

When to use: You're tuning a system prompt and want each version saved to Opik for rollback.

Steps
  1. Pull current version
    Get latest version of prompt 'support-agent-system'.
    → Current prompt body
  2. Edit and commit
    Propose a change to handle escalations better. Show diff. Commit as a new version with message 'add escalation path'.
    → Diff + new version ID
  3. Eval against dataset
    Run this new version against dataset 'support-eval-v1'. Compare pass rate vs previous version.
    → Metric comparison

Outcome: Data-driven prompt changes, version-controlled.

Pitfalls
  • No guardrails — a regressive prompt ships straight to prod — Use Opik's experiment framework: don't promote until pass rate ≥ baseline
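
That promotion rule is simple enough to encode. A minimal sketch (a hypothetical helper, not part of opik-mcp) of the "don't promote unless pass rate ≥ baseline" gate:

```python
def should_promote(new_pass_rate: float, baseline_pass_rate: float,
                   min_margin: float = 0.0) -> bool:
    """Promote a prompt version only if it doesn't regress vs the baseline.

    min_margin demands a strict improvement (e.g. 0.02 for +2 points).
    """
    return new_pass_rate >= baseline_pass_rate + min_margin

# e.g. new version scores 0.93 vs a 0.91 baseline on 'support-eval-v1'
print(should_promote(0.93, 0.91))  # → True
```

Wire the two pass rates from the eval step into this check before committing the new version as the production default.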

Generate a weekly LLM app health report

👤 Eng leads, LLM app PMs · ⏱ ~30 min · intermediate

When to use: You want a Monday-morning digest of cost, latency, error rate, and top failure categories.

Steps
  1. Pull last week's metrics
    For project 'prod-chatbot': total traces, total tokens, avg latency p50/p95, error count — over last 7 days.
    → Metrics block
  2. Classify failures
    Sample 20 failed traces. Cluster by failure mode. Rank clusters by frequency.
    → Failure taxonomy
  3. Write the digest
    Compose a Markdown digest with the metrics and top 3 failure modes, ready for Slack.
    → Shareable report
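
Once the metrics and failure clusters come back, assembling the digest is plain templating. A sketch with invented field names (map them to whatever get_metrics actually returns):

```python
def build_digest(project: str, metrics: dict, failure_modes: list) -> str:
    """Render a Slack-ready Markdown digest from a metrics dict and
    (failure_mode, count) pairs sorted by frequency."""
    lines = [
        f"# Weekly LLM health: {project}",
        f"- Traces: {metrics['traces']}",
        f"- Tokens: {metrics['tokens']:,}",
        f"- Latency p50/p95: {metrics['p50_ms']} ms / {metrics['p95_ms']} ms",
        f"- Errors: {metrics['errors']}",
        "",
        "## Top failure modes",
    ]
    for i, (mode, count) in enumerate(failure_modes[:3], start=1):
        lines.append(f"{i}. {mode} ({count}x)")
    return "\n".join(lines)

digest = build_digest(
    "prod-chatbot",
    {"traces": 1840, "tokens": 9_200_000, "p50_ms": 640, "p95_ms": 2100, "errors": 37},
    [("refusal on benign input", 14), ("tool timeout", 9), ("hallucinated order ID", 6)],
)
print(digest)
```

The numbers above are placeholders; in practice the agent fills them from steps 1 and 2.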

Outcome: Weekly LLM ops awareness without manual dashboard time.

Pitfalls
  • Metric drift as your app evolves — Version the report template; compare apples to apples week over week
Combine with: notion

Combinations

With other MCPs for 10x impact

opik + github

When a prompt regresses, open a GitHub issue with the failing trace

If pass rate drops >5% on 'support-eval-v1' vs last week, create a GitHub issue with the top 3 failing trace IDs.
opik + notion

Publish weekly LLM health digest to Notion

Compose a Monday digest from last week's Opik metrics and create a Notion page in 'LLM Weekly'.

Tools

What this MCP provides

Tool | Inputs | When to call | Cost
list_projects | workspace_id? | Navigate your workspace | 1 API call
list_traces | project, filter?, start?, end?, limit? | Find traces by time range or content | 1 API call
get_trace | trace_id | Deep-dive a single trace | 1 API call
get_prompt | name, version? | Read a prompt for editing or use in code | 1 API call
create_prompt_version | name, template, message? | Commit a new prompt iteration | 1 API call
create_dataset | name, items[] | Build an eval dataset | 1 API call
get_metrics | project, metric, window | Monitor cost / latency / quality | 1 API call

Costs & limits

What it costs to run

API quota
Opik Cloud has per-plan limits; self-hosted is unlimited
Tokens per call
Trace listings 1k-5k tokens; single traces 500-3000 tokens
Cost in €
Opik has a generous free tier; paid plans for scale. The MCP server itself is free (Apache 2.0).
Tip
Use list_traces with a time window; never call without a range on a busy project.
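
The time-window tip in practice: compute an explicit 7-day range and pass it as bounded start/end arguments instead of letting list_traces scan everything. A sketch (ISO-8601 timestamps and these exact argument names are assumptions; check the tool's expected format):

```python
from datetime import datetime, timedelta, timezone

def last_n_days_window(days: int = 7) -> dict:
    """Build bounded arguments for a list_traces-style call."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    return {
        "project": "prod-chatbot",  # example project name
        "start": start.isoformat(timespec="seconds"),
        "end": end.isoformat(timespec="seconds"),
        "limit": 200,               # cap the payload; page if you need more
    }

print(last_n_days_window())
```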

Security

Permissions, secrets, scope

Minimal scopes: scope your Opik API key to the workspace you intend to expose
Credential storage: OPIK_API_KEY env var; the HTTP transport uses an Authorization: Bearer header
Data egress: traces may contain prompts/responses with PII — understand your Opik region and redaction setup
Never grant: an admin-scoped key on a shared dev machine

Troubleshooting

Common errors and fixes

401 Unauthorized (Bearer)

Check OPIK_API_KEY. For self-hosted, also set --apiUrl http://host:5173/api.

Verify: curl -H "Authorization: Bearer $KEY" "$URL/api/v1/workspaces" (double quotes, so the shell expands $KEY and $URL)
Empty trace list despite traffic

Wrong project / workspace. List projects first and confirm UUID.

Self-hosted MCP can't reach backend

Use container networking (same docker network) or map --apiUrl to an externally-reachable URL.

Alternatives

opik-mcp vs. others

Alternative | When instead | Trade-off
LangSmith MCP | You use LangSmith for tracing | Different platform; similar capabilities
Langfuse MCP | You use Langfuse (OSS) | Also OSS + self-hostable; different schemas
Arize / Phoenix | You want a focus on evals + drift detection | Richer ML-monitoring features; steeper learning curve

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and Skills