
opik-mcp

by comet-ml · comet-ml/opik-mcp

Comet's official Opik MCP — manage prompts, projects, traces, and metrics of your LLM apps from Claude or Cursor without switching tabs.

Opik is an LLM observability platform (prompts, traces, evals, datasets). This official MCP gives your IDE/agent access to those primitives: list traces, pull prompts, create datasets, inspect metrics. Works with Opik Cloud or self-hosted.


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.
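The server reads credentials from the environment (see the Security section). A sketch of the same config with an env block added; the `OPIK_API_KEY` variable name comes from this page, and the placeholder value is yours to fill in:

```json
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": ["-y", "opik-mcp"],
      "env": {
        "OPIK_API_KEY": "<your-api-key>"
      }
    }
  }
}
```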

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project-level config overrides the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "opik",
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "opik": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "opik-mcp"
        ]
      }
    }
  }
}

Add it under context_servers. Zed hot-reloads on save.

claude mcp add opik -- npx -y opik-mcp

One line. Verify with claude mcp list. Remove with claude mcp remove.

Use cases

Concrete uses: opik-mcp

Pull a production trace into your IDE to debug a bad LLM response

👤 LLM app developers ⏱ ~15 min intermediate

When to use it: A user reports a wrong answer; the trace is in Opik; you want to inspect it without leaving Cursor.

Prerequisites
  • Opik API key — comet.com/site > API Keys (or self-hosted admin)
Steps
  1. Find the trace
    Search traces in project 'prod-chatbot' where output contains 'I cannot help with that'. Last 24h.
    → Matching trace IDs + timestamps
  2. Inspect
    Open trace ID abc123. Show me the full message chain, tools called, and intermediate reasoning.
    → Full trace object
  3. Form hypothesis
    Why might the model have refused? Compare this trace to a successful one on the same prompt template.
    → Diff + hypothesis

Result: Faster trace-driven debugging without app-switching.

Pitfalls
  • PII in traces — Configure Opik's redaction before enabling MCP access broadly
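Redaction itself is configured on the Opik side, but the idea can be illustrated client-side. A minimal sketch that masks email addresses in trace text before it is pasted anywhere; the regex and function name are illustrative, not part of opik-mcp:

```python
import re

# Illustrative only: Opik-side redaction should be the real control.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(text: str) -> str:
    """Mask email addresses before sharing trace excerpts."""
    return EMAIL.sub("[REDACTED_EMAIL]", text)

print(redact("User jane.doe@example.com reported a refusal"))
# → User [REDACTED_EMAIL] reported a refusal
```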

Iterate on a prompt template with version tracking

👤 Prompt engineers ⏱ ~25 min advanced

When to use it: You're tuning a system prompt and want each version saved to Opik for rollback.

Steps
  1. Pull current version
    Get latest version of prompt 'support-agent-system'.
    → Current prompt body
  2. Edit and commit
    Propose a change to handle escalations better. Show diff. Commit as a new version with message 'add escalation path'.
    → Diff + new version ID
  3. Eval against dataset
    Run this new version against dataset 'support-eval-v1'. Compare pass rate vs previous version.
    → Metric comparison

Result: Data-driven prompt changes, version-controlled.

Pitfalls
  • No guardrails — a regressive prompt becomes prod — Use Opik's experiment framework: don't promote until pass rate ≥ baseline
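The guardrail above reduces to a simple gate: compute both pass rates and refuse to promote unless the candidate meets the baseline. A sketch of the threshold logic only; the pass rates themselves would come from the eval step:

```python
def should_promote(candidate_pass_rate: float, baseline_pass_rate: float) -> bool:
    """Gate: only promote a prompt version that matches or beats the baseline."""
    return candidate_pass_rate >= baseline_pass_rate

# e.g. baseline 0.82, candidate 0.79 → keep the old version
print(should_promote(0.79, 0.82))  # → False
print(should_promote(0.86, 0.82))  # → True
```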

Generate a weekly LLM app health report

👤 Eng leads, LLM app PMs ⏱ ~30 min intermediate

When to use it: You want a Monday-morning digest of cost, latency, error rate, and top failure categories.

Steps
  1. Pull last week's metrics
    For project 'prod-chatbot': total traces, total tokens, avg latency p50/p95, error count — over last 7 days.
    → Metrics block
  2. Classify failures
    Sample 20 failed traces. Cluster by failure mode. Rank clusters by frequency.
    → Failure taxonomy
  3. Write the digest
    Compose a Markdown digest with the metrics and top 3 failure modes, ready for Slack.
    → Shareable report

Result: Weekly LLM ops awareness without manual dashboard time.

Pitfalls
  • Metric drift as your app evolves — Version the report template; compare apples to apples week over week
Combine with: notion
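The "Write the digest" step is plain templating over whatever metrics the MCP returns. A sketch with hypothetical metric keys and failure modes; the real field names depend on the get_metrics output:

```python
# Hypothetical keys and values: the actual shape depends on get_metrics output.
metrics = {"traces": 1240, "tokens": 3_500_000, "p50_ms": 820, "p95_ms": 2400, "errors": 37}
failure_modes = ["refusal on benign queries", "tool-call timeout", "hallucinated citation"]

lines = [
    "## LLM Weekly: prod-chatbot",
    f"- Traces: {metrics['traces']} | Tokens: {metrics['tokens']:,}",
    f"- Latency p50/p95: {metrics['p50_ms']} ms / {metrics['p95_ms']} ms",
    f"- Errors: {metrics['errors']}",
    "### Top failure modes",
    *(f"{i}. {mode}" for i, mode in enumerate(failure_modes, 1)),
]
digest = "\n".join(lines)
print(digest)
```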

Combinations

Pair it with other MCPs for a 10x effect

opik + github

When a prompt regresses, open a GitHub issue with the failing trace

If pass rate drops >5% on 'support-eval-v1' vs last week, create a GitHub issue with the top 3 failing trace IDs.
opik + notion

Publish weekly LLM health digest to Notion

Compose a Monday digest from last week's Opik metrics and create a Notion page in 'LLM Weekly'.

Tools

What this MCP exposes

| Tool | Inputs | When to call | Cost |
| --- | --- | --- | --- |
| list_projects | workspace_id? | Navigate your workspace | 1 API call |
| list_traces | project, filter?, start?, end?, limit? | Find traces by time range or content | 1 API call |
| get_trace | trace_id | Deep-dive a single trace | 1 API call |
| get_prompt | name, version? | Read a prompt for editing or use in code | 1 API call |
| create_prompt_version | name, template, message? | Commit a new prompt iteration | 1 API call |
| create_dataset | name, items[] | Build an eval dataset | 1 API call |
| get_metrics | project, metric, window | Monitor cost / latency / quality | 1 API call |

Cost and limits

Execution cost

API quota
Opik Cloud has per-plan limits; self-hosted is unlimited
Tokens per call
Trace listings 1k-5k tokens; single traces 500-3000
Monetary
Opik has a generous free tier; paid plans for scale. The MCP itself is free (Apache 2.0).
Tip
Use list_traces with a time window; never call without a range on a busy project.
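The tip above amounts to always computing an explicit window before calling list_traces. A sketch of building such a bounded query; the parameter names are taken from the tools table, and the values are illustrative:

```python
from datetime import datetime, timedelta, timezone

# Always bound list_traces with an explicit window on busy projects.
end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

params = {
    "project": "prod-chatbot",
    "start": start.isoformat(timespec="seconds"),
    "end": end.isoformat(timespec="seconds"),
    "limit": 100,  # cap the listing to keep token usage in the 1k-5k range
}
```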

Security

Permissions, secrets, scope

Minimal scopes: scope the Opik API key to the workspace you intend to expose
Credential storage: OPIK_API_KEY env var; the HTTP transport uses Authorization: Bearer
Data egress: Traces may contain prompts/responses with PII — understand your Opik region and redaction setup
Never grant: an admin-scope key on a shared dev machine

Troubleshooting

Common errors and fixes

401 Unauthorized (Bearer)

Check OPIK_API_KEY. For self-hosted, also set --apiUrl http://host:5173/api.

Verify: curl -H "Authorization: Bearer $KEY" "$URL/api/v1/workspaces" (double quotes so $KEY and $URL expand)
Empty trace list despite traffic

Wrong project / workspace. List projects first and confirm UUID.

Self-hosted MCP can't reach backend

Use container networking (same docker network) or map --apiUrl to an externally-reachable URL.

Alternatives

opik-mcp vs others

| Alternative | When to use | Trade-off |
| --- | --- | --- |
| LangSmith MCP | You use LangSmith for tracing | Different platform; similar capabilities |
| Langfuse MCP | You use Langfuse (OSS) | Also OSS + self-hostable; different schemas |
| Arize / Phoenix | You want focus on evals + drift detection | Richer ML-monitoring features; steeper learning curve |

More

Resources

📖 Read the official README on GitHub

🐙 View open issues
