
opik-mcp

by comet-ml · comet-ml/opik-mcp

Comet's official Opik MCP — manage prompts, projects, traces, and metrics of your LLM apps from Claude or Cursor without switching tabs.

Opik is an LLM observability platform (prompts, traces, evals, datasets). This official MCP gives your IDE/agent access to those primitives: list traces, pull prompts, create datasets, inspect metrics. Works with Opik Cloud or self-hosted.
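
To sanity-check the server before wiring it into a client, you can launch it directly over stdio (a sketch; the key value is a placeholder for your real Opik API key):

```shell
# Run the MCP server locally; OPIK_API_KEY is the env var it reads for auth.
OPIK_API_KEY="your-api-key" npx -y opik-mcp
```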


Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.
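
If your setup needs authentication, the key can be supplied through an env block in the same entry (a sketch; the key value is a placeholder, and OPIK_API_KEY is the variable named in the Security section):

```json
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": ["-y", "opik-mcp"],
      "env": {
        "OPIK_API_KEY": "your-api-key"
      }
    }
  }
}
```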

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "opik",
      "command": "npx",
      "args": [
        "-y",
        "opik-mcp"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "opik": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "opik-mcp"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add opik -- npx -y opik-mcp

One-liner. Verify with claude mcp list. Remove with claude mcp remove opik.
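
Claude Code's CLI also accepts -e to set env vars in the same one-liner (a sketch; the key value is a placeholder):

```shell
claude mcp add opik -e OPIK_API_KEY=your-api-key -- npx -y opik-mcp
```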

Use Cases

Real-world ways to use opik-mcp

Pull a production trace into your IDE to debug a bad LLM response

👤 LLM app developers ⏱ ~15 min intermediate

When to use: A user reports a wrong answer; the trace is in Opik; you want to inspect it without leaving Cursor.

Prerequisites
  • Opik API key — comet.com/site > API Keys (or self-hosted admin)
Flow
  1. Find the trace
    Search traces in project 'prod-chatbot' where output contains 'I cannot help with that'. Last 24h.
    → Matching trace IDs + timestamps
  2. Inspect
    Open trace ID abc123. Show me the full message chain, tools called, and intermediate reasoning.
    → Full trace object
  3. Form hypothesis
    Why might the model have refused? Compare this trace to a successful one on the same prompt template.
    → Diff + hypothesis

Outcome: Faster trace-driven debugging without app-switching.

Pitfalls
  • PII in traces — Configure Opik's redaction before enabling MCP access broadly

Iterate on a prompt template with version tracking

👤 Prompt engineers ⏱ ~25 min advanced

When to use: You're tuning a system prompt and want each version saved to Opik for rollback.

Flow
  1. Pull current version
    Get latest version of prompt 'support-agent-system'.
    → Current prompt body
  2. Edit and commit
    Propose a change to handle escalations better. Show diff. Commit as a new version with message 'add escalation path'.
    → Diff + new version ID
  3. Eval against dataset
    Run this new version against dataset 'support-eval-v1'. Compare pass rate vs previous version.
    → Metric comparison

Outcome: Data-driven prompt changes, version-controlled.

Pitfalls
  • No guardrails, so a regressive prompt reaches prod — Use Opik's experiment framework: don't promote until pass rate ≥ baseline
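
The guardrail in that pitfall can be sketched as a simple promotion gate (hypothetical example values; in practice both rates would come from the Opik eval comparison in step 3):

```shell
# Promotion gate: hold the new prompt version unless its eval pass rate
# meets or beats the baseline. Values below are illustrative placeholders.
baseline=0.82
new_rate=0.79
if awk -v n="$new_rate" -v b="$baseline" 'BEGIN { exit !(n >= b) }'; then
  decision="promote"
else
  decision="hold"
fi
echo "$decision"   # -> hold (0.79 < 0.82)
```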

Generate a weekly LLM app health report

👤 Eng leads, LLM app PMs ⏱ ~30 min intermediate

When to use: You want a Monday-morning digest of cost, latency, error rate, and top failure categories.

Flow
  1. Pull last week's metrics
    For project 'prod-chatbot': total traces, total tokens, avg latency p50/p95, error count — over last 7 days.
    → Metrics block
  2. Classify failures
    Sample 20 failed traces. Cluster by failure mode. Rank clusters by frequency.
    → Failure taxonomy
  3. Write the digest
    Compose a Markdown digest with the metrics and top 3 failure modes, ready for Slack.
    → Shareable report

Outcome: Weekly LLM ops awareness without manual dashboard time.

Pitfalls
  • Metric drift as your app evolves — Version the report template; compare apples to apples week over week
Combine with: notion

Combinations

Pair with other MCPs for 10x leverage

opik + github

When a prompt regresses, open a GitHub issue with the failing trace

If pass rate drops >5% on 'support-eval-v1' vs last week, create a GitHub issue with the top 3 failing trace IDs.
opik + notion

Publish weekly LLM health digest to Notion

Compose a Monday digest from last week's Opik metrics and create a Notion page in 'LLM Weekly'.

Tools

What this MCP exposes

Tool | Inputs | When to call | Cost
list_projects | workspace_id? | Navigate your workspace | 1 API call
list_traces | project, filter?, start?, end?, limit? | Find traces by time range or content | 1 API call
get_trace | trace_id | Deep-dive a single trace | 1 API call
get_prompt | name, version? | Read a prompt for editing or use in code | 1 API call
create_prompt_version | name, template, message? | Commit a new prompt iteration | 1 API call
create_dataset | name, items[] | Build an eval dataset | 1 API call
get_metrics | project, metric, window | Monitor cost / latency / quality | 1 API call

Cost & Limits

What this costs to run

  • API quota: Opik Cloud has per-plan limits; self-hosted is unlimited
  • Tokens per call: trace listings 1k-5k tokens; single traces 500-3,000
  • Monetary: Opik has a generous free tier; paid plans for scale. The MCP itself is free (Apache 2.0).
  • Tip: use list_traces with a time window; never call it without a range on a busy project

Security

Permissions, secrets, blast radius

Minimum scopes: scope the Opik API key to the workspace you intend to expose
Credential storage: OPIK_API_KEY env var; HTTP transport uses Authorization: Bearer
Data egress: Traces may contain prompts/responses with PII — understand your Opik region and redaction setup
Never grant: An admin-scope key to a shared dev machine

Troubleshooting

Common errors and fixes

401 Unauthorized (Bearer)

Check OPIK_API_KEY. For self-hosted, also set --apiUrl http://host:5173/api.

Verify: curl -H "Authorization: Bearer $KEY" "$URL/api/v1/workspaces"
Empty trace list despite traffic

Wrong project / workspace. List projects first and confirm UUID.

Self-hosted MCP can't reach backend

Use container networking (same docker network) or map --apiUrl to an externally-reachable URL.
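
Putting the self-hosted pieces together, a client entry might look like this (a sketch; host and port are placeholders, and --apiUrl is the flag mentioned in the 401 fix above):

```json
{
  "mcpServers": {
    "opik": {
      "command": "npx",
      "args": ["-y", "opik-mcp", "--apiUrl", "http://localhost:5173/api"],
      "env": {
        "OPIK_API_KEY": "your-api-key"
      }
    }
  }
}
```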

Alternatives

opik-mcp vs others

Alternative | When to use it instead | Tradeoff
LangSmith MCP | You use LangSmith for tracing | Different platform; similar capabilities
Langfuse MCP | You use Langfuse (OSS) | Also OSS + self-hostable; different schemas
Arize / Phoenix | You want focus on evals + drift detection | Richer ML-monitoring features; steeper learning curve

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues
