● Official mongodb-js 🔑 Requires your own key

MongoDB

by mongodb-js · mongodb-js/mongodb-mcp-server

Let Claude query, aggregate, and administer MongoDB Atlas or self-hosted clusters — with read-only defaults you can relax per-tool.

MongoDB's official MCP server covers both the driver (CRUD + aggregation on any cluster) and the Atlas control plane (list projects, clusters, users). By default it runs in read-only mode; enable writes explicitly per command family when you need them.

Why use it

Key features

Live demo

In practice


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "mongodb": {
      "command": "npx",
      "args": [
        "-y",
        "mongodb-mcp-server"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "mongodb": {
      "command": "npx",
      "args": [
        "-y",
        "mongodb-mcp-server"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config overrides the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "mongodb": {
      "command": "npx",
      "args": [
        "-y",
        "mongodb-mcp-server"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "mongodb": {
      "command": "npx",
      "args": [
        "-y",
        "mongodb-mcp-server"
      ]
    }
  }
}

Same structure as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "mongodb",
      "command": "npx",
      "args": [
        "-y",
        "mongodb-mcp-server"
      ]
    }
  ]
}

Continue uses an array of server objects instead of a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "mongodb": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "mongodb-mcp-server"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads on save.

claude mcp add mongodb -- npx -y mongodb-mcp-server

One-liner. Verify with claude mcp list. Remove with claude mcp remove.
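If you want the connection string and the read-only default wired into the client config itself, the same JSON shape accepts an env block and extra args. A sketch only: the cluster URL is a placeholder, and the --read-only flag and MDB_MCP_CONNECTION_STRING variable are the ones referenced elsewhere on this page.

```json
{
  "mcpServers": {
    "mongodb": {
      "command": "npx",
      "args": ["-y", "mongodb-mcp-server", "--read-only"],
      "env": {
        "MDB_MCP_CONNECTION_STRING": "mongodb+srv://USER:PASS@YOUR-CLUSTER.mongodb.net/"
      }
    }
  }
}
```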

Use cases

Hands-on usage: MongoDB

Answer business questions with Mongo aggregation pipelines

👤 PMs and analysts on a Mongo-backed product ⏱ ~15 min · beginner

When to use: You need counts, funnels, or top-N lists and don't want to learn $group/$lookup syntax.

Prerequisites
  • Read-only connection string — Atlas: create a DB user with readAnyDatabase. Self-hosted: a user with the read role on the relevant DBs.
Steps
  1. Discover collections
    List databases, then for app_prod list all collections and their approximate document counts.
    → Collection catalog
  2. Sample and infer schema
    Sample 20 docs from users and orders. Describe the fields and types you see.
    → Per-collection schema description
  3. Run the actual aggregation
    How many orders were placed per country in the last 30 days? Sort desc and limit 20.
    → Results table with the pipeline used

Result: Business answers with the exact pipeline preserved for re-running.

Pitfalls
  • Aggregations without indexes can scan huge collections — always check .explain() first and make sure a supporting index exists; otherwise put a narrow $match on an indexed field at the top of the pipeline.
Combine with: notion
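The three steps above converge on a single pipeline. A minimal sketch of what it might look like, with a pure-Python stand-in for the grouping stages — the field names country and created_at are assumptions, not something the server dictates:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical "orders per country, last 30 days" pipeline (field names assumed).
cutoff = datetime.now(timezone.utc) - timedelta(days=30)
pipeline = [
    {"$match": {"created_at": {"$gte": cutoff}}},   # narrow first, on an indexed field
    {"$group": {"_id": "$country", "orders": {"$sum": 1}}},
    {"$sort": {"orders": -1}},
    {"$limit": 20},
]

def run_group_stage(docs):
    """Pure-Python equivalent of the $group/$sort/$limit stages, for illustration."""
    counts = {}
    for d in docs:
        counts[d["country"]] = counts.get(d["country"], 0) + 1
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    return ranked[:20]

sample = [{"country": "DE"}, {"country": "DE"}, {"country": "US"}]
print(run_group_stage(sample))  # [('DE', 2), ('US', 1)]
```

Asking Claude to echo the pipeline it ran (as in step 3's output) gives you exactly this artifact to re-run later.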

Infer and document a messy collection's actual schema

👤 New engineer onboarding to an undocumented Mongo ⏱ ~25 min · intermediate

When to use: The collection was 'schemaless' for 3 years and nobody knows which fields actually exist.

Steps
  1. Sample broadly
    Sample 500 docs from events. For each top-level field, report presence %, type(s), and a sample value.
    → Field-by-field presence/type matrix
  2. Find schema drift
    Which fields have multiple types across docs? Group by (field, type) and count.
    → List of polymorphic fields
  3. Produce a TypeScript type or JSON schema
    Generate a TypeScript interface for the 'stable' fields (≥95% presence, single type). Mark the rest as optional or unknown.
    → Usable type definition

Result: A documented schema with known quirks — the basis for migrations or a validator.

Pitfalls
  • 500 docs may miss rare but important variants — sample by time bucket (one per month) to catch legacy shapes.
Combine with: filesystem
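The presence/type matrix from steps 1–2 is mechanical once you have the sample. An illustrative implementation, not the MCP's own; the sample docs and field names are made up:

```python
def infer_schema(docs):
    """Report presence % and observed types per top-level field (sketch)."""
    total = len(docs)
    fields = {}
    for doc in docs:
        for key, value in doc.items():
            info = fields.setdefault(key, {"count": 0, "types": set()})
            info["count"] += 1
            info["types"].add(type(value).__name__)
    return {
        k: {
            "presence": round(100 * v["count"] / total),
            "types": sorted(v["types"]),
            "polymorphic": len(v["types"]) > 1,   # step 2: schema drift
        }
        for k, v in fields.items()
    }

sample = [
    {"_id": 1, "ts": 1700000000, "user": "a"},
    {"_id": 2, "ts": "2023-11-14", "user": "b"},  # legacy string timestamp
    {"_id": 3, "user": "c"},
]
report = infer_schema(sample)
# ts: present in 2/3 docs with both int and str types -> polymorphic
```

Fields at ≥95% presence with a single type are the candidates for the "stable" TypeScript interface in step 3.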

Audit your Atlas projects for security and cost

👤 DevOps/platform teams on Atlas ⏱ ~20 min · intermediate

When to use: Quarterly: check which clusters are oversized, which have wide IP allowlists, and who has access.

Prerequisites
  • Atlas API public+private key — cloud.mongodb.com → Organization Access → API keys; project-scoped
Steps
  1. List projects + clusters
    List every project and, within each, every cluster with its tier, region, and backup status.
    → Full inventory
  2. Flag risky access
    For each project, dump the IP access list. Flag any entry of 0.0.0.0/0 with the project name.
    → Risky-access report
  3. Suggest rightsizing
    Any cluster on M30+ with fewer than 10 GB used? Recommend downgrades.
    → Cost-savings list

Result: A short remediation list for your security + finance folks.

Pitfalls
  • API key scope too narrow to see every project — use an Organization-level key in read-only mode rather than a project-level one.
Combine with: notion
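Step 2's flagging logic amounts to checking CIDR width. A small sketch of that check — the "very wide" threshold of /16 is an arbitrary assumption, and the entries are invented:

```python
import ipaddress

def flag_risky_entries(access_list, max_prefix=16):
    """Flag open-to-world or very wide allowlist entries (thresholds assumed)."""
    risky = []
    for project, cidr in access_list:
        net = ipaddress.ip_network(cidr)
        if net.prefixlen == 0:
            risky.append((project, cidr, "open to the world"))
        elif net.prefixlen < max_prefix:
            risky.append((project, cidr, "very wide range"))
    return risky

entries = [
    ("payments", "0.0.0.0/0"),      # the exact pattern step 2 hunts for
    ("analytics", "10.0.0.0/8"),
    ("internal", "203.0.113.7/32"),
]
print(flag_risky_entries(entries))
```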

Propose and execute a one-off data cleanup safely

👤 Backend engineer fixing a data bug ⏱ ~30 min · advanced

When to use: A bug caused bad writes; you need to fix ~10k docs but must not nuke the wrong ones.

Prerequisites
  • Writable user (scoped to the target DB only) — Atlas: role with readWrite on that one DB, nothing else
  • --read-only OFF explicitly for this session — start the MCP without --read-only
Steps
  1. Scope the fix with a count
    Count docs in users where status='active' AND last_login IS NULL AND created_at < 2024-01-01. Don't modify anything.
    → Expected-affected count, e.g. 9,873
  2. Dry-run the update
    Show the updateMany pipeline you would run (filter + $set), and show 5 sample docs that would be changed. Do NOT execute.
    → Filter + set preview
  3. Execute with a limit and verify
    Run the update. Then re-run the original count — it should be 0. Report matchedCount and modifiedCount.
    → Counts match expectation; verify query returns 0

Result: A clean, auditable fix with counts before and after.

Pitfalls
  • updateMany with a bad filter nukes the whole collection — always run the filter as countDocuments first; if the count is surprising, stop and investigate.
  • No backup of the affected slice — copy the matching docs to a <collection>_backup_<date> collection before updating.
Combine with: filesystem
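The count → backup → update → verify discipline above can be sketched against an in-memory collection. The filter mirrors step 1, but the field names and the $set target are assumptions:

```python
from datetime import datetime

def matches(doc):
    """The cleanup filter from step 1 (field names are assumptions)."""
    return (doc["status"] == "active"
            and doc["last_login"] is None
            and doc["created_at"] < datetime(2024, 1, 1))

def safe_cleanup(collection):
    """Count -> backup -> update -> verify, mirroring the steps above."""
    affected = [d for d in collection if matches(d)]
    expected = len(affected)                          # step 1: scope with a count
    backup = [dict(d) for d in affected]              # copy the slice before touching it
    modified = 0
    for d in collection:                              # step 3: the updateMany equivalent
        if matches(d):
            d["status"] = "inactive"                  # hypothetical $set
            modified += 1
    remaining = sum(matches(d) for d in collection)   # re-run the original count
    assert modified == expected and remaining == 0    # verify before declaring done
    return expected, modified, backup

docs = [
    {"status": "active", "last_login": None, "created_at": datetime(2023, 5, 1)},
    {"status": "active", "last_login": "2024-02-01", "created_at": datetime(2023, 5, 1)},
]
print(safe_cleanup(docs)[:2])  # (1, 1)
```

If expected and modified diverge, or the re-run count isn't 0, stop: the filter is not what you thought it was.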

Suggest missing indexes from slow-query patterns

👤 Backend engineers hunting perf issues ⏱ ~25 min · advanced

When to use: Your app is slow on Mongo queries; you want a targeted index plan, not a shotgun.

Steps
  1. Check existing indexes
    For orders and users, list every index with its keys and size on disk.
    → Index catalog
  2. Profile a specific query
    Run .explain('executionStats') on this query [paste]. Report totalDocsExamined vs nReturned and the winning plan stage.
    → Explain output
  3. Propose the smallest new index
    Given that plan, propose exactly one index that would convert this to an IXSCAN. Justify the field order.
    → Concrete createIndex command with rationale

Result: A single, justifiable index recommendation per slow query — not a wall of them.

Pitfalls
  • Over-indexing kills write throughput and balloons storage — only add an index that serves >1 high-traffic query; compound indexes should follow ESR (Equality, Sort, Range) order.
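The ESR rule from the pitfall can be applied mechanically once a query's predicates are classified. A sketch; the example query and its fields are hypothetical:

```python
def esr_index(equality, sort, range_):
    """Order compound-index keys by Equality, Sort, Range (the ESR rule)."""
    keys = []
    keys += [(f, 1) for f in equality]   # equality predicates first
    keys += sort                         # then the sort spec, preserving direction
    keys += [(f, 1) for f in range_]     # range predicates last
    return keys

# Hypothetical slow query: {status: "paid", created_at: {$gte: ...}}, sorted by amount desc
print(esr_index(equality=["status"], sort=[("amount", -1)], range_=["created_at"]))
# [('status', 1), ('amount', -1), ('created_at', 1)]
```

The output is the key list you would hand to createIndex in step 3 — one index, with the field order justified by ESR.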

Combinations

With other MCPs for 10x impact

mongodb + notion

Aggregate, then post a shareable report

Compute MAU per plan tier for the last 6 months and create a Notion page in 'Growth / Monthly' with the results as a table.
mongodb + filesystem

Backup a collection slice as JSONL before a cleanup

Find all docs in users matching <filter>, save to /backups/users-cleanup-<date>.jsonl, then delete them.
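The backup-then-delete pattern in this combo can be sketched without touching a real cluster. It writes to an in-memory buffer here; a real run would write to the backup path via the filesystem MCP:

```python
import io
import json

def backup_matching(docs, predicate, out):
    """Write matching docs as JSONL to `out`, return the survivors plus a count."""
    kept, backed_up = [], 0
    for d in docs:
        if predicate(d):
            out.write(json.dumps(d) + "\n")   # one JSON object per line (JSONL)
            backed_up += 1
        else:
            kept.append(d)
    return kept, backed_up

buf = io.StringIO()
remaining, n = backup_matching(
    [{"user": "a", "flagged": True}, {"user": "b", "flagged": False}],
    lambda d: d["flagged"],
    buf,
)
# n == 1; buf now holds one JSON line; remaining keeps the unflagged doc
```

Only delete after the JSONL file is confirmed written — the backup is worthless if it races the delete.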
mongodb + postgres

Cross-DB reconciliation when migrating off Mongo

For every user_id in Mongo users, check whether a corresponding row exists in Postgres users. Report mismatches.
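Once both id lists are fetched, the reconciliation in this combo is a set difference. A sketch, assuming string user_ids:

```python
def reconcile(mongo_ids, postgres_ids):
    """Report ids present on only one side (set-difference sketch of the prompt above)."""
    mongo, pg = set(mongo_ids), set(postgres_ids)
    return {
        "missing_in_postgres": sorted(mongo - pg),
        "missing_in_mongo": sorted(pg - mongo),
    }

print(reconcile(["u1", "u2", "u3"], ["u2", "u3", "u4"]))
# {'missing_in_postgres': ['u1'], 'missing_in_mongo': ['u4']}
```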

Tools

What this MCP provides

Tool | Inputs | When to call | Cost
list_databases | — | Start of any exploration session | free
list_collections | database: str | Inventory a database | free
find | database, collection, filter?, projection?, sort?, limit? | Read documents — the main read workhorse | free
aggregate | database, collection, pipeline: stage[] | Grouping, joining, analytics | free
count | database, collection, filter? | Always before destructive writes — confirm scope | free
insert_one / insert_many | database, collection, document(s) | Requires --read-only off | writes
update_one / update_many | database, collection, filter, update | Always preview the filter with count first | writes
delete_one / delete_many | database, collection, filter | Dangerous — require explicit user confirmation | writes
list_indexes | database, collection | Perf analysis before suggesting new indexes | free
atlas_list_projects / atlas_list_clusters | — | Atlas control-plane audits | free

Costs & limits

What it costs to run

API quota
Driver: bounded by the cluster connection limit. Atlas API: 100 req/min per key.
Tokens per call
Find/aggregate: scales with result size; use projections and limits.
Cost in €
Free against your existing cluster. Atlas has a free M0 tier for testing.
Tip
Always project only the fields you need; unbounded finds return large docs that chew through context and egress.

Security

Permissions, secrets, scope

Minimal scopes: readAnyDatabase (read-only) or read on specific DBs
Credential storage: MDB_MCP_CONNECTION_STRING for the driver; MDB_MCP_API_CLIENT_ID + MDB_MCP_API_CLIENT_SECRET for Atlas
Data egress: the driver connects to your cluster; the Atlas API to cloud.mongodb.com only
Never grant: dbAdminAnyDatabase, userAdminAnyDatabase, root

Troubleshooting

Common errors and fixes

MongoServerError: Authentication failed

The connection string's user/password is wrong, or the user's auth database doesn't match. Add ?authSource=admin for Atlas.

Check: mongosh "$MDB_MCP_CONNECTION_STRING" --eval 'db.runCommand({ping:1})'

MongoNetworkError: ETIMEDOUT

Your IP is not in the Atlas allowlist. Add your current IP in Atlas → Network Access.

Check: curl ifconfig.me, then compare.

not authorized on admin to execute command listDatabases

The role is too narrow. Grant clusterMonitor, or scope the tools to a specific DB via listCollections instead.

Write rejected / running in read-only mode

Restart the MCP without --read-only; only do this for the specific fix session.

Alternatives

MongoDB vs. others

Alternative | When to use instead | Trade-off
Postgres MCP | You're on Postgres, or considering migrating from Mongo | Relational — document-style flexibility is gone
DBHub | You need one MCP for Mongo + several SQL DBs | Shallower Mongo feature coverage than the official server

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and skills