
mcp-documentation-server

by andrea9293 · andrea9293/mcp-documentation-server

Drop PDFs, Markdown, and text docs into a local vector store — then ask your AI questions with hybrid search. No cloud required.

mcp-documentation-server by andrea9293 is a local RAG server. Drag and drop .txt / .md / .pdf files via a web UI (port 3080), or feed them in through tools. Hybrid full-text + vector search with parent-child chunking. Runs fully locally with built-in embeddings; a Gemini key is optional for smarter retrieval.

Why use it

Key features

Live demo

In practice

Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project-level config overrides the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  }
}

Same structure as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "documentation-server",
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  ]
}

Continue uses an array of server objects instead of a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "documentation-server": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "mcp-documentation-server"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads on save.

claude mcp add documentation-server -- npx -y mcp-documentation-server

A one-liner. Verify with claude mcp list. Remove with claude mcp remove.

Use cases

Hands-on usage: mcp-documentation-server

Make a new framework's docs queryable by your AI

👤 Devs adopting a new library · ⏱ ~20 min · beginner

When to use: Official docs are huge; you want Claude to answer with grounded citations.

Prerequisites
  • mcp-documentation-server installed — npx -y @andrea9293/mcp-documentation-server
Steps
  1. Ingest the docs
    Upload the library's .md doc files to the dashboard at http://localhost:3080.
    → Files processed into chunks
  2. Ask targeted questions
    search_documents for 'how to configure middleware' — give me the top 3 chunks with source paths.
    → Cited excerpts
  3. Ask for grounded synthesis
    Given those chunks, write the minimum viable config for middleware in this framework.
    → Working config backed by cited doc lines

Result: A personal docs assistant that cites its sources.

Pitfalls
  • PDFs of scanned images aren't OCR'd — pre-OCR with tools like ocrmypdf before upload
  • Large doc sets without Gemini give noisy embeddings — an optional GEMINI_API_KEY unlocks higher-quality semantic search
Combine with: filesystem
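The OCR pitfall above can be handled up front. A minimal sketch, assuming ocrmypdf is installed and with illustrative paths (not part of the server):

```shell
# Batch pre-OCR scanned PDFs before uploading them to the dashboard.
# --skip-text leaves pages that already have a text layer untouched.
for pdf in ~/Papers/scans/*.pdf; do
  [ -e "$pdf" ] || continue          # glob matched nothing; skip
  ocrmypdf --skip-text "$pdf" "${pdf%.pdf}.ocr.pdf"
done
```

Upload the resulting .ocr.pdf files instead of the originals.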

Turn an internal wiki export into a RAG source

👤 Teams with markdown-export-friendly wikis ⏱ ~25 min beginner

When to use: You've exported your Notion/Confluence content as Markdown and want AI access.

Steps
  1. Bulk-ingest via process_uploads
    process_uploads on ./wiki-export/ — process every .md.
    → Document count per folder
  2. Full-scope search
    search_all_documents: 'deployment runbook' — top 5.
    → Ranked list
    → Ranked list

Result: A local, private, searchable wiki.

Build a personal research-paper library

👤 Researchers, students · ⏱ ~30 min · beginner

When to use: You download papers and want them queryable instead of piled up in Downloads/.

Steps
  1. Drop PDFs in
    Upload all PDFs in ~/Papers/ to the documentation server.
    → Papers chunked and indexed
  2. Ask across the corpus
    search_documents: 'attention variants with lower quadratic cost' — return authors + years.
    → Cited excerpts

Result: A local mini-Perplexity over your own paper collection.

Combinations

With other MCPs for 10x the impact

documentation-server + filesystem

Automate ingest from a watched folder

Every time a new PDF lands in ~/Papers/Inbox, process_uploads it into the documentation server.
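The watch step can be scripted. A sketch under assumptions: the uploads-folder path below is hypothetical, so check the server's README for where it actually reads uploads; afterwards, ask your AI to call process_uploads.

```shell
# Move new PDFs from an inbox into the server's uploads folder.
# The destination path is an assumption -- verify it against the README.
sync_inbox() {
  local src=$1 dst=$2
  for f in "$src"/*.pdf; do
    [ -e "$f" ] || continue   # unmatched glob: nothing to move
    mv -- "$f" "$dst"/
  done
}

sync_inbox ~/Papers/Inbox ~/.mcp-documentation-server/uploads
```

Schedule it with cron, or wrap it in `while true; do …; sleep 60; done` for a simple poll.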
documentation-server + swarmvault

Compare: documentation-server is quick-ingest; swarmvault builds a structured wiki

Ingest my research PDFs into both systems; compare retrieval quality on the same query.

Tools

What this MCP provides

Tool · Inputs · When to call · Cost
add_document · title, content, metadata? · Programmatic ingest · free (local embeddings)
list_documents · (none) · See what's indexed · free
get_document · id · Retrieve a specific doc · free
delete_document · id · Pruning · free
search_documents · query, top_k? · Query within a specific doc set · free
search_all_documents · query, top_k? · Global RAG query · free
get_context_window · chunk_id · Expand a narrow hit into broader context · free
search_documents_with_ai · query · One-shot answer synthesis · Gemini call (needs key)
process_uploads · path?: str · Batch import from the uploads folder · free

Costs & limits

What it costs to run

API quota
None if local; Gemini usage if GEMINI_API_KEY is set
Tokens per call
Search returns 500-3000 tokens depending on top_k
Cost in €
Free; Gemini is paid per call if enabled
Tip
Skip Gemini for exploratory work — local embeddings are good enough for known-item lookups.
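If you do enable Gemini, the key can be passed through the client config's env block. A sketch against the Claude Desktop entry from the install section (the env field is part of that config schema; the GEMINI_API_KEY name comes from this page — substitute your own key):

```json
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": ["-y", "mcp-documentation-server"],
      "env": { "GEMINI_API_KEY": "<your-key>" }
    }
  }
}
```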

Security

Permissions, secrets, scope

Credential storage: GEMINI_API_KEY (optional) in env
Data egress: local only unless Gemini is enabled; dashboard on port 3080

Troubleshooting

Common errors and fixes

Port 3080 in use

Set WEB_PORT env var to another port.

Check: lsof -i :3080
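One way to set that, sketched against the Claude Desktop entry from the install section (the WEB_PORT name comes from this page; pick any free port):

```json
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": ["-y", "mcp-documentation-server"],
      "env": { "WEB_PORT": "3081" }
    }
  }
}
```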
PDF parse error

Password-protected or scanned PDFs fail. Remove password or run OCR first.

Check: try with a plain PDF first
search returns empty

Check that documents were ingested with list_documents. If the list is empty, re-run process_uploads.

Check: list_documents

Alternatives

mcp-documentation-server vs. others

Alternative · When instead · Trade-off
swarmvault · You want a structured wiki + knowledge graph, not just search · Heavier; more upfront setup
Cloud RAG (Pinecone, Weaviate) · You need team sharing and scale · Paid; data leaves your machine
llm-context.py · You want per-task context, not persistent doc retrieval · Different problem

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and skills