
mcp-documentation-server

by andrea9293 · andrea9293/mcp-documentation-server

Drop PDFs, Markdown, and text docs into a local vector store — then ask your AI questions with hybrid search. No cloud required.

mcp-documentation-server by andrea9293 is a local RAG server. Drag and drop .txt / .md / .pdf files via a web UI (port 3080), or feed them in via tools. It combines full-text and vector search (hybrid) with parent-child chunking, and runs fully locally with built-in embeddings; a Gemini key is optional for higher-quality retrieval.


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project-level config takes precedence over the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "documentation-server",
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "documentation-server": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "mcp-documentation-server"
        ]
      }
    }
  }
}

Add it under context_servers. Zed hot-reloads on save.

claude mcp add documentation-server -- npx -y mcp-documentation-server

One line. Verify with claude mcp list; remove with claude mcp remove.

Use cases

Practical uses: mcp-documentation-server

Make a new framework's docs queryable by your AI

👤 Devs adopting a new library ⏱ ~20 min beginner

When to use it: Official docs are huge; you want Claude to answer with grounded citations.

Prerequisites
  • mcp-documentation-server installed — npx -y @andrea9293/mcp-documentation-server
Steps
  1. Ingest the docs
    Upload the library's docs .md files to the dashboard at http://localhost:3080.
    → Files processed into chunks
  2. Ask targeted questions
    search_documents for 'how to configure middleware' — give me the top 3 chunks with source paths.
    → Cited excerpts
  3. Ask for grounded synthesis
    Given those chunks, write the minimum viable config for middleware in this framework.
    → Working config backed by cited doc lines

Result: A personal docs assistant that cites its sources.

Pitfalls
  • PDFs with scanned images aren't OCR'd — pre-OCR with tools like ocrmypdf before upload
  • Huge doc sets without Gemini give noisy embeddings — the optional GEMINI_API_KEY unlocks higher-quality semantic search
Combine with: filesystem
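The pre-OCR step from the pitfalls above can be sketched as a small shell helper. Everything here is a hypothetical wrapper except ocrmypdf itself and its --skip-text flag, which leaves PDFs that already contain a text layer untouched:

```shell
# ocr_dir SRC DST [CMD]: run CMD (default: "ocrmypdf --skip-text") on every
# PDF in SRC, writing a text-layer copy into DST.
ocr_dir() {
  src=$1; dst=$2; cmd=${3:-"ocrmypdf --skip-text"}
  mkdir -p "$dst"
  for f in "$src"/*.pdf; do
    [ -e "$f" ] || continue              # SRC may contain no PDFs at all
    $cmd "$f" "$dst/$(basename "$f")"    # e.g. ocrmypdf --skip-text in.pdf out.pdf
  done
}

# Usage: ocr_dir ~/scans ~/scans-ocr, then upload ~/scans-ocr via the dashboard.
```

The optional CMD argument exists only so the OCR step can be swapped or stubbed; in normal use you would omit it.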

Turn an internal wiki export into a RAG source

👤 Teams with markdown-export-friendly wikis ⏱ ~25 min beginner

When to use it: You've exported your Notion/Confluence content as Markdown and want AI access to it.

Steps
  1. Bulk-ingest via process_uploads
    process_uploads on ./wiki-export/ — process every .md.
    → Document count per folder
  2. Full-scope search
    search_all_documents: 'deployment runbook' — top 5.
    → Ranked list

Result: A local, private, searchable wiki.

Build a personal research-paper library

👤 Researchers, students ⏱ ~30 min beginner

When to use it: You download papers and want them queryable instead of piling up in Downloads/.

Steps
  1. Drop PDFs in
    Upload all PDFs in ~/Papers/ to the documentation server.
    → Papers chunked and indexed
  2. Ask across the corpus
    search_documents: 'attention variants with lower quadratic cost' — return authors + years.
    → Cited excerpts

Result: A local mini-Perplexity over your own paper collection.

Combinations

Pair it with other MCPs for a 10x effect

documentation-server + filesystem

Automate ingest from a watched folder

Every time a new PDF lands in ~/Papers/Inbox, process_uploads it into the documentation server.
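That watched-folder idea can also run outside the agent as a plain polling script. This is a minimal sketch under stated assumptions: ingest_new, the folder paths, and the seen-file are all hypothetical, and the echo placeholder stands in for whatever actually hands the file to the server (e.g. copying it into the uploads folder before calling process_uploads):

```shell
# ingest_new INBOX SEEN CMD: run CMD on every PDF in INBOX that is not yet
# listed in the SEEN file, then record it so it is ingested only once.
ingest_new() {
  inbox=$1; seen=$2; ingest=$3
  touch "$seen"
  for f in "$inbox"/*.pdf; do
    [ -e "$f" ] || continue               # no PDFs yet
    if ! grep -qxF "$f" "$seen"; then
      $ingest "$f"                        # placeholder for the real ingest step
      printf '%s\n' "$f" >> "$seen"       # remember this file
    fi
  done
}

# Example poll loop (Ctrl-C to stop); 'echo' stands in for real ingest:
# while true; do ingest_new "$HOME/Papers/Inbox" "$HOME/.papers-seen" echo; sleep 30; done
```

A filesystem-watcher tool would avoid polling, but the dedup-by-seen-file logic would stay the same.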
documentation-server + swarmvault

Compare: documentation-server is quick-ingest; swarmvault builds a structured wiki

Ingest my research PDFs into both systems; compare retrieval quality on the same query.

Tools

What this MCP exposes

Tool · Inputs · When to call · Cost
add_document · title, content, metadata? · Programmatic ingest · free (local embeddings)
list_documents · (none) · See what's indexed · free
get_document · id · Retrieve a specific doc · free
delete_document · id · Pruning · free
search_documents · query, top_k? · Query within a specific doc set · free
search_all_documents · query, top_k? · Global RAG query · free
get_context_window · chunk_id · Expand a narrow hit into broader context · free
search_documents_with_ai · query · One-shot answer synthesis · Gemini call (needs key)
process_uploads · path?: str · Batch import from the uploads folder · free
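On the wire, MCP clients invoke these tools through the protocol's standard tools/call JSON-RPC method. As a rough illustration, a search_documents call would look like this (argument names taken from the table above; the envelope is the generic MCP shape, not something specific to this server):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search_documents",
    "arguments": { "query": "configure middleware", "top_k": 3 }
  }
}
```

Your MCP client builds this request for you; it is shown only to clarify what the Inputs column maps to.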

Cost and limits

Runtime cost

API quota
None if local; Gemini usage if GEMINI_API_KEY is set
Tokens per call
Search returns 500-3000 tokens depending on top_k
Monetary
Free; Gemini is paid per call if enabled
Tip
Skip Gemini for exploratory work — local embeddings are good enough for known-item lookups.
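If you do enable Gemini, the key is typically passed through the client config's env block. A sketch for the Claude Desktop-style mcpServers schema (the placeholder value is yours to fill in):

```json
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": ["-y", "mcp-documentation-server"],
      "env": { "GEMINI_API_KEY": "<your-key>" }
    }
  }
}
```

Leaving the env block out keeps the server fully local with built-in embeddings.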

Security

Permissions, secrets, scope

Credential storage: GEMINI_API_KEY (optional) in env
Data egress: Local only unless Gemini is enabled; dashboard on port 3080

Troubleshooting

Common errors and fixes

Port 3080 in use

Set the WEB_PORT env var to another port.

Check: lsof -i :3080
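For clients using the mcpServers schema, that override can live in the config's env block (WEB_PORT comes from the fix above; 3081 is just an example free port):

```json
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": ["-y", "mcp-documentation-server"],
      "env": { "WEB_PORT": "3081" }
    }
  }
}
```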
PDF parse error

Password-protected or scanned PDFs fail. Remove the password or run OCR first.

Check: Try a plain PDF
search returns empty

Check which documents are ingested with list_documents. If the list is empty, re-run process_uploads.

Check: list_documents

Alternatives

mcp-documentation-server vs. others

Alternative · When to use it · Trade-off
swarmvault · You want a structured wiki + knowledge graph, not just search · Heavier; more upfront setup
Cloud RAG (Pinecone, Weaviate) · You need team sharing and scale · Paid; data leaves your machine
llm-context.py · You want per-task context, not persistent doc retrieval · Different problem

More

Resources

📖 Read the official README on GitHub

🐙 View the open issues

🔍 Browse the 400+ MCP servers and Skills