● Community · andrea9293 · ⚡ Instant setup

mcp-documentation-server

by andrea9293 · andrea9293/mcp-documentation-server

Drop PDFs, Markdown, and text docs into a local vector store — then ask your AI questions with hybrid search. No cloud required.

mcp-documentation-server by andrea9293 is a local RAG server. Drag and drop .txt / .md / .pdf files via a web UI (port 3080), or feed them in via tools. Hybrid full-text + vector search with parent-child chunking. Runs fully locally with built-in embeddings; a Gemini key is optional for smarter retrieval.
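The hybrid idea can be pictured with a toy sketch: keyword overlap stands in for the full-text side, and a bag-of-words cosine stands in for the embedding side. This is an illustration only; the server's actual scoring, chunker, and embedding model are internal to mcp-documentation-server, and `hybrid_score` / `alpha` are hypothetical names for this sketch.

```python
# Toy sketch of hybrid retrieval: blend keyword overlap (full-text side)
# with cosine similarity over bag-of-words vectors (embedding side).
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query: str, chunk: str, alpha: float = 0.5) -> float:
    q, c = query.lower().split(), chunk.lower().split()
    keyword = len(set(q) & set(c)) / len(set(q))   # full-text overlap, 0..1
    vector = cosine(Counter(q), Counter(c))        # stand-in for embedding similarity
    return alpha * keyword + (1 - alpha) * vector

chunks = [
    "configure middleware in the app config file",
    "install the framework with npm",
]
ranked = sorted(chunks, key=lambda c: hybrid_score("configure middleware", c), reverse=True)
print(ranked[0])  # the middleware chunk ranks first
```

The `alpha` knob is the usual trade-off in hybrid search: closer to 1 favors exact keyword hits, closer to 0 favors semantic similarity.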

Why use it

Key features

Live demo

What it looks like in practice

Installation

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project config takes precedence over the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "documentation-server",
      "command": "npx",
      "args": [
        "-y",
        "mcp-documentation-server"
      ]
    }
  ]
}

Continue uses an array of server objects, not a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "documentation-server": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "mcp-documentation-server"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads automatically.

claude mcp add documentation-server -- npx -y mcp-documentation-server

A one-liner. Verify: claude mcp list. Remove: claude mcp remove documentation-server.

Use cases

Real-world scenarios: mcp-documentation-server

Make a new framework's docs queryable by your AI

👤 Devs adopting a new library · ⏱ ~20 min · beginner

When to use: Official docs are huge; you want Claude to answer with grounded citations.

Prerequisites
  • mcp-documentation-server installed — npx -y @andrea9293/mcp-documentation-server
Flow
  1. Ingest the docs
    Upload the library's docs .md files to the dashboard at http://localhost:3080.
    → Files processed into chunks
  2. Ask targeted questions
    search_documents for 'how to configure middleware' — give me the top 3 chunks with source paths.
    → Cited excerpts
  3. Ask for grounded synthesis
    Given those chunks, write the minimum viable config for middleware in this framework.
    → Working config backed by cited doc lines

Result: A personal docs assistant that cites its sources.

Pitfalls
  • PDFs with scanned images aren't OCR'd — Pre-OCR with tools like ocrmypdf before upload
  • Huge doc sets without Gemini give noisy embeddings — Optional GEMINI_API_KEY unlocks higher-quality semantic search
Combine with: filesystem

Turn an internal wiki export into a RAG source

👤 Teams with markdown-export-friendly wikis · ⏱ ~25 min · beginner

When to use: You've exported your Notion/Confluence content as Markdown and want AI access.

Flow
  1. Bulk-ingest via process_uploads
    process_uploads on ./wiki-export/ — process every .md.
    → Document count per folder
  2. Full-scope search
    search_all_documents: 'deployment runbook' — top 5.
    → Ranked list

Result: A local, private, searchable wiki.

Build a personal research-paper library

👤 Researchers, students · ⏱ ~30 min · beginner

When to use: You download papers and want them queryable instead of piling up in Downloads/.

Flow
  1. Drop PDFs in
    Upload all PDFs in ~/Papers/ to the documentation server.
    → Papers chunked and indexed
  2. Ask across the corpus
    search_documents: 'attention variants with lower quadratic cost' — return authors + years.
    → Cited excerpts

Result: A local mini-Perplexity over your own paper collection.

Combinations

Combine with other MCP servers for a 10x effect

documentation-server + filesystem

Automate ingest from a watched folder

Every time a new PDF lands in ~/Papers/Inbox, process_uploads it into the documentation server.
documentation-server + swarmvault

Compare: documentation-server is quick-ingest; swarmvault builds a structured wiki

Ingest my research PDFs into both systems; compare retrieval quality on the same query.

Tools

What this MCP provides

Tool | Inputs | When to call | Cost
add_document | title, content, metadata? | Programmatic ingest | free (local embeddings)
list_documents | (none) | See what's indexed | free
get_document | id | Retrieve a specific doc | free
delete_document | id | Pruning | free
search_documents | query, top_k? | Query within a specific doc set | free
search_all_documents | query, top_k? | Global RAG query | free
get_context_window | chunk_id | Expand a narrow hit into broader context | free
search_documents_with_ai | query | One-shot answer synthesis | Gemini call (needs key)
process_uploads | path?: str | Batch import from the uploads folder | free
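As a rough mental model of the parent-child scheme behind get_context_window: search matches small child chunks, and each child maps back to a larger parent block that can be returned as context. The sketch below is illustrative Python with assumed chunk sizes; the server's real splitter, IDs, and parameters differ.

```python
# Toy sketch of parent-child chunking: small child chunks are indexed for
# search; a narrow hit can be expanded to its parent "context window".

def chunk_parent_child(text: str, parent_size: int = 4, child_size: int = 2):
    """Split text into parent blocks of sentences, each with smaller children."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    parents, children = [], []
    for p_start in range(0, len(sentences), parent_size):
        parent = sentences[p_start:p_start + parent_size]
        parent_id = len(parents)
        parents.append(parent)
        for c_start in range(0, len(parent), child_size):
            children.append({
                "chunk_id": len(children),
                "parent_id": parent_id,
                "text": ". ".join(parent[c_start:c_start + child_size]),
            })
    return parents, children

def get_context_window(chunk_id, parents, children):
    """Expand a narrow child hit into its full parent block."""
    parent = parents[children[chunk_id]["parent_id"]]
    return ". ".join(parent)

doc = ("Install the package. Run the server. Open the dashboard. "
       "Upload files. Search your docs. Read results.")
parents, children = chunk_parent_child(doc)
hit = children[1]                                 # pretend search matched this child
print(hit["text"])                                # narrow hit
print(get_context_window(1, parents, children))   # broader parent context
```

The design point: indexing small children keeps matches precise, while returning the parent keeps answers grounded in enough surrounding text.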

Cost and limits

What it costs

API quota
None if local; Gemini usage if GEMINI_API_KEY is set
Tokens per call
Search returns 500-3000 tokens depending on top_k
Money
Free; Gemini is paid per-call if enabled
Tip
Skip Gemini for exploratory work — local embeddings are good enough for known-item lookups.

Security

Permissions, secrets, blast radius

Credential storage: GEMINI_API_KEY (optional) in env
Outbound traffic: local only unless Gemini is enabled; dashboard on port 3080

Troubleshooting

Common errors and fixes

Port 3080 in use

Set WEB_PORT env var to another port.

Check: lsof -i :3080
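If the server is launched by an MCP client, the port can be overridden in the same config. A sketch assuming the client supports a per-server env map (as Claude Desktop's mcpServers schema does) and using the WEB_PORT variable named above:

```json
{
  "mcpServers": {
    "documentation-server": {
      "command": "npx",
      "args": ["-y", "mcp-documentation-server"],
      "env": { "WEB_PORT": "3081" }
    }
  }
}
```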
PDF parse error

Password-protected or scanned PDFs fail. Remove password or run OCR first.

Check: try a plain PDF
search returns empty

Check documents ingested: list_documents. If empty, re-run process_uploads.

Check: list_documents

Alternatives

mcp-documentation-server compared

Alternative | When to use | Trade-off
swarmvault | You want a structured wiki + knowledge graph, not just search | Heavier; more upfront setup
Cloud RAG (Pinecone, Weaviate) | You need team sharing and scale | Paid; data leaves your machine
llm-context.py | You want per-task context, not persistent doc retrieval | Different problem

More

Resources

📖 Read the official README on GitHub

🐙 Open issues

🔍 All 400+ MCP servers and Skills