
mcp-local-rag

by shinpr · shinpr/mcp-local-rag

Private, local-first RAG — index your PDFs, docs, and code once, then search semantically from any MCP client. No API keys, no cloud, no data leaving your machine.

mcp-local-rag runs entirely offline after a ~90MB model download. Ingest PDF/DOCX/TXT/MD/HTML files or raw HTML strings, then query with combined semantic + keyword search. Ideal for personal knowledge bases, confidential documents, and working on flights.
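
The "combined semantic + keyword search" is a hybrid ranker. Here is a toy sketch of the idea (illustrative only; the function names, weights, and scoring formula below are assumptions, not the server's actual internals):

```python
import math

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear verbatim in the document."""
    terms = set(query.lower().split())
    words = set(doc.lower().split())
    return len(terms & words) / len(terms) if terms else 0.0

def cosine(a: list[float], b: list[float]) -> float:
    """Semantic similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5):
    """Blend semantic and keyword signals; alpha weights the semantic side."""
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * keyword_score(query, doc)
```

With a blend like this, an exact keyword match can rescue a weak embedding match and vice versa, which is why short, keyword-poor queries tend to retrieve worse results.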

Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-local-rag"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-local-rag"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project config takes precedence over the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-local-rag"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-local-rag"
      ]
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "local-rag",
      "command": "npx",
      "args": [
        "-y",
        "mcp-local-rag"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "local-rag": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "mcp-local-rag"
        ]
      }
    }
  }
}

Add it under context_servers. Zed hot-reloads on save.

claude mcp add local-rag -- npx -y mcp-local-rag

A single line. Verify with claude mcp list. Remove with claude mcp remove.

Use cases

Real-world uses: mcp-local-rag

Build a private RAG over your downloaded research papers and PDFs

👤 Researchers, students, knowledge workers · ⏱ ~30 min · beginner

When to use it: You've hoarded hundreds of PDFs in ~/Documents/papers and want to actually use them — 'what did that paper say about attention decay?'

Prerequisites
  • PDFs or docs on disk — Any folder of files — recursive ingest supported
Steps
  1. Ingest the folder
    Ingest everything under ~/Documents/papers into local-rag. Skip files larger than 50MB.
    → Per-file ingest log + 'indexed N files' summary
  2. Ask questions
    Across my papers, what do they say about positional encoding in long-context transformers? Cite the source file and page if possible.
    → Synthesized answer with source file citations
  3. Refine search
    Just give me the top 5 passages most relevant to 'ring attention', raw — don't summarize.
    → Ranked passage list

Result: Every paper you've ever downloaded is now queryable by topic — a permanent upgrade to your reading life.

Pitfalls
  • Scanned PDFs have no extractable text — Run an OCR pass first (ocrmypdf) before ingesting
  • First index of 1000+ files is slow (CPU embeddings) — Leave it running overnight; incremental re-ingest is fast
Combine with: filesystem
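
The "skip files larger than 50MB" constraint from step 1 can be enforced with a pre-filter before any paths reach ingest_file (a sketch; the size cap and supported formats are taken from this page, the helper itself is hypothetical):

```python
from pathlib import Path

MAX_BYTES = 50 * 1024 * 1024                      # the 50MB cap from step 1
EXTS = {".pdf", ".docx", ".txt", ".md", ".html"}  # formats the server ingests

def collect_ingestable(root: str) -> list[str]:
    """Recursively gather supported files under the size cap."""
    return sorted(
        str(p)
        for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in EXTS and p.stat().st_size <= MAX_BYTES
    )
```

The resulting list can then be passed as the path[] argument of ingest_file.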

Query confidential contracts / HR docs without leaking to any cloud

👤 Legal ops, HR, compliance · ⏱ ~20 min · intermediate

When to use it: Documents are too sensitive for OpenAI/Claude cloud embeddings. You need search but can't send content anywhere.

Steps
  1. Ingest
    Ingest /secure/contracts/*.pdf into local-rag.
    → Files indexed locally; confirm no network call was made
  2. Query
    Which contracts have an auto-renewal clause longer than 12 months?
    → List of candidate contracts with the clause quoted

Result: A searchable private corpus with nothing leaving the machine.

Pitfalls
  • Claude answers still go to Anthropic — the embeddings are local but the conversation isn't — If answers must also stay local, run with a local LLM via Ollama or LM Studio instead of cloud Claude
Combine with: filesystem

Combinations

Pair it with other MCPs for a 10x effect

local-rag + filesystem

Watch a folder, re-ingest files when they change

Every time a file under ~/Notes changes, re-ingest it into local-rag.
local-rag + firecrawl

Scrape a docs site then feed to local-rag for offline querying

Crawl docs.example.com, save each page as Markdown, then ingest all of them into local-rag.
local-rag + playwright

Capture JS-rendered pages and ingest their extracted text

Open this SPA, grab the rendered HTML, and pass it to ingest_data in local-rag with the URL as source.

Tools

What this MCP exposes

Outil · Inputs · When to call · Cost
ingest_file · path: str | path[] · Add one or more files to the index · CPU only
ingest_data · html: str, source_url?: str · Add a raw HTML blob — useful after scraping · CPU only
query_documents · query: str, top_k?: int · Main retrieval call — use before answering user questions · free
list_files · — · See what's indexed · free
delete_file · path: str · Remove a stale/irrelevant file from the index · free
status · — · Sanity check index size · free
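
Under the hood, an MCP client wraps each of these tools in a JSON-RPC tools/call request. A minimal query_documents call might look like this (the envelope shape follows the MCP specification; the query string and id are just examples):

```python
import json

# Hypothetical tools/call request an MCP client would send over stdio
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_documents",
        "arguments": {
            "query": "positional encoding in long-context transformers",
            "top_k": 5,
        },
    },
}
print(json.dumps(request))
```

Your MCP client builds and sends these for you; the sketch is only to show which tool names and argument keys are on the wire.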

Cost and limits

Runtime cost

API quota
None — all local
Tokens per call
Query results are 500-3000 tokens depending on top_k
Monetary
Free. One-time ~90MB model download.
Tip
Set top_k to 5-8 for most questions; going higher wastes tokens without improving answers.
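
The 500-3000 range scales roughly with top_k times the passage size. A back-of-envelope helper (the ~350 tokens per passage figure is an assumption for illustration, not documented):

```python
def estimated_reply_tokens(top_k: int, tokens_per_chunk: int = 350) -> int:
    """Rough upper bound on query_documents output size,
    assuming ~350 tokens per returned passage."""
    return top_k * tokens_per_chunk
```

At the recommended top_k of 5-8 this lands around 1750-2800 tokens, consistent with the stated range.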

Security

Permissions, secrets, scope

Credential storage: None — no API keys
Data egress: Zero after the model download. Your docs never leave the machine.

Troubleshooting

Common errors and fixes

First query is slow / seems to hang

Embedding model is downloading on first run (~90MB). Subsequent calls are fast.

Verify: Check ~/.cache/mcp-local-rag for the model file

PDF ingest returns 0 chunks

PDF is likely scanned (image-only). Run ocrmypdf input.pdf output.pdf first.

Verify: pdftotext input.pdf -
Results feel irrelevant

Pure semantic search struggles with short queries. Add more specific keywords; the hybrid search will then boost exact matches.

Out of memory on large PDFs

Split the PDF first, or raise the Node heap: NODE_OPTIONS=--max-old-space-size=8192

Alternatives

mcp-local-rag vs others

Alternative · When to use it · Trade-off
Chroma MCP / Qdrant MCP · You want a real vector DB with multi-user support, scaling, and metadata filters · More setup; usually requires a running server
OpenAI Assistants file_search · You're OK sending documents to OpenAI's cloud · Not local, costs per token, but zero setup and more accurate
ChatGPT Projects / Claude Projects file upload · Small document set (<20 files) and you use the hosted chat · Not an MCP; can't be scripted

More

Resources

📖 Read the official README on GitHub

🐙 View the open issues

🔍 Browse the 400+ MCP servers and Skills