
mcp-local-rag

by shinpr · shinpr/mcp-local-rag

Private, local-first RAG — index your PDFs, docs, and code once, then search semantically from any MCP client. No API keys, no cloud, no data leaving your machine.

mcp-local-rag runs entirely offline after a ~90MB model download. Ingest PDF/DOCX/TXT/MD/HTML files or raw HTML strings, then query with combined semantic + keyword search. Ideal for personal knowledge bases, confidential documents, and working on flights.
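The "semantic + keyword" blend can be sketched as a weighted score (a toy illustration only; mcp-local-rag's actual ranking function is internal to the server, and the names below are hypothetical):

```python
# Toy hybrid ranking: blend embedding cosine similarity with keyword
# overlap. Illustrative only; mcp-local-rag's real scoring is internal.
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two dense vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    # Fraction of query terms that appear verbatim in the text
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_score(q_vec, d_vec, query, text, alpha=0.7):
    # alpha weights semantic similarity against exact keyword overlap
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * keyword_score(query, text)
```

The point of the blend: pure embedding similarity can miss exact terms ("ring attention"), while pure keyword matching misses paraphrases; mixing the two covers both.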


Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-local-rag"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-local-rag"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-local-rag"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-local-rag"
      ]
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "local-rag",
      "command": "npx",
      "args": [
        "-y",
        "mcp-local-rag"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "local-rag": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "mcp-local-rag"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add local-rag -- npx -y mcp-local-rag

One-liner for Claude Code. Verify with claude mcp list. Remove with claude mcp remove local-rag.

Use Cases

Real-world ways to use mcp-local-rag

Build a private RAG over your downloaded research papers and PDFs

👤 Researchers, students, knowledge workers · ⏱ ~30 min · beginner

When to use: You've hoarded hundreds of PDFs in ~/Documents/papers and want to actually use them — 'what did that paper say about attention decay?'

Prerequisites
  • PDFs or docs on disk — Any folder of files — recursive ingest supported
Flow
  1. Ingest the folder
    Ingest everything under ~/Documents/papers into local-rag. Skip files larger than 50MB.
    → Per-file ingest log + 'indexed N files' summary
  2. Ask questions
    Across my papers, what do they say about positional encoding in long-context transformers? Cite the source file and page if possible.
    → Synthesized answer with source file citations
  3. Refine search
    Just give me the top 5 passages most relevant to 'ring attention', raw — don't summarize.
    → Ranked passage list

Outcome: Every paper you've ever downloaded is now queryable by topic — permanent upgrade to your reading life.

Pitfalls
  • Scanned PDFs have no extractable text — Run an OCR pass first (ocrmypdf) before ingesting
  • First index of 1000+ files is slow (CPU embeddings) — Leave it running overnight; incremental re-ingest is fast
Combine with: filesystem
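The size filter from step 1 can be sketched in Python (a hypothetical pre-filter; the server does its own traversal when you ask it to ingest a directory, and the 50 MB cap below comes from the prompt, not from the server's defaults):

```python
# Sketch: walk a folder recursively and keep only ingestible files
# under a byte cap. Hypothetical helper, not part of mcp-local-rag.
from pathlib import Path

INGESTIBLE = {".pdf", ".docx", ".txt", ".md", ".html"}
MAX_BYTES = 50 * 1024 * 1024  # the 50 MB cap from the prompt

def collect_files(root, max_bytes=MAX_BYTES):
    files = []
    for p in Path(root).rglob("*"):
        if (p.is_file()
                and p.suffix.lower() in INGESTIBLE
                and p.stat().st_size <= max_bytes):
            files.append(p)
    return sorted(files)
```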

Query confidential contracts / HR docs without leaking to any cloud

👤 Legal ops, HR, compliance · ⏱ ~20 min · intermediate

When to use: Documents are too sensitive for OpenAI/Claude cloud embeddings. You need search but can't send content anywhere.

Flow
  1. Ingest
    Ingest /secure/contracts/*.pdf into local-rag.
    → Files indexed locally; confirm no network call was made
  2. Query
    Which contracts have an auto-renewal clause longer than 12 months?
    → List of candidate contracts with the clause quoted

Outcome: Searchable private corpus with nothing leaving the machine.

Pitfalls
  • Claude's answers still go to Anthropic (the embeddings are local, but the conversation isn't) — If answers must also stay local, run a local LLM via Ollama or LM Studio instead of cloud Claude
Combine with: filesystem

Combinations

Pair with other MCP servers for 10x leverage

local-rag + filesystem

Watch a folder, re-ingest files when they change

Every time a file under ~/Notes changes, re-ingest it into local-rag.
local-rag + firecrawl

Scrape a docs site, then feed it to local-rag for offline querying

Crawl docs.example.com, save each page as Markdown, then ingest all of them into local-rag.
local-rag + playwright

Capture JS-rendered pages and ingest their extracted text

Open this SPA, grab the rendered HTML, and pass it to ingest_data in local-rag with the URL as source.

Tools

What this MCP exposes

Tool · Inputs · When to call · Cost
  • ingest_file · path: str | path[] · Add one or more files to the index · CPU only
  • ingest_data · html: str, source_url?: str · Add a raw HTML blob, useful after scraping · CPU only
  • query_documents · query: str, top_k?: int · Main retrieval call; use before answering user questions · free
  • list_files · (no inputs) · See what's indexed · free
  • delete_file · path: str · Remove a stale or irrelevant file from the index · free
  • status · (no inputs) · Sanity check index size · free

Cost & Limits

What this costs to run

API quota
None — all local
Tokens per call
Query results 500-3000 tokens depending on top_k
Monetary
Free. One-time ~90MB model download.
Tip
Set top_k to 5-8 for most questions; going higher wastes tokens without improving answers.

Security

Permissions, secrets, blast radius

Credential storage: None — no API keys
Data egress: Zero after model download. Your docs never leave the machine.

Troubleshooting

Common errors and fixes

First query is slow / seems to hang

Embedding model is downloading on first run (~90MB). Subsequent calls are fast.

Verify: Check ~/.cache/mcp-local-rag for the model file

PDF ingest returns 0 chunks

PDF is likely scanned (image-only). Run ocrmypdf input.pdf output.pdf first.

Verify: pdftotext input.pdf -

Results feel irrelevant

Pure semantic search struggles with short queries. Add more specific keywords; the hybrid search boosts exact matches, so extra terms sharpen the ranking.

Out of memory on large PDFs

Split the PDF first, or raise the Node heap: NODE_OPTIONS=--max-old-space-size=8192
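In clients that use the mcpServers schema, the heap flag can be set per-server via an env field (shape assumed from the Claude Desktop config format shown above; check your client's docs for env support):

```json
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": ["-y", "mcp-local-rag"],
      "env": { "NODE_OPTIONS": "--max-old-space-size=8192" }
    }
  }
}
```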

Alternatives

mcp-local-rag vs others

Alternative · When to use it instead · Tradeoff
  • Chroma MCP / Qdrant MCP · You want a real vector DB with multi-user support, scaling, and metadata filters · More setup; usually requires a running server
  • OpenAI Assistants file_search · You're OK sending documents to OpenAI's cloud · Not local and costs per token, but zero setup and more accurate
  • ChatGPT Projects / Claude Projects file upload · Small document set (<20 files) and you use the hosted chat · Not an MCP; can't be scripted

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues
