
deep-research

by u14app · u14app/deep-research

Generate a full deep-research report in ~2 minutes using your own LLM keys — one tool call, multi-step web research inside the server.

u14app/deep-research is a research agent exposed as an MCP server. You bring your own model (Gemini, OpenAI, Claude, DeepSeek, Ollama, etc.) and optionally a search provider key (Tavily, Firecrawl, Exa, Brave). A single tool call runs planning, searching, and writing — returning a cited markdown report.


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "deep-research": {
      "command": "npx",
      "args": [
        "-y",
        "deep-research"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.
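
The server itself is configured through environment variables (see the Security section). Here is a sketch of the same entry with provider settings added; MCP_AI_PROVIDER and MCP_SEARCH_PROVIDER appear on this page, but the two API-key variable names are placeholders — check the project README for the exact names your provider expects:

```json
{
  "mcpServers": {
    "deep-research": {
      "command": "npx",
      "args": ["-y", "deep-research"],
      "env": {
        "MCP_AI_PROVIDER": "google",
        "MCP_SEARCH_PROVIDER": "tavily",
        "GOOGLE_API_KEY": "<your-gemini-key>",
        "TAVILY_API_KEY": "<your-tavily-key>"
      }
    }
  }
}
```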

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "deep-research": {
      "command": "npx",
      "args": [
        "-y",
        "deep-research"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project-level config overrides the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "deep-research": {
      "command": "npx",
      "args": [
        "-y",
        "deep-research"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "deep-research": {
      "command": "npx",
      "args": [
        "-y",
        "deep-research"
      ]
    }
  }
}

Same structure as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "deep-research",
      "command": "npx",
      "args": [
        "-y",
        "deep-research"
      ]
    }
  ]
}

Continue uses an array of server objects instead of a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "deep-research": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "deep-research"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads on save.

claude mcp add deep-research -- npx -y deep-research

One-liner. Verify with claude mcp list. Remove with claude mcp remove.

Use cases

deep-research in practice

How to produce a competitor market scan in 2 minutes

👤 Founders, PMs, strategy analysts ⏱ ~10 min beginner

When to use: You need a sourced landscape of a space (say, 'open-source vector DBs') and a blank doc is staring at you.

Prerequisites
  • An LLM API key (MCP_AI_PROVIDER + provider key) — Get a Gemini key at aistudio.google.com or an OpenAI key at platform.openai.com
  • Optional search key (Tavily or Firecrawl) — tavily.com or firecrawl.dev — free tier is enough for a few reports
Steps
  1. Call the research tool with a focused topic
    Run deep research on 'managed vector databases for RAG: pricing, ingestion scale, and hybrid search support as of 2026'. Target 1500 words, include citations.
    → Long-running call returns a structured report with links
  2. Ask for a comparison table
    From the report, produce a markdown table: provider | free tier | max vectors | hybrid search | notes.
    → Clean table you can paste anywhere
  3. Drill into one competitor
    Run a second deep research pass focused only on Qdrant's pricing changes since 2024.
    → Tighter, more specific report

Outcome: A 1-2k word cited briefing you can send to leadership same-day.

Pitfalls
  • Default 2-minute timeout on some MCP clients kills the call — Raise client timeout to 600s — this is a long-running tool
  • Citations can hallucinate if search provider returns nothing — Use Tavily or Firecrawl rather than model-native search for higher grounding
Combine with: firecrawl · notion

How to produce a technical decision memo with sources

👤 Staff engineers, architects ⏱ ~15 min intermediate

When to use: You have to pick between two technologies and need a defensible write-up.

Steps
  1. Frame the question sharply
    Deep research: 'Should a Rails 7 monolith migrate to sidekiq-pro or to a dedicated Go worker service in 2026?' — weigh ops cost, failure modes, community support. Return 1200 words with citations.
    → Sourced memo with pros/cons per option
  2. Ask for the contrarian take
    Now rebut the memo — what would a skeptic say?
    → Counter-arguments grounded in the sources

Outcome: A decision memo + counter-memo, ready for an architecture review.

Pitfalls
  • Report goes stale fast — 2024 info can contradict 2026 reality — Pin queries with 'as of 2026' and re-run before publishing
Combine with: notion · github

How to draft a literature review section for a paper

👤 Researchers, grad students ⏱ ~20 min intermediate

When to use: You know the field but want a structured overview + citations to check against.

Steps
  1. Define scope and timespan
    Deep research on 'mechanistic interpretability of transformer attention heads 2022-2026'. Organize by theme (circuits, superposition, SAE). Cite arXiv.
    → Themed review with arXiv links
  2. Cross-check with paper-search
    Use paper-search MCP to find any major papers missing from the report.
    → Gap list

Outcome: A draft section with sources you still need to verify by reading directly.

Pitfalls
  • Do not cite what Claude produced without reading the source — Treat output as a starting bibliography — read every paper you cite
Combine with: paper-search

Combinations

With other MCPs for 10x impact

deep-research + firecrawl

Use firecrawl for higher-quality web retrieval before synthesizing

Using firecrawl as the search backend, deep research 'AI coding agents benchmarks Q1 2026'.

deep-research + notion

Drop the finished report into a Notion database for team review

After deep research finishes, create a Notion page titled with today's date under 'Research' and paste the full markdown.

deep-research + paper-search

Combine web research with arXiv coverage for academic topics

Do a deep research report on constitutional AI, then use paper-search to add any 2025-2026 arXiv papers missing from the sources.

Tools

What this MCP provides

Tool | Inputs | When to call | Cost
deep_research | topic: str, depth?: 'shallow'|'standard'|'deep', length_words?: int, language?: str | When you want a sourced report, not a chat reply | Many LLM + search calls — plan for $0.05-$0.50 per report depending on model
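
If you script your own MCP client, the tool is invoked with a standard JSON-RPC 2.0 tools/call request. A minimal sketch in Python, using only the parameter names from the table above (the default values chosen here are assumptions, not documented behavior):

```python
import json

def build_deep_research_call(topic: str, depth: str = "standard",
                             length_words: int = 1500,
                             language: str = "en") -> str:
    """Serialize a JSON-RPC 2.0 tools/call request for the deep_research tool."""
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "deep_research",
            "arguments": {
                "topic": topic,
                "depth": depth,  # 'shallow' | 'standard' | 'deep'
                "length_words": length_words,
                "language": language,
            },
        },
    }
    return json.dumps(request)

payload = build_deep_research_call("managed vector databases for RAG")
```

The result's content carries the markdown report; how long the call may run is bounded by your client's tool-call timeout, which is why the troubleshooting section recommends raising it.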

Costs & limits

What it costs to run

API quota
Bounded by your chosen LLM + search provider quotas
Tokens per call
A single report consumes 50k-300k tokens on the thinking model across planning + synthesis
Cost in $
Bring-your-own keys — $0.05-$0.50 per report on Gemini Flash; $1-$5 on Claude Opus
Tip
Use a cheap planner + expensive writer split: MCP_TASK_MODEL=gemini-flash, MCP_THINKING_MODEL=claude-sonnet. 3-5x cost savings.
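
As a back-of-the-envelope check on the ranges above, cost per report is simply total tokens times the per-million-token price. A quick sketch (the prices here are illustrative assumptions, not any provider's current list prices):

```python
def report_cost_usd(tokens: int, price_per_mtok_usd: float) -> float:
    """Cost of one report: total tokens scaled by price per million tokens."""
    return tokens / 1_000_000 * price_per_mtok_usd

# A mid-range 150k-token report on a cheap flash-class model vs. a premium model:
cheap = report_cost_usd(150_000, 0.50)    # lands inside the $0.05-$0.50 band
premium = report_cost_usd(150_000, 15.0)  # lands inside the $1-$5 band
```

Swap in your provider's real prices before budgeting.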

Security

Permissions, secrets, reach

Minimal scopes: API keys for the providers you enable
Credential storage: Env vars (MCP_AI_PROVIDER, provider API keys, search keys, optional ACCESS_PASSWORD)
Data egress: Your prompts go to whichever LLM provider + search provider you configure; the MCP server itself does not phone home
Never grant: Production billing keys — use a scoped key with a monthly cap

Troubleshooting

Common errors and fixes

Client times out at 2 minutes

Raise the MCP client timeout to 600s. This tool is long-running by design.

Missing MCP_AI_PROVIDER

Set MCP_AI_PROVIDER env var to one of: google, openai, anthropic, deepseek, xai, mistral, azure, openrouter, ollama.

Check: env | grep MCP_AI_PROVIDER

Search returns nothing / report is hollow

Switch MCP_SEARCH_PROVIDER from 'model' to 'tavily' or 'firecrawl' and supply the key.

401 from ACCESS_PASSWORD-protected server

Add the password to client config as a header: 'Authorization: Bearer <password>'.
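
As a config sketch of the same fix: clients that support remote servers usually take a url plus a headers map. The field names vary by client, so treat this as a shape to adapt, not a definitive schema, and note the host URL is a placeholder:

```json
{
  "mcpServers": {
    "deep-research": {
      "url": "https://your-host.example/mcp",
      "headers": {
        "Authorization": "Bearer <your ACCESS_PASSWORD>"
      }
    }
  }
}
```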

Alternatives

deep-research vs. others

Alternative | When instead | Tradeoff
OpenAI Deep Research | You pay for ChatGPT Pro and want zero config | No MCP, no BYO model, locked to OpenAI
Gemini Deep Research | You already use Gemini Advanced | Same locked-vendor tradeoff
firecrawl MCP | You want raw scraped pages and will synthesize yourself | No autonomous planner; you orchestrate the steps

More

Resources

📖 Read the official README on GitHub

🐙 View open issues
