
deep-research

by u14app · u14app/deep-research

Generate a full deep-research report in ~2 minutes using your own LLM keys — one tool call, multi-step web research inside the server.

u14app/deep-research is a research agent exposed as an MCP server. You bring your own model (Gemini, OpenAI, Claude, DeepSeek, Ollama, etc.) and optionally a search provider key (Tavily, Firecrawl, Exa, Brave). A single tool call runs planning, searching, and writing — returning a cited markdown report.


Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "deep-research": {
      "command": "npx",
      "args": [
        "-y",
        "deep-research"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.
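The config above omits credentials. A sketch of the same entry with them supplied via the standard `env` field — `MCP_AI_PROVIDER` and `MCP_SEARCH_PROVIDER` are documented in the Security section, but the provider key names (`GOOGLE_GENERATIVE_AI_API_KEY`, `TAVILY_API_KEY`) are assumptions to verify against the README:

```json
{
  "mcpServers": {
    "deep-research": {
      "command": "npx",
      "args": ["-y", "deep-research"],
      "env": {
        "MCP_AI_PROVIDER": "google",
        "GOOGLE_GENERATIVE_AI_API_KEY": "<your-gemini-key>",
        "MCP_SEARCH_PROVIDER": "tavily",
        "TAVILY_API_KEY": "<your-tavily-key>"
      }
    }
  }
}
```

The same `env` block works in the Cursor, Cline, and Windsurf configs below, which share this schema.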

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "deep-research": {
      "command": "npx",
      "args": [
        "-y",
        "deep-research"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "deep-research": {
      "command": "npx",
      "args": [
        "-y",
        "deep-research"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "deep-research": {
      "command": "npx",
      "args": [
        "-y",
        "deep-research"
      ]
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "deep-research",
      "command": "npx",
      "args": [
        "-y",
        "deep-research"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "deep-research": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "deep-research"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add deep-research -- npx -y deep-research

One-liner. Verify with claude mcp list. Remove with claude mcp remove deep-research.
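The claude CLI also accepts env vars at registration time via -e flags. A sketch — the MCP_* names come from the Security section below; the provider key names are assumptions to check against the README:

```shell
# Register the server with provider settings baked in.
# GOOGLE_GENERATIVE_AI_API_KEY / TAVILY_API_KEY names are assumptions.
claude mcp add deep-research \
  -e MCP_AI_PROVIDER=google \
  -e GOOGLE_GENERATIVE_AI_API_KEY=your-gemini-key \
  -e MCP_SEARCH_PROVIDER=tavily \
  -e TAVILY_API_KEY=your-tavily-key \
  -- npx -y deep-research
```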

Use Cases

Real-world ways to use deep-research

How to produce a competitor market scan in 2 minutes

👤 Founders, PMs, strategy analysts · ⏱ ~10 min · beginner

When to use: You need a sourced landscape of a space (say, 'open-source vector DBs') and a blank doc is staring at you.

Prerequisites
  • An LLM API key (MCP_AI_PROVIDER + provider key) — Get a Gemini key at aistudio.google.com or an OpenAI key at platform.openai.com
  • Optional search key (Tavily or Firecrawl) — tavily.com or firecrawl.dev — free tier is enough for a few reports
Flow
  1. Call the research tool with a focused topic
    Run deep research on 'managed vector databases for RAG: pricing, ingestion scale, and hybrid search support as of 2026'. Target 1500 words, include citations.
    → Long-running call returns a structured report with links
  2. Ask for a comparison table
    From the report, produce a markdown table: provider | free tier | max vectors | hybrid search | notes.
    → Clean table you can paste anywhere
  3. Drill into one competitor
    Run a second deep research pass focused only on Qdrant's pricing changes since 2024.
    → Tighter, more specific report

Outcome: A 1-2k word cited briefing you can send to leadership same-day.

Pitfalls
  • Default 2-minute timeout on some MCP clients kills the call — Raise the client timeout to 600s; this is a long-running tool
  • Citations can hallucinate if the search provider returns nothing — Use Tavily or Firecrawl rather than model-native search for higher grounding
Combine with: firecrawl · notion

How to produce a technical decision memo with sources

👤 Staff engineers, architects · ⏱ ~15 min · intermediate

When to use: You have to pick between two technologies and need a defensible write-up.

Flow
  1. Frame the question sharply
    Deep research: 'Should a Rails 7 monolith migrate to sidekiq-pro or to a dedicated Go worker service in 2026?' — weigh ops cost, failure modes, community support. Return 1200 words with citations.
    → Sourced memo with pros/cons per option
  2. Ask for the contrarian take
    Now rebut the memo — what would a skeptic say?
    → Counter-arguments grounded in the sources

Outcome: A decision memo + counter-memo, ready for an architecture review.

Pitfalls
  • Report goes stale fast — 2024 info can contradict 2026 reality — Pin queries with 'as of 2026' and re-run before publishing
Combine with: notion · github

How to draft a literature review section for a paper

👤 Researchers, grad students · ⏱ ~20 min · intermediate

When to use: You know the field but want a structured overview + citations to check against.

Flow
  1. Define scope and timespan
    Deep research on 'mechanistic interpretability of transformer attention heads 2022-2026'. Organize by theme (circuits, superposition, SAEs). Cite arXiv.
    → Themed review with arXiv links
  2. Cross-check with paper-search
    Use paper-search MCP to find any major papers missing from the report.
    → Gap list

Outcome: A draft section with sources you still need to verify by reading directly.

Pitfalls
  • Do not cite what Claude produced without reading the source — Treat output as a starting bibliography — read every paper you cite
Combine with: paper-search

Combinations

Pair with other MCPs for 10x leverage

deep-research + firecrawl

Use firecrawl for higher-quality web retrieval before synthesizing

Using firecrawl as the search backend, deep research 'AI coding agents benchmarks Q1 2026'.

deep-research + notion

Drop the finished report into a Notion database for team review

After deep research finishes, create a Notion page titled with today's date under 'Research' and paste the full markdown.

deep-research + paper-search

Combine web research with arXiv coverage for academic topics

Do a deep research report on constitutional AI, then use paper-search to add any 2025-2026 arXiv papers missing from the sources.

Tools

What this MCP exposes

deep_research
  • Inputs: topic: str, depth?: 'shallow'|'standard'|'deep', length_words?: int, language?: str
  • When to call: When you want a sourced report, not a chat reply
  • Cost: Many LLM + search calls — plan for $0.05-$0.50 per report depending on the model
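To smoke-test the tool outside a chat client, the MCP Inspector's CLI mode can invoke it directly. A sketch — flag names follow the Inspector docs, argument names mirror the tool's inputs, and the call is long-running, so expect it to sit for a couple of minutes:

```shell
# One-off tool call via MCP Inspector in CLI mode.
npx @modelcontextprotocol/inspector --cli npx -y deep-research \
  --method tools/call \
  --tool-name deep_research \
  --tool-arg topic="open-source vector DBs" \
  --tool-arg depth=shallow
```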

Cost & Limits

What this costs to run

API quota
Bounded by your chosen LLM + search provider quotas
Tokens per call
A single report consumes 50k-300k tokens on the thinking model across planning + synthesis
Monetary
Bring-your-own keys — $0.05-$0.50 per report on Gemini Flash; $1-$5 on Claude Opus
Tip
Use a cheap planner + expensive writer split: MCP_TASK_MODEL=gemini-flash, MCP_THINKING_MODEL=claude-sonnet. 3-5x cost savings.
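The token numbers above make the cost arithmetic easy to sanity-check. A back-of-envelope sketch in shell — the per-token price here is a placeholder for a cheap flash-class model, not a quote:

```shell
# Rough per-report cost: tokens * price-per-token.
tokens=300000                  # upper end of the 50k-300k per-report range
price_cents_per_million=30     # assumed ~$0.30 per 1M tokens (placeholder)
cost_cents=$(( tokens * price_cents_per_million / 1000000 ))
echo "~${cost_cents} cents per report"
```

Swap in your provider's real per-token price; the $0.05-$0.50 range above also accounts for search-API calls and a pricier writer model.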

Security

Permissions, secrets, blast radius

Minimum scopes: API keys for the providers you enable
Credential storage: Env vars (MCP_AI_PROVIDER, provider API keys, search keys, optional ACCESS_PASSWORD)
Data egress: Your prompts go to whichever LLM provider + search provider you configure; the MCP server itself does not phone home
Never grant: Production billing keys — use a scoped key with a monthly cap

Troubleshooting

Common errors and fixes

Client times out at 2 minutes

Raise the MCP client timeout to 600s. This tool is long-running by design.

Missing MCP_AI_PROVIDER

Set MCP_AI_PROVIDER env var to one of: google, openai, anthropic, deepseek, xai, mistral, azure, openrouter, ollama.

Verify: env | grep MCP_AI_PROVIDER

Search returns nothing / report is hollow

Switch MCP_SEARCH_PROVIDER from 'model' to 'tavily' or 'firecrawl' and supply the key.
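As environment exports, a minimal grounding setup looks like this — MCP_SEARCH_PROVIDER is documented above, while the TAVILY_API_KEY name is an assumption to verify against the README:

```shell
export MCP_SEARCH_PROVIDER=tavily      # instead of the default 'model'
export TAVILY_API_KEY=your-tavily-key  # key name is an assumption
```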

401 from ACCESS_PASSWORD-protected server

Add the password to client config as a header: 'Authorization: Bearer <password>'.
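For stdio-only clients that cannot set headers themselves, one way to attach the header is the mcp-remote bridge. A sketch only — the deployment URL is a placeholder and the flag usage should be verified against the mcp-remote docs:

```json
{
  "mcpServers": {
    "deep-research": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://your-deployment.example.com/mcp",
        "--header",
        "Authorization: Bearer <password>"
      ]
    }
  }
}
```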

Alternatives

deep-research vs others

OpenAI Deep Research
  • When to use it instead: You pay for ChatGPT Pro and want zero config
  • Tradeoff: No MCP, no BYO model, locked to OpenAI
Gemini Deep Research
  • When to use it instead: You already use Gemini Advanced
  • Tradeoff: Same locked-vendor tradeoff
firecrawl MCP
  • When to use it instead: You want raw scraped pages and will synthesize yourself
  • Tradeoff: No autonomous planner; you orchestrate the steps

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues
