
Cloudflare

by cloudflare · cloudflare/mcp-server-cloudflare

Official Cloudflare MCP — deploy Workers, query D1, manage R2 and KV, read logs and analytics, all from chat.

Cloudflare's official MCP (actually a family, served remotely at *.mcp.cloudflare.com). Covers Workers deploys and logs, D1 SQL, KV/R2 storage, DNS zones, and Radar analytics. OAuth-based — no manual API token juggling.


Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "cloudflare": {
      "command": "npx",
      "args": [
        "-y",
        "@cloudflare/mcp-server-cloudflare"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "cloudflare": {
      "command": "npx",
      "args": [
        "-y",
        "@cloudflare/mcp-server-cloudflare"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "cloudflare": {
      "command": "npx",
      "args": [
        "-y",
        "@cloudflare/mcp-server-cloudflare"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "cloudflare": {
      "command": "npx",
      "args": [
        "-y",
        "@cloudflare/mcp-server-cloudflare"
      ]
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "cloudflare",
      "command": "npx",
      "args": [
        "-y",
        "@cloudflare/mcp-server-cloudflare"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "cloudflare": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "@cloudflare/mcp-server-cloudflare"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add cloudflare -- npx -y @cloudflare/mcp-server-cloudflare

One-liner. Verify with claude mcp list. Remove with claude mcp remove.

Use Cases

Real-world ways to use Cloudflare

Debug a Worker that's throwing 500s in production

👤 Cloudflare Workers developers on call · ⏱ ~15 min · intermediate

When to use: Your Worker's error rate spiked. You want logs, recent deploys, and a diff of what changed — without opening the dashboard.

Prerequisites
  • Cloudflare account OAuth-connected to your MCP client — First tool call triggers OAuth; grant the 'Workers Observability' and 'Workers Bindings' scopes
Flow
  1. Tail recent Worker logs filtered by error
    Tail logs for Worker 'api-edge' in the last 15 minutes. Filter to status >= 500. Group by the first 100 chars of the error message.
    → Top error templates with counts and timestamps
  2. List recent deployments
    List the last 5 deployments of 'api-edge'. Show deploy time, author, and the version hash.
    → Deploy timeline — correlate with error onset
  3. Roll back if needed
    The error spike starts after the deploy at 14:22. Roll 'api-edge' back to the previous version. Ask me before confirming.
    → Confirmation prompt before destructive action

Outcome: A restored production Worker, with a clear 'deploy X caused errors Y' postmortem note.

Pitfalls
  • Log tail is real-time only; can miss a burst that already passed — For historical windows, use the Logpush or Analytics Engine MCP tools instead of tail
  • Rollback doesn't migrate D1/KV state — If the bad deploy ran migrations, rolling the Worker alone isn't enough — you may need a D1 restore too
Combine with: github · sentry
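The grouping in step 1 can be sketched in plain Python. This is a minimal sketch that assumes the tail output arrives as a list of plain-text error lines; the real MCP tool returns structured log events, so adapt the field access accordingly:

```python
from collections import Counter

def group_errors(log_lines, prefix_len=100):
    """Group error messages by their first `prefix_len` characters.

    Approximates the 'group by the first 100 chars' step: lines sharing
    a prefix usually come from the same error template.
    """
    counts = Counter(line[:prefix_len] for line in log_lines)
    return counts.most_common()  # [(template, count), ...] most frequent first

logs = [
    "TypeError: Cannot read properties of undefined (reading 'id') at handler",
    "TypeError: Cannot read properties of undefined (reading 'id') at handler",
    "Error: D1_ERROR: no such table: sessions",
]
for template, n in group_errors(logs):
    print(n, template)
```

Sorting by count first makes the dominant error template obvious, which is what you correlate against the deploy timeline in step 2.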

Run ad-hoc analytics against a D1 database

👤 Developers using D1 for app data · ⏱ ~10 min · beginner

When to use: You want signup-conversion or usage stats from your D1, without writing a dashboard.

Flow
  1. Find the right database and list schema
    List my D1 databases. For the one called 'prod-app', show all tables and their columns.
    → Database inventory plus schemas
  2. Run the analytics query
    In D1 'prod-app', count users who signed up in the last 30 days grouped by week. Show only users who have at least one event in the events table.
    → Per-week counts, valid SQL
  3. Iterate
    Break it down further by signup source. Which source has the best 7-day activation rate?
    → Per-source comparison with rates

Outcome: Decision-ready numbers with the SQL shown.

Pitfalls
  • D1 has per-query row and execution-time limits — For large aggregations, pre-aggregate into a summary table on a schedule instead of scanning raw events each time
Combine with: notion
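Step 2's query can be prototyped locally, since D1 speaks SQLite SQL. A minimal sketch; the users/events tables and their columns (created_at, user_id) are assumptions for illustration, not the real 'prod-app' schema:

```python
import sqlite3

# In-memory stand-in for a D1 database (D1 is SQLite-compatible).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, created_at TEXT, source TEXT);
CREATE TABLE events (user_id INTEGER, ts TEXT);
INSERT INTO users VALUES
  (1, date('now','-20 days'), 'ads'),
  (2, date('now','-10 days'), 'organic'),
  (3, date('now','-40 days'), 'ads');
INSERT INTO events VALUES
  (1, date('now','-19 days')),
  (3, date('now','-39 days'));
""")

# Signups in the last 30 days, bucketed by week, only users with >= 1 event.
rows = con.execute("""
SELECT strftime('%Y-%W', u.created_at) AS week,
       COUNT(DISTINCT u.id)            AS signups
FROM users u
JOIN events e ON e.user_id = u.id
WHERE u.created_at >= date('now', '-30 days')
GROUP BY week
ORDER BY week
""").fetchall()
print(rows)
```

Here only user 1 survives both filters (user 2 has no events, user 3 is too old), so the query returns a single week with one signup. The same SQL shape is what you would expect the agent to run via d1_query.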

Audit and clean up a bloated KV namespace

👤 Engineers whose Workers cache has drifted · ⏱ ~20 min · intermediate

When to use: Your KV bill is up; you suspect stale keys or misconfigured TTLs.

Flow
  1. Survey the namespace
    For KV namespace 'session-cache', list the first 1000 keys. Sample 10 values and tell me their structure.
    → Key pattern distribution, sample value shapes
  2. Identify stale entries
    For keys matching session:*, how many haven't been accessed in >30 days? (Use metadata if present; otherwise sample and check timestamps in values.)
    → Stale key estimate plus criteria used
  3. Delete safely
    Delete keys matching session:expired:* in batches of 100. Show me the first batch before proceeding.
    → Batch preview before any deletion

Outcome: A cleaner KV namespace with lower storage cost.

Pitfalls
  • KV is eventually consistent; deletes can briefly appear to un-delete at edge POPs — Don't rely on immediate consistency after a bulk delete; verify state after a minute
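The batch-with-preview pattern in step 3 can be sketched as follows. `delete_fn` is a stand-in for whatever delete call your client exposes (e.g. the kv_delete tool), and `confirm_first_batch` models the 'show me the first batch before proceeding' gate:

```python
def batches(keys, size=100):
    """Yield keys in fixed-size batches."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

def delete_in_batches(keys, delete_fn, confirm_first_batch, size=100):
    """Delete keys batch by batch; preview the first batch before touching anything."""
    deleted = 0
    for i, batch in enumerate(batches(keys, size)):
        if i == 0 and not confirm_first_batch(batch):
            return 0  # user rejected the preview: nothing was deleted
        delete_fn(batch)
        deleted += len(batch)
    return deleted

# Demo with a fake backend: 'deleting' just records the keys.
keys = [f"session:expired:{n}" for n in range(250)]
removed = []
n = delete_in_batches(keys, removed.extend,
                      lambda b: print("preview:", b[:3], "...") or True)
print(n)  # 250
```

The important design choice is that the confirmation gate runs before the first mutation, so a rejected preview leaves the namespace untouched.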

Review DNS records before a domain migration

👤 Ops engineers migrating domains between providers · ⏱ ~15 min · beginner

When to use: You're about to change nameservers and you want to inventory every record so you don't drop MX, DMARC, or a forgotten subdomain.

Flow
  1. Dump every record
    List all DNS records for zone example.com. Group by type. Include priorities for MX and weights for SRV.
    → Complete record inventory
  2. Flag critical records
    Highlight any that would break email (MX, SPF in TXT, DKIM, DMARC), plus any A/AAAA/CNAME pointing at third-party services (Stripe, HubSpot, status page).
    → Critical-record list with rationale
  3. Produce a migration checklist
    Turn this into a checklist I can run on the new provider — each record with its destination, TTL, and 'test after migration' step.
    → Copy-paste runbook

Outcome: A migration-day runbook that leaves no record behind.

Pitfalls
  • Forgotten DKIM records break email silently 24h later — Specifically list every <selector>._domainkey record — they're the easiest to miss
Combine with: filesystem
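Step 2's critical-record triage is mechanical enough to sketch. The record shape below (dicts with name/type/content keys, roughly what the Cloudflare DNS API returns) is an assumption for illustration:

```python
def flag_critical(records):
    """Split DNS records into email-critical vs. the rest.

    A record is email-critical if it is MX, a TXT record carrying SPF
    (v=spf1), a DKIM record (<selector>._domainkey.*), or DMARC (_dmarc.*).
    """
    critical, rest = [], []
    for r in records:
        name, rtype, content = r["name"], r["type"], r.get("content", "")
        is_email = (
            rtype == "MX"
            or (rtype == "TXT" and content.startswith("v=spf1"))
            or "._domainkey" in name
            or name.startswith("_dmarc")
        )
        (critical if is_email else rest).append(r)
    return critical, rest

records = [
    {"name": "example.com", "type": "MX", "content": "10 mail.example.com"},
    {"name": "example.com", "type": "TXT", "content": "v=spf1 include:_spf.google.com ~all"},
    {"name": "s1._domainkey.example.com", "type": "TXT", "content": "v=DKIM1; k=rsa; p=..."},
    {"name": "www.example.com", "type": "CNAME", "content": "example.com"},
]
critical, rest = flag_critical(records)
print(len(critical), len(rest))  # 3 1
```

Anything in `critical` goes at the top of the migration checklist with its own 'test after migration' step.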

Check Cloudflare Radar for an internet incident affecting users

👤 Incident responders, support leads · ⏱ ~10 min · intermediate

When to use: Users are reporting your site is down in Brazil. Could be you, could be the internet.

Flow
  1. Query Radar for the country
    Cloudflare Radar: is there any notable internet disruption in Brazil in the last 6 hours? Include BGP anomalies, ISP outages, attack traffic.
    → Known events list or 'nothing anomalous'
  2. Cross-check with your traffic
    For my zone example.com, traffic from BR in the last 6 hours — volume, HTTP status breakdown, top user agents.
    → BR-specific traffic profile
  3. Conclude
    Based on Radar + my traffic, is this a general Brazil-internet issue or specific to my site?
    → Clear verdict with the supporting evidence

Outcome: A defensible answer, either 'not our fault, it's an <ISP X> issue' or 'it is our fault, and here's what broke'.

Pitfalls
  • Radar data has ~1h lag — For ultra-fresh incidents, pair with your own RUM data
Combine with: sentry

Combinations

Pair with other MCPs for 10x leverage

cloudflare + github

Correlate a Workers rollback with the GitHub PR that introduced the regression

Worker 'api-edge' is failing since deploy at 14:22. Find the GitHub PR whose merge corresponds to that deploy and summarize its changes.
cloudflare + sentry

Sentry reports Worker-origin errors; Cloudflare MCP pulls the Worker-side logs for the same requestId

Sentry issue EDGE-441 has CF-Ray 8abc123. Tail Worker 'api-edge' for that ray and show the matching log line.
cloudflare + filesystem

Edit Worker source locally, deploy via wrangler, verify via Cloudflare MCP logs

Fix the bug in src/index.ts at line 44, deploy, then tail 'api-edge' logs to confirm no more 500s.

Tools

What this MCP exposes

Tool | Inputs | When to call | Cost
workers_tail | script_name: str, filter?: object | Real-time Worker log tailing | free (within plan limits)
workers_list_deployments | script_name: str | Review recent versions of a Worker | free
workers_rollback | script_name: str, version_id: str | Revert to a prior version — gated, destructive | free
d1_list_databases | none | Inventory your D1 databases | free
d1_query | database_id: str, sql: str, params?: [] | Run SELECT or mutating SQL — writes require explicit prompt confirmation | D1 pricing per rows read/written
kv_list_keys | namespace_id: str, prefix?: str, limit?: int, cursor?: str | Enumerate keys for audit | KV read pricing
kv_get_value / kv_put_value / kv_delete | namespace_id, key, value?, ttl? | Read/write/delete specific keys | KV op pricing
r2_list_buckets / r2_list_objects | bucket?, prefix? | R2 inventory | R2 read ops
r2_get_object / r2_put_object | bucket, key, body? | Read/write R2 objects | R2 op pricing
dns_list_records | zone_id: str | Zone inventory | free
dns_create_record / update_record / delete_record | zone_id, record params | Zone mutations — gated | free
radar_get_http_timeseries / radar_get_attacks | timeframe, region filters | Global internet health context | free
analytics_engine_query | sql: str | Custom Workers Analytics Engine queries | analytics engine read ops

Cost & Limits

What this costs to run

API quota
Per-plan Workers/D1/KV/R2 limits apply; MCP calls count against your regular usage
Tokens per call
200-2000 tokens typical; log tails can be very large — always filter
Monetary
MCP is free; your Cloudflare services bill as usual
Tip
D1 and KV bill by row-read and op-count. Bulk list/scan can be surprisingly expensive — paginate with modest page sizes and stop early.
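The 'paginate with modest page sizes and stop early' advice can be sketched as a cursor loop. `list_page` is a hypothetical stand-in for a paginated listing tool such as kv_list_keys, returning a page of keys plus a cursor for the next page:

```python
def scan_keys(list_page, page_size=100, max_keys=1000):
    """Scan a cursor-paginated listing, stopping early at max_keys.

    `list_page(cursor, limit)` must return (keys, next_cursor), with
    next_cursor=None on the last page. Stopping early caps billed reads.
    """
    keys, cursor = [], None
    while len(keys) < max_keys:
        page, cursor = list_page(cursor, page_size)
        keys.extend(page)
        if cursor is None:  # reached the end of the listing
            break
    return keys[:max_keys]

# Fake backend with 350 keys to demonstrate the early stop.
ALL = [f"key:{i}" for i in range(350)]
def fake_list(cursor, limit):
    start = cursor or 0
    end = min(start + limit, len(ALL))
    return ALL[start:end], (end if end < len(ALL) else None)

print(len(scan_keys(fake_list, page_size=100, max_keys=250)))  # 250
```

With max_keys=250 the loop issues three page reads instead of scanning all 350 keys; against a real KV namespace that difference is what shows up on the bill.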

Security

Permissions, secrets, blast radius

Minimum scopes: Workers:Read · D1:Read · KV:Read
Credential storage: OAuth tokens managed by your MCP client; no long-lived API tokens in env
Data egress: All calls to Cloudflare API; OAuth flow goes through dash.cloudflare.com
Never grant: Account:Admin (unless absolutely needed) · Zone:DNS:Edit on live zones without a staging test first

Troubleshooting

Common errors and fixes

OAuth flow didn't complete

Your MCP client may not support OAuth redirects. Check its docs: Claude Desktop, Cursor, and other clients each handle the OAuth flow differently. Try re-connecting the remote MCP from the client UI.

Workers tail disconnects after a minute

Tail sessions are time-bounded. Restart the tail, or for longer windows use Logpush and query via Analytics Engine instead.

D1 query returns 'Too many rows read'

D1 caps rows-scanned per query by plan. Add a WHERE clause that uses an index, or paginate with LIMIT.

permission denied on a DNS tool

The OAuth scope for Zone:DNS:Edit wasn't granted. Reconnect the MCP and approve the additional scope.

Alternatives

Cloudflare vs others

Alternative | When to use it instead | Tradeoff
AWS MCP (awslabs) | You're on AWS, not Cloudflare | Different cloud surface; not a drop-in
Vercel MCP | Your deploy target is Vercel (edge functions, KV, blobs) | Similar remote MCP model; narrower feature set
wrangler CLI directly via shell | You want full wrangler power (config edits, secrets), not just the MCP surface | No agent ergonomics; wider blast radius if scripted wrong

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues

🔍 Browse all 400+ MCP servers and Skills