● Official brightdata server · 🔑 needs your API key

Bright Data

by brightdata · brightdata/brightdata-mcp

Search, scrape, and unblock the web at scale — residential proxies + SERP API + browser automation in one MCP.

Bright Data's official MCP bundles three capabilities: live SERP search results across Google/Bing/DuckDuckGo, scraping of single or bulk URLs through their Web Unlocker and proxy network, and a fleet of pre-built structured scrapers for specific targets (Amazon, LinkedIn public, Instagram public, Zillow, etc.). Credits are metered; use sampling and caching.

Why use it

Key features


Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "brightdata": {
      "command": "npx",
      "args": [
        "-y",
        "@brightdata/mcp"
      ],
      "env": {
        "BRIGHTDATA_API_TOKEN": "<your-token>"
      }
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "brightdata": {
      "command": "npx",
      "args": [
        "-y",
        "@brightdata/mcp"
      ],
      "env": {
        "BRIGHTDATA_API_TOKEN": "<your-token>"
      }
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "brightdata": {
      "command": "npx",
      "args": [
        "-y",
        "@brightdata/mcp"
      ],
      "env": {
        "BRIGHTDATA_API_TOKEN": "<your-token>"
      }
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "brightdata": {
      "command": "npx",
      "args": [
        "-y",
        "@brightdata/mcp"
      ],
      "env": {
        "BRIGHTDATA_API_TOKEN": "<your-token>"
      }
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "brightdata",
      "command": "npx",
      "args": [
        "-y",
        "@brightdata/mcp"
      ],
      "env": {
        "BRIGHTDATA_API_TOKEN": "<your-token>"
      }
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "brightdata": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "@brightdata/mcp"
        ],
        "env": {
          "BRIGHTDATA_API_TOKEN": "<your-token>"
        }
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add brightdata -e BRIGHTDATA_API_TOKEN=<your-token> -- npx -y @brightdata/mcp

One-liner. Verify with claude mcp list; remove with claude mcp remove brightdata.

Use Cases

Real-world ways to use Bright Data

Track your keyword rankings on Google across locations

👤 SEO teams · ⏱ ~20 min · intermediate

When to use: You want daily rank tracking for 50 keywords in US/UK/DE without running your own proxies.

Prerequisites
  • Bright Data API token — brightdata.com → dashboard → API tokens
  • Budget: ~$0.001–$0.003 per SERP query — Credit balance on Bright Data account
Flow
  1. Run the SERP for each kw/country
    For each keyword in [list], run a Google SERP search from country=us. Capture top 10 organic results (url, title, position).
    → Per-keyword ranked list
  2. Locate our domain
    For each result set, find where mydomain.com appears (or 'not in top 10'). Output kw → position.
    → Rank table
  3. Diff vs yesterday
    Compare to yesterday's JSON [paste]. Highlight moves > 3 positions.
    → Daily movers report

Outcome: A daily rank-tracking workflow at ~$0.15/day for 50 keywords, no proxy ops.

Pitfalls
  • Each country/device combo counts as a separate query — Only track what you need; 50 kw × 3 countries × 7 days is 1050 queries/week
Combine with: postgres · notion
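The diff step above can be sketched in a few lines of Python. The `{keyword: position}` shape is a hypothetical capture format for illustration, not something the MCP emits directly:

```python
# Compare today's rank capture against yesterday's and surface big movers.
# Positions are 1-based; a keyword absent from a capture means "not in top 10".

def daily_movers(today: dict, yesterday: dict, threshold: int = 3) -> list:
    movers = []
    for kw, pos in today.items():
        prev = yesterday.get(kw)
        if prev is None:
            movers.append({"keyword": kw, "change": "new", "position": pos})
        elif abs(pos - prev) > threshold:
            # positive delta = moved up the page
            movers.append({"keyword": kw, "from": prev, "to": pos, "delta": prev - pos})
    return movers
```

Feed it yesterday's saved JSON and today's capture; anything past the threshold lands in the daily movers report.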

Fetch a page that blocks datacenter IPs

👤 Devs whose scraping target moved behind Cloudflare Turnstile · ⏱ ~15 min · intermediate

When to use: fetch / Firecrawl get 403 or an interstitial; you need residential IPs.

Flow
  1. Try once with unlocker
    Fetch <url> via Web Unlocker. Return the rendered HTML + HTTP status.
    → 200 + real HTML
  2. Extract what you need
    From that HTML, extract [list the fields]. Return as JSON.
    → Structured data
  3. Respect the site
    If the page says 'robots.txt disallow' or a clear anti-scrape notice, abort and tell me.
    → Consent-aware fallback

Outcome: The data you need without maintaining a proxy pool.

Pitfalls
  • Web Unlocker can still fail on hardcore targets (banking, SaaS login pages) — These are intentionally private; pick an official API or a different approach
  • Costs escalate quickly on large crawls — Firecrawl or fetch is cheaper for unprotected sites — only pay Bright Data when you hit a block
Combine with: firecrawl
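The cheap-first, unblock-fallback pattern above can be sketched like this; `fetch_cheap` and `fetch_unlocker` are hypothetical stand-ins for your Firecrawl and Bright Data tool calls, each returning `(status, html)`:

```python
# Try the cheap fetcher first; only pay for the unlocker on an actual block.
# "Just a moment" is the title of Cloudflare's challenge interstitial.

BLOCK_STATUSES = {403, 429, 503}

def fetch_with_fallback(url, fetch_cheap, fetch_unlocker):
    status, html = fetch_cheap(url)
    blocked = status in BLOCK_STATUSES or "just a moment" in html.lower()
    if not blocked:
        return {"source": "cheap", "status": status, "html": html}
    status, html = fetch_unlocker(url)  # metered Bright Data call
    return {"source": "unlocker", "status": status, "html": html}
```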

Pull a structured Amazon product dataset via prebuilt scraper

👤 E-commerce analysts · ⏱ ~20 min · intermediate

When to use: You want 500 Amazon ASINs with clean title/price/rating/bsr fields, not raw HTML.

Prerequisites
  • List of ASINs or category URLs — CSV or text input
Flow
  1. Kick off the prebuilt Amazon scraper
    Run the Bright Data Amazon product scraper for ASINs [list]. Return a job id.
    → Job id issued
  2. Poll until ready
    Poll the job. When done, fetch the dataset.
    → Full dataset delivered
  3. Cache to avoid re-runs
    Save the dataset to /data/amazon-<date>.jsonl. Flag any ASIN that errored.
    → Persisted dataset + error list

Outcome: A clean, re-runnable Amazon product dataset at ~$X/1000 products (see current pricing).

Pitfalls
  • The legality of public LinkedIn/Instagram scrapers varies by region — Stay within public profile data, never bypass authentication, and know your jurisdiction
Combine with: postgres · filesystem
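Step 2's polling loop can be sketched with exponential backoff; `get_status` and `fetch_dataset` are hypothetical wrappers around the MCP's job-status and dataset-download calls:

```python
import time

def poll_job(get_status, fetch_dataset, timeout=600.0, base_delay=2.0):
    """Poll until the scrape job finishes, backing off between checks."""
    delay, waited = base_delay, 0.0
    while waited < timeout:
        state = get_status()
        if state == "done":
            return fetch_dataset()
        if state == "failed":
            raise RuntimeError("scrape job failed")
        time.sleep(delay)
        waited += delay
        delay = min(delay * 2, 60)  # cap the backoff at one minute
    raise TimeoutError("job still running after %.0fs" % timeout)
```

Backoff keeps you from burning request quota on status checks while long bulk jobs run.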

Daily news search for a brand across SERP

👤 PR / comms · ⏱ ~15 min · beginner

When to use: You want a daily digest of what's being said about your brand in news search.

Flow
  1. Run a Google News SERP
    Google News search for '<brand>' last 24h, country=us. Return top 20 results with source, title, url, snippet.
    → News SERP
  2. Classify sentiment from snippets
    Score each result as positive/neutral/negative based on title+snippet. Flag negative items for human review.
    → Scored list
  3. Deliver the digest
    Format as a markdown digest: counts by sentiment, negative items with links, top positive items.
    → Digest ready

Outcome: A focused PR digest without scraping individual news sites.

Pitfalls
  • Sentiment from headlines alone is noisy — Only flag as negative if both title and snippet are clearly negative; human-review the flags
Combine with: notion
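The digest step can be sketched as a formatter over the scored list; the `{'title', 'url', 'sentiment'}` item shape is a hypothetical output of step 2:

```python
def format_digest(items):
    """Render scored news items as a markdown digest."""
    counts = {"positive": 0, "neutral": 0, "negative": 0}
    for item in items:
        counts[item["sentiment"]] += 1
    lines = [
        "## Brand digest (%d items)" % len(items),
        "Positive: %(positive)d · Neutral: %(neutral)d · Negative: %(negative)d" % counts,
    ]
    negatives = [i for i in items if i["sentiment"] == "negative"]
    if negatives:
        lines.append("### Negative items (needs review)")
        lines.extend("- [%s](%s)" % (i["title"], i["url"]) for i in negatives)
    return "\n".join(lines)
```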

Combinations

Pair with other MCPs for 10x leverage

brightdata + postgres

Store daily rank-tracking rows for trend analysis

Run SERP for kw list, INSERT into keyword_ranks table with today's date.

brightdata + firecrawl

Cheap-first, unblock-fallback crawling

Try Firecrawl first; if 403/blocked, fall back to Bright Data Unlocker for that URL only.

brightdata + notion

Weekly PR digest posted to Notion

Run brand SERP for the last 7 days, create a Notion page with the digest.

Tools

What this MCP exposes

Tool · Inputs · When to call · Cost
search_engine · engine: 'google'|'bing'|'duckduckgo', query, country?, lang?, device? · SERP / rank-tracking workflows · ~$0.001–0.003 per query
scrape_as_markdown · url · Fetch a single page through the unlocker as clean markdown · 1 credit per page
scrape_as_html · url · You need raw HTML to parse yourself · 1 credit per page
web_data_<target> · urls: str[] or params · Prebuilt structured scraper (amazon, linkedin, zillow, etc.) · per-scraper pricing
scraping_browser_* · url, actions · Multi-step / JS-heavy flows · browser-session pricing

Cost & Limits

What this costs to run

API quota
Bounded by account credits; concurrent requests per plan
Tokens per call
SERP: 500–2000 tokens. Scraped page: 1000–5000 tokens.
Monetary
Pay-as-you-go; typical SERP query $0.001–$0.003, Web Unlocker ~$3 per 1000 pages, prebuilt scrapers priced per 1000 records.
Tip
Cache aggressively — most data doesn't change hourly. Use cheaper fetch/Firecrawl for unprotected targets.
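A minimal TTL cache along the lines of that tip; `run_query` is a hypothetical stand-in for the search_engine tool call, and the 24-hour TTL is an assumption you should tune:

```python
import time

class SerpCache:
    """Memoize SERP results so repeats within the TTL cost zero credits."""

    def __init__(self, run_query, ttl_seconds=86_400):
        self.run_query = run_query
        self.ttl = ttl_seconds
        self._store = {}

    def search(self, engine, query, country="us"):
        key = (engine, query, country)
        hit = self._store.get(key)
        if hit is not None and time.time() - hit[0] < self.ttl:
            return hit[1]  # cache hit: no metered call
        result = self.run_query(engine, query, country)
        self._store[key] = (time.time(), result)
        return result
```

For daily rank tracking a 24-hour TTL means each keyword/country pair bills at most once per day no matter how often you re-run the workflow.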

Security

Permissions, secrets, blast radius

Minimum scopes: API token with zone access for the relevant products
Credential storage: BRIGHTDATA_API_TOKEN in env
Data egress: All requests through Bright Data's proxy network; they see target URLs and responses
Never grant: Admin-level account tokens for everyday scraping

Troubleshooting

Common errors and fixes

401 Invalid token

BRIGHTDATA_API_TOKEN missing/expired. Regenerate in dashboard.

Verify: curl -H "Authorization: Bearer $BRIGHTDATA_API_TOKEN" https://api.brightdata.com/zone/list
402 Insufficient credits

Top up account balance or reduce query volume; check dashboard for burn rate.

Scraping job SUCCEEDED but dataset empty

Wrong input schema for the prebuilt scraper. Read the scraper's doc page for required fields.

Target site still blocks despite unlocker

Some sites use advanced fingerprinting; switch to Scraping Browser with stealth, or abandon the target.

Alternatives

Bright Data vs others

Alternative · When to use it instead · Tradeoff
Firecrawl MCP · Unprotected sites, generic scraping · Fails on hostile targets
Apify MCP · You want a broader Actor marketplace and cheaper pricing for common targets · Proxy network quality varies per Actor
SerpAPI MCP · You only need SERP, not full scraping · No unlocker / prebuilt scrapers

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues

🔍 Browse all 400+ MCP servers and Skills