
web-scraper

by yfe404 · yfe404/web-scraper

Phased reconnaissance scraper that prefers APIs > sitemaps > HTML, picks the right Apify template (Cheerio vs Playwright), and only escalates anti-detection when signals warrant.

Intelligent web scraping via Claude Code. Runs a phased reconnaissance (Phases 0–5) that tries the cheapest extraction path first: public APIs, sitemap feeds, structured data. Only when those fail does it consider browser automation, and only when protection signals appear does it layer on stealth. Built around TypeScript-first Apify Actor development, with two templates (Cheerio for static pages, Playwright for JS-heavy ones).
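The ordering is the point: each phase only runs if the cheaper one before it came up empty. A rough sketch of that decision ladder in TypeScript (illustrative only; the heuristics and helper names below are placeholders, not the skill's actual recon code):

// Sketch of the recon ladder: try the cheapest extraction path first.
// Illustrative only; the heuristics below are crude stand-ins.

type Plan =
  | { method: "api"; endpoints: string[] }
  | { method: "sitemap"; url: string; template: "cheerio" }
  | { method: "html"; template: "cheerio" | "playwright" };

// Stub: real API discovery would enumerate the page's XHR/fetch endpoints.
async function discoverApiEndpoints(_target: string): Promise<string[]> {
  return [];
}

async function planExtraction(target: string): Promise<Plan> {
  // Phase: public or undocumented JSON APIs (most stable when they exist).
  const endpoints = await discoverApiEndpoints(target);
  if (endpoints.length > 0) return { method: "api", endpoints };

  // Phase: sitemap (a full URL inventory without crawling the site).
  const sitemapUrl = new URL("/sitemap.xml", target).toString();
  if ((await fetch(sitemapUrl, { method: "HEAD" })).ok) {
    return { method: "sitemap", url: sitemapUrl, template: "cheerio" };
  }

  // Phase: HTML. Cheerio if the content is already in the raw response,
  // Playwright only if the page needs JavaScript to produce it.
  const html = await (await fetch(target)).text();
  const looksServerRendered = html.length > 20_000 && !html.includes('id="root"></div>');
  return { method: "html", template: looksServerRendered ? "cheerio" : "playwright" };
}

Whatever replaces these stubs, the escalation order stays the same: API, then sitemap, then static HTML, then a browser.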


Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "web-scraper-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/yfe404/web-scraper",
        "~/.claude/skills/web-scraper"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "web-scraper-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/yfe404/web-scraper",
        "~/.claude/skills/web-scraper"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "web-scraper-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/yfe404/web-scraper",
        "~/.claude/skills/web-scraper"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "web-scraper-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/yfe404/web-scraper",
        "~/.claude/skills/web-scraper"
      ],
      "_inferred": true
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "web-scraper-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/yfe404/web-scraper",
        "~/.claude/skills/web-scraper"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "web-scraper-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/yfe404/web-scraper",
          "~/.claude/skills/web-scraper"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add web-scraper-skill -- git clone https://github.com/yfe404/web-scraper ~/.claude/skills/web-scraper

One-liner. Verify with claude mcp list; remove with claude mcp remove web-scraper-skill.

Use Cases

Real-world ways to use web-scraper

Scrape a static listing site into a structured dataset

👤 Data engineers pulling public data (directories, price lists, public records) ⏱ ~45 min · intermediate

When to use: You need a dataset from a public site that doesn't have an API.

Prerequisites
  • Skill installed — git clone https://github.com/yfe404/web-scraper ~/.claude/skills/web-scraper
  • Node 20 for Apify Actors — nvm install 20
Flow
  1. Let the skill do recon
    Use web-scraper. Target: https://example.com/listings. I want name + URL + category. Recon first — tell me the cheapest extraction path.
    → Skill reports: 'sitemap.xml available, use Cheerio'
  2. Scaffold the Apify Actor
    Scaffold a TypeScript Apify Cheerio actor for that extraction.
    → Actor tree + main.ts ready to run (a sketch follows this flow)
  3. Run and iterate
    Run locally on 10 pages; tighten selectors if needed.
    → Clean JSON output
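
A minimal sketch of what step 2's main.ts might look like, assuming the target URL from the prompt and placeholder selectors (the real scaffold generates a full Actor project around something like this):

// main.ts (sketch): Cheerio-based Apify Actor for the listings extraction.
// '.listing-item', '.listing-name', '.listing-category' are placeholder selectors.
import { Actor } from 'apify';
import { CheerioCrawler } from 'crawlee';

await Actor.init();

const crawler = new CheerioCrawler({
  maxRequestsPerCrawl: 10, // keep the first runs small while iterating
  async requestHandler({ request, $ }) {
    const base = request.loadedUrl ?? request.url;
    const items = $('.listing-item')
      .map((_, el) => ({
        name: $(el).find('.listing-name').text().trim(),
        url: new URL($(el).find('a').attr('href') ?? '', base).toString(),
        category: $(el).find('.listing-category').text().trim(),
      }))
      .get();
    await Actor.pushData(items);
  },
});

await crawler.run(['https://example.com/listings']);
await Actor.exit();

Step 3's "run locally on 10 pages" maps to the maxRequestsPerCrawl cap here.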

Outcome: An Apify Actor you can deploy for scheduled scrapes.

Pitfalls
  • Jumping to Playwright when Cheerio would do — trust the recon; full-browser runs cost roughly 10x more than they need to
Combine with: apify · filesystem

Discover and use a site's undocumented JSON API instead of HTML

👤 Scraper devs who want reliability ⏱ ~30 min · intermediate

When to use: The page is a SPA and HTML is gross, but the XHR calls are clean JSON.

Flow
  1. Run API-discovery phase
    Use web-scraper phase 1 — API discovery on https://example.com/app. Enumerate XHR/fetch endpoints.
    → List of endpoints with observed payloads
  2. Build the JSON-based actor
    Generate an actor that hits those endpoints directly with auth as needed.
    → Lightweight fetch-based actor (sketched below)
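
A sketch of what that fetch-based actor might reduce to (the endpoint path, paging scheme, and response shape are invented for illustration):

// main.ts (sketch): actor that reads a discovered JSON endpoint directly.
// The endpoint, query params, and field names are illustrative placeholders.
import { Actor } from 'apify';

await Actor.init();

const endpoint = 'https://example.com/api/v1/items'; // found during phase 1 recon
let page = 1;

while (true) {
  const res = await fetch(`${endpoint}?page=${page}`, {
    headers: { accept: 'application/json' },
  });
  if (!res.ok) throw new Error(`Endpoint returned ${res.status}`);

  const body = (await res.json()) as { items: Record<string, unknown>[]; hasMore: boolean };
  if (body.items.length === 0) break;

  await Actor.pushData(body.items);
  if (!body.hasMore) break;
  page += 1;
}

await Actor.exit();

Because it never parses HTML, a markup redesign can't break it; only a change to the API itself can, which is the stability this use case is after.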

Outcome: A far more stable scrape than HTML parsing.

Pitfalls
  • Private/session-auth APIs that break when token rotates — Plan token-refresh logic or fall back to browser flow

Combinations

Pair with other MCPs for 10x leverage

web-scraper-skill + apify

Deploy the scaffolded actor to Apify for scheduled runs

Deploy this actor to my Apify account and schedule it daily.
web-scraper-skill + filesystem

Keep the actor code in-repo alongside the consuming app

Scaffold into scrapers/ and commit with the main project.

Tools

What this MCP exposes

Tool · Inputs · When to call · Cost
recon · url · Always first · 0
scaffold_actor · template (cheerio|playwright), target · After recon picks template · 0
record_session · url · Debugging dynamic sites · 0
run_local · actor path, limit · Iteration phase · 0

Cost & Limits

What this costs to run

API quota: Apify has its own compute + proxy quotas
Tokens per call: Moderate — scaffold and iteration loops
Monetary: Free skill; Apify costs are separate
Tip: Prefer Cheerio — one of the cheapest run profiles on Apify

Security

Permissions, secrets, blast radius

Minimum scopes: Apify API token with actor:read + actor:write
Credential storage: APIFY_TOKEN in env
Data egress: Whatever sites you target + Apify platform

Troubleshooting

Common errors and fixes

Cheerio returns empty selectors

Content is JS-rendered — rerun recon and expect the Playwright template (a quick check follows)
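
A quick way to confirm that diagnosis before rerunning recon (a sketch; substitute your own URL and a string you can see in the rendered page):

// Is the text you expect anywhere in the raw HTML? If not, Cheerio never sees it.
const url = 'https://example.com/listings';   // your target (placeholder)
const expected = 'Some visible listing name'; // text visible in the browser

const html = await (await fetch(url)).text();
console.log(
  html.includes(expected)
    ? 'Found in raw HTML: Cheerio should work; check your selectors.'
    : 'Not in raw HTML: content is JS-rendered; expect the Playwright template.',
);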

Playwright times out

Bump the navigation timeout; consider waiting for a specific selector instead of networkidle (see the sketch below)
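
In a Crawlee/Playwright actor that might look roughly like this (a sketch; the selector and timeout values are placeholders):

// Sketch: raise the navigation timeout and wait for a concrete selector
// instead of relying on the 'networkidle' state.
import { Actor } from 'apify';
import { PlaywrightCrawler } from 'crawlee';

await Actor.init();

const crawler = new PlaywrightCrawler({
  navigationTimeoutSecs: 120, // generous ceiling for slow pages
  async requestHandler({ page, request }) {
    // Waiting on the element you actually need is more reliable than waiting
    // for the network to go idle on pages with long-polling or analytics.
    await page.waitForSelector('.listing-item', { timeout: 30_000 });
    const itemCount = await page.locator('.listing-item').count();
    await Actor.pushData({ url: request.loadedUrl, itemCount });
  },
});

await crawler.run(['https://example.com/listings']);
await Actor.exit();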

403 / bot-block page

Stop and reconsider. This is the legit signal to re-check ToS, not a cue to escalate stealth.

Alternatives

web-scraper vs others

Alternative · When to use it instead · Tradeoff
Direct Apify console · You already know which template you need · No recon phase
firecrawl · You just need markdown of a page, not structured extraction · No actor scaffolding

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues

🔍 Browse all 400+ MCP servers and Skills