● Community · yfe404 · ⚡ Instant start

web-scraper

Author: yfe404 · yfe404/web-scraper

Phased reconnaissance scraper that prefers APIs > sitemaps > HTML, picks the right Apify template (Cheerio vs Playwright), and only escalates anti-detection when signals warrant.

Intelligent web scraping via Claude Code. Runs a phased reconnaissance (Phases 0–5) that tries the cheapest extraction path first: public APIs, sitemap feeds, structured data. Only when those fail does it consider browser automation, and only when protection signals appear does it layer on stealth. Built around TypeScript-first Apify Actor development, with two templates (Cheerio for static sites, Playwright for JS-heavy ones).
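The escalation ladder described above can be sketched as a small decision function. This is an illustrative sketch only; the signal and path names are hypothetical, not the skill's actual API:

```typescript
// Recon signals the phased pass might collect for a target site.
// These names are illustrative, not the skill's real output schema.
interface ReconSignals {
  hasPublicApi: boolean;      // documented or discovered JSON endpoints
  hasSitemap: boolean;        // sitemap.xml / feeds listing target URLs
  hasStructuredData: boolean; // JSON-LD / microdata embedded in the HTML
  jsRendered: boolean;        // content appears only after client-side JS
  botProtection: boolean;     // 403s, challenge pages, fingerprinting hints
}

type ExtractionPath =
  | "api"
  | "sitemap+cheerio"
  | "structured-data"
  | "cheerio"
  | "playwright"
  | "playwright+stealth";

// Cheapest viable path first; escalate only when signals force it.
function pickExtractionPath(s: ReconSignals): ExtractionPath {
  if (s.hasPublicApi) return "api";
  if (s.hasSitemap && !s.jsRendered) return "sitemap+cheerio";
  if (s.hasStructuredData && !s.jsRendered) return "structured-data";
  if (!s.jsRendered) return "cheerio";
  return s.botProtection ? "playwright+stealth" : "playwright";
}
```

Note the ordering: browser automation is never reached while any static path is viable, and stealth is layered on only when protection signals are present.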

Why use it

Key features

Live demo

In action


Installation

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "web-scraper-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/yfe404/web-scraper",
        "~/.claude/skills/web-scraper"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. After saving, restart the app.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "web-scraper-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/yfe404/web-scraper",
        "~/.claude/skills/web-scraper"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project-level config takes precedence over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "web-scraper-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/yfe404/web-scraper",
        "~/.claude/skills/web-scraper"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar and choose "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "web-scraper-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/yfe404/web-scraper",
        "~/.claude/skills/web-scraper"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "web-scraper-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/yfe404/web-scraper",
        "~/.claude/skills/web-scraper"
      ]
    }
  ]
}

Continue uses an array of server objects instead of a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "web-scraper-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/yfe404/web-scraper",
          "~/.claude/skills/web-scraper"
        ]
      }
    }
  }
}

Add under context_servers. Zed hot-reloads on save.

claude mcp add web-scraper-skill -- git clone https://github.com/yfe404/web-scraper ~/.claude/skills/web-scraper

One-liner. Verify with claude mcp list; remove with claude mcp remove.

Use cases

Practical usage: web-scraper

Scrape a static listing site into a structured dataset

👤 Data engineers pulling public data (directories, price lists, public records) · ⏱ ~45 min · intermediate

When to use: You need a dataset from a public site that doesn't have an API.

Prerequisites
  • Skill installed — git clone https://github.com/yfe404/web-scraper ~/.claude/skills/web-scraper
  • Node 20 for Apify Actors — nvm install 20
Flow
  1. Let the skill do recon
    Use web-scraper. Target: https://example.com/listings. I want name + URL + category. Recon first — tell me the cheapest extraction path.
    → Skill reports: 'sitemap.xml available, use Cheerio'
  2. Scaffold the Apify Actor
    Scaffold a TypeScript Apify Cheerio actor for that extraction.
    → Actor tree + main.ts ready to run
  3. Run and iterate
    Run locally on 10 pages; tighten selectors if needed.
    → Clean JSON output

Outcome: An Apify Actor you can deploy for scheduled scrapes.

Pitfalls
  • Jumping to Playwright when Cheerio would do. Trust the recon; headful browsers can 10× the cost unnecessarily.
Combine with: apify · filesystem
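When recon reports "sitemap.xml available, use Cheerio", the first job is turning the sitemap into a URL list and the last is validating the extracted records. A minimal, dependency-free sketch of both ends (function names are hypothetical; a real actor would use Cheerio's parsed DOM rather than a regex):

```typescript
// Pull <loc> URLs out of a sitemap.xml body. A regex is enough for the
// well-formed sitemaps this path targets; real actors would use an XML parser.
function sitemapUrls(xml: string): string[] {
  const urls: string[] = [];
  const re = /<loc>\s*([^<\s][^<]*?)\s*<\/loc>/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(xml)) !== null) urls.push(m[1]);
  return urls;
}

// The record shape requested in step 1 of the flow: name + URL + category.
interface Listing {
  name: string;
  url: string;
  category: string;
}

// Keep only records where every requested field actually extracted —
// the kind of check you tighten during the "run and iterate" step.
function cleanListings(rows: Partial<Listing>[]): Listing[] {
  return rows.filter(
    (r): r is Listing => Boolean(r.name && r.url && r.category)
  );
}
```

If cleanListings drops a large fraction of rows on the 10-page test run, that is the signal to tighten selectors before scaling up.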

Discover and use a site's undocumented JSON API instead of HTML

👤 Scraper devs who want reliability · ⏱ ~30 min · intermediate

When to use: The page is a SPA and the HTML is messy, but the XHR calls return clean JSON.

Flow
  1. Run API-discovery phase
    Use web-scraper phase 1 — API discovery on https://example.com/app. Enumerate XHR/fetch endpoints.
    → List of endpoints with observed payloads
  2. Build the JSON-based actor
    Generate an actor that hits those endpoints directly with auth as needed.
    → Lightweight fetch-based actor

Outcome: A far more stable scrape than HTML parsing.

Pitfalls
  • Private/session-auth APIs break when the token rotates. Plan token-refresh logic or fall back to the browser flow.
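The "lightweight fetch-based actor" in step 2 mostly reduces to paging through the discovered endpoint. A hedged sketch, with the fetcher injected so auth details stay out of the loop (the page shape is illustrative; the real one depends on what recon finds):

```typescript
// One page of a discovered JSON endpoint. items/hasMore are a common
// shape but not guaranteed — adapt to the observed payloads from recon.
interface Page<T> {
  items: T[];
  hasMore: boolean;
}

// Drain a paginated endpoint. The fetcher is injected so the same loop
// works whether auth is a header, a cookie, or nothing at all.
async function fetchAllPages<T>(
  fetchPage: (page: number) => Promise<Page<T>>,
  maxPages = 100 // safety cap so a lying hasMore flag can't loop forever
): Promise<T[]> {
  const all: T[] = [];
  for (let page = 0; page < maxPages; page++) {
    const { items, hasMore } = await fetchPage(page);
    all.push(...items);
    if (!hasMore) break;
  }
  return all;
}
```

In a real actor, fetchPage would wrap fetch() with whatever token recon observed; when that token rotates (the pitfall above), only the fetcher changes, not the loop.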

Combinations

Combine with other MCPs for 10× the power

web-scraper-skill + apify

Deploy the scaffolded actor to Apify for scheduled runs

Deploy this actor to my Apify account and schedule it daily.
web-scraper-skill + filesystem

Keep the actor code in-repo alongside the consuming app

Scaffold into scrapers/ and commit with the main project.

Tools

What this MCP provides

Tool | Input | When to call | Cost
recon | url | Always first | 0
scaffold_actor | template (cheerio or playwright), target | After recon picks the template | 0
record_session | url | Debugging dynamic sites | 0
run_local | actor path, limit | Iteration phase | 0

Costs and limits

Operating costs

API quotas: Apify has its own compute + proxy quotas
Tokens per call: Moderate; scaffold and iteration loops
Price: Free skill; Apify costs are separate
Tip: Prefer Cheerio; it's one of the cheapest run profiles on Apify

Security

Permissions, secrets, and scope of impact

Minimum scope: Apify API token with actor:read + actor:write
Credential storage: APIFY_TOKEN in env
Data destinations: Whatever sites you target, plus the Apify platform

Troubleshooting

Common errors and how to fix them

Cheerio returns empty selectors

Content is JS-rendered; rerun recon and expect the Playwright template.

Playwright times out

Bump the navigation timeout, and consider waiting for a specific selector instead of networkidle.
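The fix above can be sketched as a small navigation helper. PageLike mirrors just the two Playwright Page methods involved so the sketch stays dependency-free; with a real playwright Page the calls are identical (the helper name and default timeout are assumptions, not the skill's API):

```typescript
// The two Playwright Page methods this pattern touches, duck-typed so
// the sketch has no dependency on the playwright package itself.
interface PageLike {
  goto(
    url: string,
    opts: { waitUntil: "domcontentloaded"; timeout: number }
  ): Promise<unknown>;
  waitForSelector(
    selector: string,
    opts: { timeout: number }
  ): Promise<unknown>;
}

// Navigate, then wait for the one selector that proves the data rendered,
// instead of waiting for "networkidle" (which long-polling or analytics
// beacons can stall past any timeout).
async function gotoAndWait(
  page: PageLike,
  url: string,
  readySelector: string,
  timeoutMs = 60_000 // bumped from Playwright's 30s default
): Promise<void> {
  await page.goto(url, { waitUntil: "domcontentloaded", timeout: timeoutMs });
  await page.waitForSelector(readySelector, { timeout: timeoutMs });
}
```

Waiting on a concrete selector ties the timeout to the thing you actually scrape, so slow third-party requests can no longer fail an otherwise-ready page.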

403 / bot-block page

Stop and reconsider. This is the legitimate signal to re-check the site's ToS, not a cue to escalate stealth.

Alternatives

How web-scraper compares

Alternative | When to use it instead | Trade-off
Direct Apify console | You already know which template you need | No recon phase
firecrawl | You just need a page as markdown, not structured extraction | No actor scaffolding

Other

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse 400+ MCP servers and Skills