● Official apify 🔑 Your own API key required

Apify

by apify · apify/apify-mcp-server

Tap 3000+ pre-built Actors on Apify to scrape Google, Amazon, LinkedIn, TikTok, Maps and more — no custom scraping code to maintain.

Apify's official MCP exposes the Apify Actor marketplace as callable tools. Instead of writing your own scraper for each site, you pick an existing battle-tested Actor, pass inputs, and stream back structured JSON. Best for niche targets (Google Maps listings, Amazon products, Twitter profiles) where a generic scraper would need constant maintenance.


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project config overrides the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  }
}

Same structure as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "apify",
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  ]
}

Continue uses an array of server objects instead of a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "apify": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "@apify/actors-mcp-server"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads on save.

claude mcp add apify -- npx -y @apify/actors-mcp-server

One-liner. Verify with claude mcp list; remove with claude mcp remove.

Use cases

Apify in practice

Scrape Google Maps listings for local-business lead generation

👤 Sales / SDR teams building territory lists · ⏱ ~15 min · beginner

When to use: You need 500 'coffee shops in Berlin' with address, phone, website, and rating — and you don't want to be blocked mid-run.

Prerequisites
  • Apify account + API token — console.apify.com → Settings → Integrations → API token
  • Enough platform credits on Apify for your run size — Free plan gives $5 credit/month; most Google Maps runs cost ~$1 per 1000 places
Steps
  1. Pick the right Actor for your target
    Find the best-maintained Apify Actor for scraping Google Maps places. Prefer one with >5 stars and recent updates.
    → Actor slug like compass/crawler-google-places with its input schema
  2. Run it with your query
    Run that Actor with searchStringsArray=['coffee shop Berlin'], maxCrawledPlacesPerSearch=500, language='en'. Wait for completion.
    → Run status SUCCEEDED with a dataset id
  3. Pull and clean the dataset
    Get the dataset items. Keep only name, address, phone, website, rating, reviewsCount. Drop places without a phone. Output as CSV.
    → CSV of 400–500 cleaned leads

Result: A de-duped lead list ready for CRM import, typically $1–3 in Apify credits.
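
The cleaning pass in step 3 boils down to a filter, a de-dupe, and a CSV dump. A minimal sketch in plain Python, assuming each dataset item is a dict with the field names listed above (actual keys vary per Actor — check with get-actor):

```python
import csv
import io

KEEP = ["name", "address", "phone", "website", "rating", "reviewsCount"]

def clean_places(items):
    """Keep only lead-relevant fields, drop entries without a phone,
    and de-duplicate on (name, address)."""
    seen = set()
    rows = []
    for item in items:
        if not item.get("phone"):
            continue  # no phone number -> not a usable sales lead
        key = (item.get("name"), item.get("address"))
        if key in seen:
            continue  # duplicate listing
        seen.add(key)
        rows.append({k: item.get(k, "") for k in KEEP})
    return rows

def to_csv(rows):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=KEEP)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Illustrative items: one duplicate, one entry without a phone.
items = [
    {"name": "Café A", "address": "Str. 1", "phone": "+49 30 1", "rating": 4.6},
    {"name": "Café A", "address": "Str. 1", "phone": "+49 30 1", "rating": 4.6},
    {"name": "Café B", "address": "Str. 2", "rating": 4.2},
]
rows = clean_places(items)
```

In practice you would feed this the items returned by get-dataset-items, paged if the run is large.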

Pitfalls
  • Running the wrong Actor — many copycats exist with worse reliability — Filter by usage count and last-updated in the Apify store; stick to the top 3 for a target
  • Massive datasets blow out your context window when returned inline — Ask Claude to page through items (limit+offset) or save to filesystem first, then summarize
Combine with: filesystem · postgres

Track Amazon product prices and stock status on a schedule

👤 E-commerce, affiliate marketers, competitive pricing teams · ⏱ ~20 min · intermediate

When to use: You want a daily price+stock snapshot for 200 ASINs without babysitting a scraper.

Prerequisites
  • List of ASINs or product URLs — CSV of URLs like https://www.amazon.com/dp/B0XXXXXX
Steps
  1. Call the Amazon Product Scraper Actor
    Run Actor junglee/amazon-crawler with urls=<my list>, maxReviews=0, scrapeProductDetails=true.
    → Run finishes with a dataset of products
  2. Normalize price and stock
    From the dataset, extract asin, title, price, currency, in_stock (bool), seller. Flag any asin where price dropped vs my last snapshot [paste].
    → Per-ASIN current vs prior comparison
  3. Schedule it daily
    Create a daily Apify schedule for this Actor with the same inputs. Name it 'amazon-price-tracker-<category>'.
    → Schedule created; next run time shown

Result: A recurring price/stock watch with ~$0.30/day cost for 200 ASINs.
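
The comparison in step 2 is a join on ASIN between today's dataset and the prior snapshot. A sketch, assuming each item carries numeric 'price' and an 'asin' key (field names are illustrative, not guaranteed by every Amazon Actor):

```python
def flag_price_drops(current, previous):
    """Return ASINs whose price dropped versus the prior snapshot.
    Both inputs are lists of dicts with 'asin' and numeric 'price'."""
    prior = {p["asin"]: p["price"] for p in previous}
    drops = []
    for item in current:
        old = prior.get(item["asin"])
        if old is not None and item["price"] < old:
            drops.append({"asin": item["asin"], "old": old, "new": item["price"]})
    return drops

# Illustrative snapshots (ASINs are made up):
previous = [{"asin": "B0AAAA", "price": 19.99}, {"asin": "B0BBBB", "price": 9.99}]
current = [{"asin": "B0AAAA", "price": 17.49}, {"asin": "B0BBBB", "price": 9.99}]
drops = flag_price_drops(current, previous)
```

Persisting each day's snapshot (e.g. via the postgres MCP) is what makes the diff possible the next morning.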

Pitfalls
  • Amazon aggressively throttles even with residential proxies — runs can partially fail — Enable Actor retries and accept that 5–10% of items may be missing; re-run failed ASINs in a small batch
Combine with: postgres · notion

Harvest recent posts from a public Twitter/X or TikTok profile

👤 Social-listening analysts, content researchers · ⏱ ~20 min · intermediate

When to use: You track a public figure or brand and want their last 30 days of posts as structured data for analysis.

Prerequisites
  • Profile URL(s) of the target account — Public profile link only — do not attempt private or logged-in scraping
Steps
  1. Pick a reputable Twitter/TikTok Actor
    Find the top Apify Actor for fetching public tweets from a handle. Show pricing per 1000 tweets.
    → Actor shortlist with price-per-1k numbers
  2. Run for each target
    Run it for handles [list] with maxTweets=300 and start_date=30 days ago.
    → Dataset with tweets + engagement counts
  3. Summarize what changed in tone/topics
    Cluster these posts into 5 topics and show engagement averages per topic. Call out any sharp rise in one topic.
    → Topic table + trend commentary

Result: A structured social-post dataset plus a 1-page topical trend summary.
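
Once each post carries a topic label (step 3), the per-topic engagement averages are a simple group-by. A sketch, assuming posts are dicts with hypothetical 'topic' and 'likes' fields:

```python
from collections import defaultdict

def engagement_by_topic(posts):
    """Average likes per topic label across a list of post dicts."""
    totals = defaultdict(lambda: [0, 0])  # topic -> [likes_sum, post_count]
    for p in posts:
        t = totals[p["topic"]]
        t[0] += p["likes"]
        t[1] += 1
    return {topic: s / n for topic, (s, n) in totals.items()}

# Illustrative labeled posts:
posts = [
    {"topic": "product", "likes": 120},
    {"topic": "product", "likes": 80},
    {"topic": "hiring", "likes": 30},
]
avg = engagement_by_topic(posts)
```

A sharp rise in one topic then shows up as an outlier average against the account's baseline.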

Pitfalls
  • Scraping private/logged-in content violates platform ToS and can break at any moment — Stick to public profiles only; treat partial failures as expected, not as bugs to chase
Combine with: notion · postgres

Run a large crawl job async and collect results later

👤 Engineers running >10k-page crawls · ⏱ ~45 min · advanced

When to use: Your crawl will take 30 minutes to 6 hours — you don't want the MCP call to block that long.

Steps
  1. Kick off the run without waiting
    Start Actor apify/website-content-crawler with startUrls=[...], maxCrawlPages=10000. Return the runId, don't wait.
    → runId returned immediately
  2. Poll status periodically
    Check run <runId> status. How many pages done, how many errored, ETA?
    → Progress numbers
  3. Stream results when ready
    Run is SUCCEEDED. Page through the dataset 1000 items at a time and save each page to /crawls/<runId>/page-<n>.jsonl.
    → Local JSONL files ready for downstream processing

Result: A large crawl completed without blocking your chat session, and results on disk ready for indexing.
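
The paging loop in step 3 can be sketched with an injectable fetch function; in real use that function would wrap the get-dataset-items tool (or Apify's REST dataset-items endpoint with limit/offset — endpoint shape assumed, verify against the API docs):

```python
import json

def page_dataset(fetch_page, page_size=1000):
    """Yield (page_number, items) until a short or empty page signals the end.
    fetch_page(limit, offset) must return a list of item dicts."""
    offset, page_no = 0, 0
    while True:
        items = fetch_page(limit=page_size, offset=offset)
        if not items:
            break
        yield page_no, items
        if len(items) < page_size:
            break  # last, partial page
        offset += page_size
        page_no += 1

def to_jsonl(items):
    """Serialize one page as JSON Lines, ready to write to a page-<n>.jsonl file."""
    return "\n".join(json.dumps(it) for it in items)

# Fake 2500-item dataset, just to show the loop shape:
data = [{"url": f"https://example.com/{i}"} for i in range(2500)]
fake_fetch = lambda limit, offset: data[offset:offset + limit]
pages = list(page_dataset(fake_fetch, page_size=1000))
```

Keeping each page under ~1000 items is what protects the context window; only summaries, not raw items, should flow back into the chat.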

Pitfalls
  • Letting Claude pull the full dataset in one call — will OOM the context — Always page; never request the entire dataset at once
  • Costs balloon on deep crawls with no cap — Set maxCrawlPages AND memory/cpu limits on the Actor before starting
Combine with: filesystem · qdrant

Combinations

With other MCPs for 10x impact

apify + postgres

Scrape via Apify Actor then upsert normalized rows into your product DB

Run the Amazon Actor for my ASIN list, then upsert each result into the product_prices table with today's date.
apify + qdrant

Crawl a docs site then embed each page into a vector collection for RAG

Use Website Content Crawler on docs.stripe.com, then embed each page and upsert into the Qdrant stripe_docs collection.
apify + filesystem

Persist raw crawl output locally as JSONL before downstream processing

Run the Google Maps Actor for 'dentist Paris', save the raw dataset to /data/leads/paris-dentists.jsonl.

Tools

What this MCP provides

Tool | Inputs | When to call | Cost
search-actors | query: str, limit?: int | Discover which Actor fits your target before running | free
get-actor | actorId: str | Inspect an Actor's input schema and pricing before calling | free
call-actor | actorId: str, input: object, timeout?: int | Run an Actor and wait for it to finish (short runs only) | Actor-specific; billed in Apify platform credits
get-dataset-items | datasetId: str, limit?: int, offset?: int | Page through a completed run's dataset | free
get-run | runId: str | Poll status of a long-running job | free

Costs & limits

What it costs to run

API quota
Apify API is generous; Actors themselves are metered in platform credits
Tokens per call
Actor input+output responses are usually 500–3000 tokens; large datasets should be paged
Cost
Free plan: $5 platform credit/month. Paid from $49/mo for $49+ credits. Per-Actor pricing varies ($0.25–$5 per 1000 results is typical).
Tip
Always inspect Actor pricing via get-actor before calling; set maxResults/maxCrawlPages on every run to cap spend.
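
That tip is easy to turn into a pre-run sanity check. A sketch using an illustrative $1.50-per-1000-results rate (actual rates vary per Actor; read them via get-actor first):

```python
def estimated_run_cost(n_results, usd_per_1000):
    """Rough cost of one run, given the Actor's per-1000-results rate."""
    return round(n_results / 1000 * usd_per_1000, 2)

def fits_budget(n_results, usd_per_1000, monthly_budget_usd, runs_per_month=30):
    """True if a daily run at this size stays within a monthly credit budget."""
    return estimated_run_cost(n_results, usd_per_1000) * runs_per_month <= monthly_budget_usd

# 200 ASINs daily at $1.50/1000 results is ~$0.30 per run, ~$9/month —
# over the free plan's $5 credit, comfortably inside a $10 budget:
cost = estimated_run_cost(200, 1.50)
```

Running this arithmetic before setting maxResults/maxCrawlPages keeps a schedule from silently draining credits.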

Security

Permissions, secrets, reach

Minimal scopes: Apify API token with default scope
Credential storage: API token in the APIFY_TOKEN env var
Data egress: Calls go to api.apify.com; Actors themselves may fetch any public URL you instruct them to
Never grant: Root/admin tokens if user-scoped tokens suffice

Troubleshooting

Common errors and fixes

401 Unauthorized

APIFY_TOKEN missing or revoked. Re-issue at console.apify.com/settings/integrations.

Check: curl -H "Authorization: Bearer $APIFY_TOKEN" https://api.apify.com/v2/users/me
Actor run FAILED with 'Not enough platform credits'

Top up in Apify console billing or pick a cheaper Actor variant; set maxResults to cap cost next time.

Run succeeds but dataset is empty

Wrong input schema — the Actor likely silently ignored your input. Run get-actor to read the required field names.

Timeout waiting for call-actor

Long crawls exceed MCP call timeout; start the run, get the runId, then poll with get-run instead of blocking.
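
That start-then-poll pattern can be sketched with an injectable status getter standing in for the get-run tool (terminal status names assumed from Apify's run lifecycle):

```python
import time

TERMINAL = {"SUCCEEDED", "FAILED", "ABORTED", "TIMED-OUT"}

def wait_for_run(get_status, interval_s=30, max_polls=240):
    """Poll until the run reaches a terminal status; return that status.
    get_status() -> current status string, e.g. from the get-run tool."""
    for _ in range(max_polls):
        status = get_status()
        if status in TERMINAL:
            return status
        time.sleep(interval_s)
    raise TimeoutError("run did not finish within the polling window")

# Simulated run that needs three polls before finishing:
statuses = iter(["RUNNING", "RUNNING", "SUCCEEDED"])
final = wait_for_run(lambda: next(statuses), interval_s=0)
```

The key point is that the MCP call returning the runId finishes immediately; only the cheap status checks repeat.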

Alternatives

Apify vs. others

Alternative | When to use instead | Trade-off
Firecrawl MCP | Generic page-to-markdown scraping across any site | Less specialized for specific targets like Amazon or Maps
Bright Data MCP | You need heavy-duty residential proxies and SERP API | More expensive; focused on unblocking rather than pre-built Actors
Playwright MCP | You need to script a custom flow (login, multi-step click-through) | You write and maintain the scraping logic yourself

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and skills