● Official apify 🔑 Requires your key

Apify

by apify · apify/apify-mcp-server

Tap 3000+ pre-built Actors on Apify to scrape Google, Amazon, LinkedIn, TikTok, Maps and more — no custom scraping code to maintain.

Apify's official MCP exposes the Apify Actor marketplace as callable tools. Instead of writing your own scraper for each site, you pick an existing battle-tested Actor, pass inputs, and stream back structured JSON. Best for niche targets (Google Maps listings, Amazon products, Twitter profiles) where a generic scraper would need constant maintenance.

Why use it

Key features


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ],
      "env": {
        "APIFY_TOKEN": "<your-apify-token>"
      }
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ],
      "env": {
        "APIFY_TOKEN": "<your-apify-token>"
      }
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config overrides the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ],
      "env": {
        "APIFY_TOKEN": "<your-apify-token>"
      }
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ],
      "env": {
        "APIFY_TOKEN": "<your-apify-token>"
      }
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "apify",
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ],
      "env": {
        "APIFY_TOKEN": "<your-apify-token>"
      }
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "apify": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "@apify/actors-mcp-server"
        ],
        "env": {
          "APIFY_TOKEN": "<your-apify-token>"
        }
      }
    }
  }
}

Add it under context_servers. Zed hot-reloads on save.

claude mcp add apify -e APIFY_TOKEN=<your-apify-token> -- npx -y @apify/actors-mcp-server

A single line. Verify with claude mcp list. Remove with claude mcp remove apify.

Use cases

Real-world uses: Apify

Scrape Google Maps listings for local-business lead generation

👤 Sales / SDR teams building territory lists · ⏱ ~15 min · beginner

When to use it: You need 500 'coffee shops in Berlin' results with address, phone, website, and rating — and you don't want to be blocked mid-run.

Prerequisites
  • Apify account + API token — console.apify.com → Settings → Integrations → API token
  • Enough platform credits on Apify for your run size — Free plan gives $5 credit/month; most Google Maps runs cost ~$1 per 1000 places
Steps
  1. Pick the right Actor for your target
    Find the best-maintained Apify Actor for scraping Google Maps places. Prefer one with >5 stars and recent updates.
    → Actor slug like compass/crawler-google-places with its input schema
  2. Run it with your query
    Run that Actor with searchStringsArray=['coffee shop Berlin'], maxCrawledPlacesPerSearch=500, language='en'. Wait for completion.
    → Run status SUCCEEDED with a dataset id
  3. Pull and clean the dataset
    Get the dataset items. Keep only name, address, phone, website, rating, reviewsCount. Drop places without a phone. Output as CSV.
    → CSV of 400–500 cleaned leads

Result: A de-duped lead list ready for CRM import, typically $1–3 in Apify credits.

Pitfalls
  • Running the wrong Actor — many copycats exist with worse reliability — Filter by usage count and last-updated in the Apify store; stick to the top 3 for a target
  • Massive datasets blow out your context window when returned inline — Ask Claude to page through items (limit+offset) or save to filesystem first, then summarize
Combine with: filesystem · postgres
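The cleanup in step 3 can also be done locally once the dataset is pulled. A minimal sketch in Python — the field names (title, address, phone, totalScore, reviewsCount) are assumptions based on typical Google Maps Actor output, so verify them against your Actor's schema:

```python
# Sketch of the cleaning step: keep selected fields, drop phoneless
# places, de-dupe by (name, address), and emit CSV. Field names are
# assumptions; check your Actor's dataset schema first.
import csv
import io

FIELDS = ["title", "address", "phone", "website", "totalScore", "reviewsCount"]

def clean_places(items):
    seen, rows = set(), []
    for it in items:
        if not it.get("phone"):
            continue  # drop places without a phone number
        key = (it.get("title"), it.get("address"))
        if key in seen:
            continue  # de-dupe repeated listings
        seen.add(key)
        rows.append({f: it.get(f, "") for f in FIELDS})
    return rows

def to_csv(rows):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Running this over the raw dataset instead of asking the model to reformat it keeps large result sets out of your context window.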

Track Amazon product prices and stock status on a schedule

👤 E-commerce, affiliate marketers, competitive pricing teams · ⏱ ~20 min · intermediate

When to use it: You want a daily price+stock snapshot for 200 ASINs without babysitting a scraper.

Prerequisites
  • List of ASINs or product URLs — CSV of URLs like https://www.amazon.com/dp/B0XXXXXX
Steps
  1. Call the Amazon Product Scraper Actor
    Run Actor junglee/amazon-crawler with urls=<my list>, maxReviews=0, scrapeProductDetails=true.
    → Run finishes with a dataset of products
  2. Normalize price and stock
    From the dataset, extract asin, title, price, currency, in_stock (bool), seller. Flag any asin where price dropped vs my last snapshot [paste].
    → Per-ASIN current vs prior comparison
  3. Schedule it daily
    Create a daily Apify schedule for this Actor with the same inputs. Name it 'amazon-price-tracker-<category>'.
    → Schedule created; next run time shown

Result: A recurring price/stock watch costing ~$0.30/day for 200 ASINs.

Pitfalls
  • Amazon aggressively throttles even with residential proxies — runs can partially fail — Enable Actor retries and accept that 5–10% of items may be missing; re-run failed asins in a small batch
Combine with: postgres · notion
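The comparison in step 2 is simple enough to run locally between snapshots. A sketch, assuming the normalized fields named in the prompt (asin, price, in_stock):

```python
# Sketch of the price-drop flagging step: compare today's scrape
# against the prior snapshot per ASIN. Field names follow the
# normalization prompt above and are assumptions about Actor output.

def flag_changes(current, prior):
    """Return per-ASIN rows with a 'dropped' flag vs the prior snapshot."""
    prior_by_asin = {p["asin"]: p for p in prior}
    out = []
    for item in current:
        before = prior_by_asin.get(item["asin"])
        out.append({
            "asin": item["asin"],
            "price": item["price"],
            "prior_price": before["price"] if before else None,
            "dropped": bool(before) and item["price"] < before["price"],
            "in_stock": item.get("in_stock", True),
        })
    return out
```

Storing each day's output (e.g. in the postgres MCP suggested below) gives you the prior snapshot to diff against.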

Harvest recent posts from a public Twitter/X or TikTok profile

👤 Social-listening analysts, content researchers · ⏱ ~20 min · intermediate

When to use it: You track a public figure or brand and want their last 30 days of posts as structured data for analysis.

Prerequisites
  • Profile URL(s) of the target account — Public profile link only — do not attempt private or logged-in scraping
Steps
  1. Pick a reputable Twitter/TikTok Actor
    Find the top Apify Actor for fetching public tweets from a handle. Show pricing per 1000 tweets.
    → Actor shortlist with price-per-1k numbers
  2. Run for each target
    Run it for handles [list] with maxTweets=300 and start_date=30 days ago.
    → Dataset with tweets + engagement counts
  3. Summarize what changed in tone/topics
    Cluster these posts into 5 topics and show engagement averages per topic. Call out any sharp rise in one topic.
    → Topic table + trend commentary

Result: A structured social-post dataset plus a 1-page topical trend summary.

Pitfalls
  • Scraping private/logged-in content violates platform ToS and can break at any moment — Stick to public profiles only; treat partial failures as expected, not as bugs to chase
Combine with: notion · postgres

Run a large crawl job async and collect results later

👤 Engineers running >10k-page crawls · ⏱ ~45 min · advanced

When to use it: Your crawl will take 30 minutes to 6 hours — you don't want the MCP call to block that long.

Steps
  1. Kick off the run without waiting
    Start Actor apify/website-content-crawler with startUrls=[...], maxCrawlPages=10000. Return the runId, don't wait.
    → runId returned immediately
  2. Poll status periodically
    Check run <runId> status. How many pages done, how many errored, ETA?
    → Progress numbers
  3. Stream results when ready
    Run is SUCCEEDED. Page through the dataset 1000 items at a time and save each page to /crawls/<runId>/page-<n>.jsonl.
    → Local JSONL files ready for downstream processing

Result: A large crawl completed without blocking your chat session, with results on disk ready for indexing.

Pitfalls
  • Letting Claude pull the full dataset in one call — will OOM the context — Always page; never request the entire dataset at once
  • Costs balloon on deep crawls with no cap — Set maxCrawlPages AND memory/cpu limits on the Actor before starting
Combine with: filesystem · qdrant
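The start/poll/page pattern above can also be sketched against Apify's REST API directly. A hedged sketch — the endpoint paths follow Apify's API v2 layout as I understand it (note the '~' replacing '/' in Actor ids), so treat them as assumptions to verify; the paging helper is pure and never loads the whole dataset at once:

```python
# Sketch of the async pattern: start a run, poll it, then page the
# dataset. Endpoint paths are assumptions based on Apify API v2;
# requires a real APIFY_TOKEN to actually run.
import json
import os
import urllib.request

API = "https://api.apify.com/v2"

def api(path, payload=None):
    req = urllib.request.Request(
        f"{API}{path}",
        data=json.dumps(payload).encode() if payload else None,
        headers={"Authorization": f"Bearer {os.environ['APIFY_TOKEN']}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def dataset_pages(fetch_page, page_size=1000):
    """Yield dataset pages until exhausted; fetch_page(limit, offset) -> list."""
    offset = 0
    while True:
        items = fetch_page(page_size, offset)
        if not items:
            return
        yield items
        offset += len(items)

if __name__ == "__main__" and os.environ.get("APIFY_TOKEN"):
    # 1. Kick off the run without waiting ('~' replaces '/' in Actor ids).
    run = api("/acts/apify~website-content-crawler/runs",
              {"startUrls": [{"url": "https://example.com"}],
               "maxCrawlPages": 10000})
    run_id = run["data"]["id"]
    # 2. Poll: api(f"/actor-runs/{run_id}")["data"]["status"]
    # 3. Once SUCCEEDED, page the run's default dataset:
    #    fetch = lambda l, o: api(f"/actor-runs/{run_id}/dataset/items?limit={l}&offset={o}")
    #    for page in dataset_pages(fetch): ...
```

Writing each page to its own JSONL file, as in step 3, keeps memory and context usage flat regardless of crawl size.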

Combinations

Pair it with other MCPs for a 10x effect

apify + postgres

Scrape via Apify Actor then upsert normalized rows into your product DB

Run the Amazon Actor for my ASIN list, then upsert each result into the product_prices table with today's date.
apify + qdrant

Crawl a docs site then embed each page into a vector collection for RAG

Use Website Content Crawler on docs.stripe.com, then embed each page and upsert into the Qdrant stripe_docs collection.
apify + filesystem

Persist raw crawl output locally as JSONL before downstream processing

Run the Google Maps Actor for 'dentist Paris', save the raw dataset to /data/leads/paris-dentists.jsonl.

Tools

What this MCP exposes

Tool · Inputs · When to call · Cost
search-actors · query: str, limit?: int · Discover which Actor fits your target before running · free
get-actor · actorId: str · Inspect an Actor's input schema and pricing before calling · free
call-actor · actorId: str, input: object, timeout?: int · Run an Actor and wait for it to finish (short runs only) · Actor-specific; billed in Apify platform credits
get-dataset-items · datasetId: str, limit?: int, offset?: int · Page through a completed run's dataset · free
get-run · runId: str · Poll status of a long-running job · free
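Under the hood, these tools correspond to Apify's public REST API. A small sketch of the id convention and the endpoint mapping as I understand it — the exact paths are assumptions to verify against Apify's API reference:

```python
# Illustrative mapping from the tools above to Apify API v2 endpoints.
# The '~' substitution in Actor ids is Apify's URL convention; treat
# the paths themselves as assumptions, not authoritative.

API = "https://api.apify.com/v2"

def actor_path(actor_id: str) -> str:
    """'compass/crawler-google-places' -> 'compass~crawler-google-places'."""
    return actor_id.replace("/", "~")

def endpoint(tool: str, **ids) -> str:
    table = {
        "search-actors": f"{API}/store?search={ids.get('query', '')}",
        "get-actor": f"{API}/acts/{actor_path(ids.get('actorId', ''))}",
        "call-actor": f"{API}/acts/{actor_path(ids.get('actorId', ''))}/runs",
        "get-run": f"{API}/actor-runs/{ids.get('runId', '')}",
        "get-dataset-items": f"{API}/datasets/{ids.get('datasetId', '')}/items",
    }
    return table[tool]
```

Knowing the mapping helps when a tool call fails: you can reproduce the same request with curl and your APIFY_TOKEN to isolate whether the problem is the MCP layer or the API.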

Cost and limits

Execution cost

API quota
Apify's API is generous; Actors themselves are metered in platform credits
Tokens per call
Actor input+output responses are usually 500–3000 tokens; large datasets should be paged
Monetary
Free plan: $5 platform credit/month. Paid plans start at $49/mo for $49+ in credits. Per-Actor pricing varies ($0.25–$5 per 1000 results is typical).
Tip
Always inspect Actor pricing via get-actor before calling; set maxResults/maxCrawlPages on every run to cap spend.

Security

Permissions, secrets, scope

Minimum scopes: Apify API token with default scope
Credential storage: API token in the APIFY_TOKEN env var
Data egress: Calls to api.apify.com; Actors themselves may fetch any public URL you instruct them to
Never grant: Root/admin tokens when user-scoped tokens suffice

Troubleshooting

Common errors and fixes

401 Unauthorized

APIFY_TOKEN missing or revoked. Re-issue at console.apify.com/settings/integrations.

Check: curl -H "Authorization: Bearer $APIFY_TOKEN" https://api.apify.com/v2/users/me
Actor run FAILED with 'Not enough platform credits'

Top up in Apify console billing or pick a cheaper Actor variant; set maxResults to cap cost next time.

Run succeeds but dataset is empty

Wrong input schema — run get-actor to read the required field names; the Actor likely ignored your input silently.

Timeout waiting for call-actor

Long crawls exceed MCP call timeout; start the run, get the runId, then poll with get-run instead of blocking.

Alternatives

Apify vs others

Alternative · When to use it · Trade-off
Firecrawl MCP · Generic page-to-markdown scraping across any site · Less specialized for specific targets like Amazon or Maps
Bright Data MCP · You need heavy-duty residential proxies and a SERP API · More expensive; focused on unblocking rather than pre-built Actors
Playwright MCP · You need to script a custom flow (login, multi-step click-through) · You write and maintain the scraping logic yourself

More

Resources

📖 Read the official README on GitHub

🐙 View the open issues

🔍 Browse the 400+ MCP servers and Skills