● Official · 🔑 Needs your API key

Apify

by apify · apify/apify-mcp-server

Tap 3000+ pre-built Actors on Apify to scrape Google, Amazon, LinkedIn, TikTok, Maps and more — no custom scraping code to maintain.

Apify's official MCP exposes the Apify Actor marketplace as callable tools. Instead of writing your own scraper for each site, you pick an existing battle-tested Actor, pass inputs, and stream back structured JSON. Best for niche targets (Google Maps listings, Amazon products, Twitter profiles) where a generic scraper would need constant maintenance.


Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "apify",
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "apify": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "@apify/actors-mcp-server"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add apify -- npx -y @apify/actors-mcp-server

One-liner. Verify with claude mcp list. Remove with claude mcp remove.

Use Cases

Real-world ways to use Apify

Scrape Google Maps listings for local-business lead generation

👤 Sales / SDR teams building territory lists · ⏱ ~15 min · beginner

When to use: You need 500 'coffee shops in Berlin' with address, phone, website, and rating — and you don't want to be blocked mid-run.

Prerequisites
  • Apify account + API token — console.apify.com → Settings → Integrations → API token
  • Enough platform credits on Apify for your run size — Free plan gives $5 credit/month; most Google Maps runs cost ~$1 per 1000 places
Flow
  1. Pick the right Actor for your target
    Find the best-maintained Apify Actor for scraping Google Maps places. Prefer one with >5 stars and recent updates.
    → Actor slug like compass/crawler-google-places with its input schema
  2. Run it with your query
    Run that Actor with searchStringsArray=['coffee shop Berlin'], maxCrawledPlacesPerSearch=500, language='en'. Wait for completion.
    → Run status SUCCEEDED with a dataset id
  3. Pull and clean the dataset
    Get the dataset items. Keep only name, address, phone, website, rating, reviewsCount. Drop places without a phone. Output as CSV.
    → CSV of 400–500 cleaned leads

Outcome: A de-duped lead list ready for CRM import, typically $1–3 in Apify credits.
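The cleanup in step 3 can also run locally once the dataset items are back. A minimal Python sketch; the field names (name, address, phone, website, rating, reviewsCount) mirror the prompt above, but the Actor's actual output schema may use different keys (e.g. title or totalScore), so inspect it with get-actor first:

```python
import csv
import io

def clean_places(items):
    """Keep only the lead-relevant fields and drop places without a phone."""
    fields = ("name", "address", "phone", "website", "rating", "reviewsCount")
    cleaned = []
    for item in items:
        if not item.get("phone"):
            continue  # no phone number, not usable as a sales lead
        cleaned.append({f: item.get(f) for f in fields})
    return cleaned

def to_csv(rows):
    """Serialize cleaned rows to a CSV string ready for CRM import."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Stand-in records in place of real Actor output
items = [
    {"name": "Kaffee Mitte", "address": "Berlin", "phone": "+49 30 1234",
     "website": "https://example.com", "rating": 4.6, "reviewsCount": 212},
    {"name": "No-Phone Cafe", "address": "Berlin", "phone": None,
     "website": None, "rating": 4.1, "reviewsCount": 35},
]
leads = clean_places(items)
print(len(leads))  # 1
```

Asking Claude to do this inline works for a few hundred rows; beyond that, save the dataset to disk first and run the cleanup there.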

Pitfalls
  • Running the wrong Actor — many copycats exist with worse reliability — Filter by usage count and last-updated in the Apify store; stick to the top 3 for a target
  • Massive datasets blow out your context window when returned inline — Ask Claude to page through items (limit+offset) or save to filesystem first, then summarize
Combine with: filesystem · postgres

Track Amazon product prices and stock status on a schedule

👤 E-commerce, affiliate marketers, competitive pricing teams · ⏱ ~20 min · intermediate

When to use: You want a daily price+stock snapshot for 200 ASINs without babysitting a scraper.

Prerequisites
  • List of ASINs or product URLs — CSV of URLs like https://www.amazon.com/dp/B0XXXXXX
Flow
  1. Call the Amazon Product Scraper Actor
    Run Actor junglee/amazon-crawler with urls=<my list>, maxReviews=0, scrapeProductDetails=true.
    → Run finishes with a dataset of products
  2. Normalize price and stock
    From the dataset, extract asin, title, price, currency, in_stock (bool), seller. Flag any asin where price dropped vs my last snapshot [paste].
    → Per-ASIN current vs prior comparison
  3. Schedule it daily
    Create a daily Apify schedule for this Actor with the same inputs. Name it 'amazon-price-tracker-<category>'.
    → Schedule created; next run time shown

Outcome: A recurring price/stock watch with ~$0.30/day cost for 200 ASINs.
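The price-drop flagging in step 2 is a plain snapshot diff. A hedged sketch, assuming each snapshot is a mapping keyed by ASIN; the exact fields junglee/amazon-crawler returns may differ, so treat the keys here as placeholders:

```python
def flag_price_drops(current, previous):
    """Compare today's snapshot to the prior one, per ASIN.

    Each snapshot maps asin -> {"price": float|None, "in_stock": bool}.
    Returns the ASINs whose price fell, with old/new prices and the drop %.
    """
    drops = []
    for asin, now in current.items():
        prior = previous.get(asin)
        if prior is None or now["price"] is None or prior["price"] is None:
            continue  # new ASIN or a failed scrape; nothing to compare
        if now["price"] < prior["price"]:
            drops.append({
                "asin": asin,
                "old": prior["price"],
                "new": now["price"],
                "drop_pct": round(
                    100 * (prior["price"] - now["price"]) / prior["price"], 1),
            })
    return drops

previous = {"B0AAAA1111": {"price": 29.99, "in_stock": True},
            "B0BBBB2222": {"price": 14.50, "in_stock": True}}
current  = {"B0AAAA1111": {"price": 24.99, "in_stock": True},
            "B0BBBB2222": {"price": 14.50, "in_stock": False}}
print(flag_price_drops(current, previous))
```

Persisting each day's snapshot (postgres, or a dated JSONL file) is what makes the "vs my last snapshot" comparison possible at all.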

Pitfalls
  • Amazon aggressively throttles even with residential proxies — runs can partially fail — Enable Actor retries and accept that 5–10% of items may be missing; re-run failed asins in a small batch
Combine with: postgres · notion

Harvest recent posts from a public Twitter/X or TikTok profile

👤 Social-listening analysts, content researchers · ⏱ ~20 min · intermediate

When to use: You track a public figure or brand and want their last 30 days of posts as structured data for analysis.

Prerequisites
  • Profile URL(s) of the target account — Public profile link only — do not attempt private or logged-in scraping
Flow
  1. Pick a reputable Twitter/TikTok Actor
    Find the top Apify Actor for fetching public tweets from a handle. Show pricing per 1000 tweets.
    → Actor shortlist with price-per-1k numbers
  2. Run for each target
    Run it for handles [list] with maxTweets=300 and start_date=30 days ago.
    → Dataset with tweets + engagement counts
  3. Summarize what changed in tone/topics
    Cluster these posts into 5 topics and show engagement averages per topic. Call out any sharp rise in one topic.
    → Topic table + trend commentary

Outcome: A structured social-post dataset plus 1-page topical trend summary.
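Once posts carry a topic label (step 3), the per-topic engagement averages are plain arithmetic. A sketch assuming each post dict has a topic plus likes and shares counts; real Actor output will name these fields differently:

```python
from collections import defaultdict

def engagement_by_topic(posts):
    """Average engagement (likes + shares) per assigned topic."""
    totals = defaultdict(lambda: [0, 0])  # topic -> [engagement_sum, post_count]
    for p in posts:
        t = totals[p["topic"]]
        t[0] += p["likes"] + p["shares"]
        t[1] += 1
    return {topic: round(s / n, 1) for topic, (s, n) in totals.items()}

# Stand-in posts with topic labels already assigned
posts = [
    {"topic": "product", "likes": 120, "shares": 30},
    {"topic": "product", "likes": 80, "shares": 20},
    {"topic": "hiring", "likes": 10, "shares": 2},
]
print(engagement_by_topic(posts))  # {'product': 125.0, 'hiring': 12.0}
```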

Pitfalls
  • Scraping private/logged-in content violates platform ToS and can break at any moment — Stick to public profiles only; treat partial failures as expected, not as bugs to chase
Combine with: notion · postgres

Run a large crawl job async and collect results later

👤 Engineers running >10k-page crawls · ⏱ ~45 min · advanced

When to use: Your crawl will take 30 minutes to 6 hours — you don't want the MCP call to block that long.

Flow
  1. Kick off the run without waiting
    Start Actor apify/website-content-crawler with startUrls=[...], maxCrawlPages=10000. Return the runId, don't wait.
    → runId returned immediately
  2. Poll status periodically
    Check run <runId> status. How many pages done, how many errored, ETA?
    → Progress numbers
  3. Stream results when ready
    Run is SUCCEEDED. Page through the dataset 1000 items at a time and save each page to /crawls/<runId>/page-<n>.jsonl.
    → Local JSONL files ready for downstream processing

Outcome: A large crawl completed without blocking your chat session, and results on disk ready for indexing.
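The poll-then-page pattern above can be sketched against Apify's public REST API (GET /v2/actor-runs/{runId}, plus limit/offset paging on datasets). poll_run here is illustrative and not invoked; the polling interval and the terminal-state set are assumptions worth checking against the API docs:

```python
import json
import time
import urllib.request

# Run statuses we treat as terminal (assumed from Apify's run lifecycle)
TERMINAL = {"SUCCEEDED", "FAILED", "ABORTED", "TIMED-OUT"}

def is_done(status):
    return status in TERMINAL

def page_offsets(total_items, page_size=1000):
    """Yield (offset, limit) pairs so no single fetch floods the context window."""
    for offset in range(0, total_items, page_size):
        yield offset, min(page_size, total_items - offset)

def poll_run(run_id, token, interval=30):
    """Block locally (not inside the MCP call) until the run reaches a terminal state."""
    url = f"https://api.apify.com/v2/actor-runs/{run_id}?token={token}"
    while True:
        with urllib.request.urlopen(url) as resp:
            status = json.load(resp)["data"]["status"]
        if is_done(status):
            return status
        time.sleep(interval)

print(list(page_offsets(2500)))  # [(0, 1000), (1000, 1000), (2000, 500)]
```

The same logic applies when driving the server's get-run and get-dataset-items tools from a prompt: poll first, then page.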

Pitfalls
  • Letting Claude pull the full dataset in one call — will OOM the context — Always page; never request the entire dataset at once
  • Costs balloon on deep crawls with no cap — Set maxCrawlPages AND memory/cpu limits on the Actor before starting
Combine with: filesystem · qdrant

Combinations

Pair with other MCPs for 10x leverage

apify + postgres

Scrape via Apify Actor then upsert normalized rows into your product DB

Run the Amazon Actor for my ASIN list, then upsert each result into the product_prices table with today's date.
apify + qdrant

Crawl a docs site then embed each page into a vector collection for RAG

Use Website Content Crawler on docs.stripe.com, then embed each page and upsert into the Qdrant stripe_docs collection.
apify + filesystem

Persist raw crawl output locally as JSONL before downstream processing

Run the Google Maps Actor for 'dentist Paris', save the raw dataset to /data/leads/paris-dentists.jsonl.
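The filesystem pairing boils down to persisting the raw dataset as JSONL before anything downstream touches it. A minimal sketch; the sample records and the temp path are stand-ins for real Actor output and your chosen directory:

```python
import json
import os
import tempfile

def save_jsonl(items, path):
    """Write one JSON object per line (JSONL), the shape most downstream tools expect."""
    with open(path, "w", encoding="utf-8") as f:
        for item in items:
            f.write(json.dumps(item, ensure_ascii=False) + "\n")
    return len(items)

raw = [{"name": "Cabinet Dentaire Martin", "city": "Paris"},
       {"name": "Dr. Dubois", "city": "Paris"}]
path = os.path.join(tempfile.gettempdir(), "paris-dentists.jsonl")
print(save_jsonl(raw, path))  # 2
```

JSONL beats a single JSON array here because each page of a large dataset can be appended independently.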

Tools

What this MCP exposes

Tool | Inputs | When to call | Cost
search-actors | query: str, limit?: int | Discover which Actor fits your target before running | free
get-actor | actorId: str | Inspect an Actor's input schema and pricing before calling | free
call-actor | actorId: str, input: object, timeout?: int | Run an Actor and wait for it to finish (short runs only) | Actor-specific; billed in Apify platform credits
get-dataset-items | datasetId: str, limit?: int, offset?: int | Page through a completed run's dataset | free
get-run | runId: str | Poll status of a long-running job | free

Cost & Limits

What this costs to run

API quota
Apify API is generous; Actors themselves are metered in platform credits
Tokens per call
Actor input+output responses are usually 500–3000 tokens; large datasets should be paged
Monetary
Free plan: $5 platform credit/month. Paid plans start at $49/mo and include $49+ in platform credits. Per-Actor pricing varies ($0.25–$5 per 1000 results is typical).
Tip
Always inspect Actor pricing via get-actor before calling; set maxResults/maxCrawlPages on every run to cap spend.

Security

Permissions, secrets, blast radius

Minimum scopes: Apify API token with default scope
Credential storage: API token in env var APIFY_TOKEN
Data egress: Calls to api.apify.com; Actors themselves may fetch any public URL you instruct them to
Never grant: Root/admin tokens if user-scoped tokens suffice

Troubleshooting

Common errors and fixes

401 Unauthorized

APIFY_TOKEN missing or revoked. Re-issue at console.apify.com/settings/integrations.

Verify: curl -H "Authorization: Bearer $APIFY_TOKEN" https://api.apify.com/v2/users/me

Actor run FAILED with 'Not enough platform credits'

Top up in Apify console billing or pick a cheaper Actor variant; set maxResults to cap cost next time.

Run succeeds but dataset is empty

Wrong input schema. Run get-actor to read the required field names; the Actor likely silently ignored your input.

Timeout waiting for call-actor

Long crawls exceed MCP call timeout; start the run, get the runId, then poll with get-run instead of blocking.

Alternatives

Apify vs others

Alternative | When to use it instead | Tradeoff
Firecrawl MCP | Generic page-to-markdown scraping across any site | Less specialized for specific targets like Amazon or Maps
Bright Data MCP | You need heavy-duty residential proxies and a SERP API | More expensive; focused on unblocking rather than pre-built Actors
Playwright MCP | You need to script a custom flow (login, multi-step click-through) | You write and maintain the scraping logic yourself

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues

🔍 Browse all 400+ MCP servers and Skills