● Official · apify · 🔑 your own key required

Apify

By apify · apify/apify-mcp-server

Tap 3000+ pre-built Actors on Apify to scrape Google, Amazon, LinkedIn, TikTok, Maps and more — no custom scraping code to maintain.

Apify's official MCP exposes the Apify Actor marketplace as callable tools. Instead of writing your own scraper for each site, you pick an existing battle-tested Actor, pass inputs, and stream back structured JSON. Best for niche targets (Google Maps listings, Amazon products, Twitter profiles) where a generic scraper would need constant maintenance.


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Save, then restart the app.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project-level config takes precedence over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then choose "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  }
}

Same format as Claude Desktop. Takes effect after restarting Windsurf.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "apify",
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "apify": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "@apify/actors-mcp-server"
        ]
      }
    }
  }
}

Add it under context_servers. Zed hot-reloads on save.

claude mcp add apify -- npx -y @apify/actors-mcp-server

One-line command. Verify with claude mcp list; remove with claude mcp remove.

Use cases

Hands-on playbooks: Apify

Scrape Google Maps listings for local-business lead generation

👤 Sales / SDR teams building territory lists · ⏱ ~15 min · beginner

When to use: You need 500 'coffee shops in Berlin' with address, phone, website, and rating — and you don't want to be blocked mid-run.

Prerequisites
  • Apify account + API token — console.apify.com → Settings → Integrations → API token
  • Enough platform credits on Apify for your run size — the Free plan gives $5 credit/month; most Google Maps runs cost ~$1 per 1,000 places
Steps
  1. Pick the right Actor for your target
    Find the best-maintained Apify Actor for scraping Google Maps places. Prefer one with >5 stars and recent updates.
    → Actor slug like compass/crawler-google-places with its input schema
  2. Run it with your query
    Run that Actor with searchStringsArray=['coffee shop Berlin'], maxCrawledPlacesPerSearch=500, language='en'. Wait for completion.
    → Run status SUCCEEDED with a dataset id
  3. Pull and clean the dataset
    Get the dataset items. Keep only name, address, phone, website, rating, reviewsCount. Drop places without a phone. Output as CSV.
    → CSV of 400–500 cleaned leads

Result: A de-duped lead list ready for CRM import, typically $1–3 in Apify credits.

Pitfalls
  • Running the wrong Actor — many copycats exist with worse reliability. Filter by usage count and last-updated date in the Apify store; stick to the top 3 for a given target.
  • Massive datasets blow out your context window when returned inline. Ask Claude to page through items (limit+offset) or save to the filesystem first, then summarize.
Pairs with: filesystem · postgres
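If you'd rather run step 3 deterministically instead of in chat, the cleaning pass is a few lines of Python. A minimal sketch — the field names (name, address, phone, website, rating, reviewsCount) come from the step above, but treat the exact item schema as Actor-specific and verify it against a sample item first:

```python
import csv
from typing import Iterable

FIELDS = ["name", "address", "phone", "website", "rating", "reviewsCount"]

def clean_leads(items: Iterable[dict]) -> list[dict]:
    """Keep only CRM-relevant fields, drop places without a phone,
    and de-dupe on (name, address)."""
    seen: set[tuple] = set()
    leads = []
    for item in items:
        if not item.get("phone"):
            continue  # no phone -> not a usable lead
        key = (item.get("name"), item.get("address"))
        if key in seen:
            continue  # duplicate listing
        seen.add(key)
        leads.append({f: item.get(f) for f in FIELDS})
    return leads

def write_csv(leads: list[dict], path: str) -> None:
    """Write the cleaned leads to a CSV ready for CRM import."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(leads)
```

Feed it the raw dataset items (saved via the filesystem MCP, for example) and import the resulting CSV directly.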

Track Amazon product prices and stock status on a schedule

👤 E-commerce, affiliate marketers, competitive pricing teams · ⏱ ~20 min · intermediate

When to use: You want a daily price+stock snapshot for 200 ASINs without babysitting a scraper.

Prerequisites
  • List of ASINs or product URLs — CSV of URLs like https://www.amazon.com/dp/B0XXXXXX
Steps
  1. Call the Amazon Product Scraper Actor
    Run Actor junglee/amazon-crawler with urls=<my list>, maxReviews=0, scrapeProductDetails=true.
    → Run finishes with a dataset of products
  2. Normalize price and stock
    From the dataset, extract asin, title, price, currency, in_stock (bool), seller. Flag any asin where price dropped vs my last snapshot [paste].
    → Per-ASIN current vs prior comparison
  3. Schedule it daily
    Create a daily Apify schedule for this Actor with the same inputs. Name it 'amazon-price-tracker-<category>'.
    → Schedule created; next run time shown

Result: A recurring price/stock watch costing ~$0.30/day for 200 ASINs.

Pitfalls
  • Amazon aggressively throttles even with residential proxies, so runs can partially fail. Enable Actor retries and accept that 5–10% of items may be missing; re-run the failed ASINs in a small batch.
Pairs with: postgres · notion
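The snapshot comparison in step 2 is easy to do outside the chat as well. A sketch, assuming each snapshot is a simple ASIN → price mapping (an illustrative shape, not the Actor's guaranteed output schema):

```python
def flag_price_drops(
    prev: dict[str, float],
    curr: dict[str, float],
    min_drop_pct: float = 1.0,
) -> list[tuple[str, float, float]]:
    """Return (asin, old_price, new_price) for every ASIN whose price
    fell by at least min_drop_pct percent between snapshots."""
    drops = []
    for asin, new_price in curr.items():
        old_price = prev.get(asin)
        if old_price is None or old_price <= 0:
            continue  # new or previously unpriced ASIN: nothing to compare
        pct = (old_price - new_price) / old_price * 100
        if pct >= min_drop_pct:
            drops.append((asin, old_price, new_price))
    return drops
```

Store each day's snapshot (in postgres, say), then diff today against yesterday before sending an alert.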

Harvest recent posts from a public Twitter/X or TikTok profile

👤 Social-listening analysts, content researchers · ⏱ ~20 min · intermediate

When to use: You track a public figure or brand and want their last 30 days of posts as structured data for analysis.

Prerequisites
  • Profile URL(s) of the target account — public profile links only; do not attempt private or logged-in scraping
Steps
  1. Pick a reputable Twitter/TikTok Actor
    Find the top Apify Actor for fetching public tweets from a handle. Show pricing per 1000 tweets.
    → Actor shortlist with price-per-1k numbers
  2. Run for each target
    Run it for handles [list] with maxTweets=300 and start_date=30 days ago.
    → Dataset with tweets + engagement counts
  3. Summarize what changed in tone/topics
    Cluster these posts into 5 topics and show engagement averages per topic. Call out any sharp rise in one topic.
    → Topic table + trend commentary

Result: A structured social-post dataset plus a one-page topical trend summary.

Pitfalls
  • Scraping private/logged-in content violates platform ToS and can break at any moment. Stick to public profiles only; treat partial failures as expected, not as bugs to chase.
Pairs with: notion · postgres
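If you want the per-topic averages from step 3 to be reproducible rather than eyeballed, compute them from the labeled posts yourself. A sketch, assuming each post dict carries a topic label and a likes count (illustrative field names — the real engagement fields depend on the Actor you picked):

```python
from collections import defaultdict

def engagement_by_topic(posts: list[dict]) -> dict[str, float]:
    """Average the 'likes' count per topic label."""
    buckets: dict[str, list[int]] = defaultdict(list)
    for post in posts:
        buckets[post["topic"]].append(post.get("likes", 0))
    return {topic: sum(v) / len(v) for topic, v in buckets.items()}
```

Let Claude assign the topic labels, then run this over the dataset so the trend table is based on exact numbers.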

Run a large crawl job async and collect results later

👤 Engineers running >10k-page crawls · ⏱ ~45 min · advanced

When to use: Your crawl will take 30 minutes to 6 hours — you don't want the MCP call to block that long.

Steps
  1. Kick off the run without waiting
    Start Actor apify/website-content-crawler with startUrls=[...], maxCrawlPages=10000. Return the runId; don't wait.
    → runId returned immediately
  2. Poll status periodically
    Check run <runId> status. How many pages done, how many errored, ETA?
    → Progress numbers
  3. Stream results when ready
    Run is SUCCEEDED. Page through the dataset 1000 items at a time and save each page to /crawls/<runId>/page-<n>.jsonl.
    → Local JSONL files ready for downstream processing

Result: A large crawl completed without blocking your chat session, with results on disk ready for indexing.

Pitfalls
  • Letting Claude pull the full dataset in one call will blow out the context window. Always page; never request the entire dataset at once.
  • Costs balloon on deep crawls with no cap. Set maxCrawlPages AND memory/cpu limits on the Actor before starting.
Pairs with: filesystem · qdrant
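The same start-then-poll pattern maps onto Apify's REST API if you ever need it outside MCP. A sketch, assuming the v2 endpoints (POST /v2/acts/{actorId}/runs, GET /v2/actor-runs/{runId}, GET /v2/datasets/{datasetId}/items) behave as currently documented — check the API reference before depending on it. The paging helper takes a fetch callable so the loop logic can be tested offline:

```python
import json
import time
import urllib.request
from typing import Callable, Iterator

API = "https://api.apify.com/v2"

def start_run(token: str, actor_id: str, run_input: dict) -> str:
    """Kick off an Actor run and return its runId without waiting."""
    req = urllib.request.Request(
        f"{API}/acts/{actor_id}/runs?token={token}",
        data=json.dumps(run_input).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["id"]

def wait_for_run(token: str, run_id: str, poll_s: int = 30) -> dict:
    """Poll until the run reaches a terminal status."""
    while True:
        with urllib.request.urlopen(f"{API}/actor-runs/{run_id}?token={token}") as resp:
            run = json.load(resp)["data"]
        if run["status"] in ("SUCCEEDED", "FAILED", "ABORTED", "TIMED-OUT"):
            return run
        time.sleep(poll_s)

def page_items(fetch_page: Callable[[int, int], list], page_size: int = 1000) -> Iterator[list]:
    """Yield dataset pages until a short or empty page signals the end."""
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        if page:
            yield page
        if len(page) < page_size:
            return
        offset += page_size
```

Wire page_items to the dataset-items endpoint (limit/offset query params) and write each yielded page to its own JSONL file, exactly as in step 3.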

Combos

Combine with other MCPs for 10x leverage

apify + postgres

Scrape via Apify Actor then upsert normalized rows into your product DB

Run the Amazon Actor for my ASIN list, then upsert each result into the product_prices table with today's date.
apify + qdrant

Crawl a docs site then embed each page into a vector collection for RAG

Use Website Content Crawler on docs.stripe.com, then embed each page and upsert into the Qdrant stripe_docs collection.
apify + filesystem

Persist raw crawl output locally as JSONL before downstream processing

Run the Google Maps Actor for 'dentist Paris', save the raw dataset to /data/leads/paris-dentists.jsonl.

Tools

What this MCP exposes

Tool · Input · When to call · Cost
  • search-actors — query: str, limit?: int — Discover which Actor fits your target before running — free
  • get-actor — actorId: str — Inspect an Actor's input schema and pricing before calling — free
  • call-actor — actorId: str, input: object, timeout?: int — Run an Actor and wait for it to finish (short runs only) — Actor-specific; billed in Apify platform credits
  • get-dataset-items — datasetId: str, limit?: int, offset?: int — Page through a completed run's dataset — free
  • get-run — runId: str — Poll the status of a long-running job — free

Costs & limits

Operating costs

API quota
The Apify API itself is generous; Actors are metered in platform credits
Tokens per call
Actor input+output responses are usually 500–3000 tokens; large datasets should be paged
Pricing
Free plan: $5 platform credit/month. Paid plans start at $49/mo for $49+ in credits. Per-Actor pricing varies ($0.25–$5 per 1,000 results is typical).
Always inspect Actor pricing via get-actor before calling; set maxResults/maxCrawlPages on every run to cap spend.
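As a concrete illustration of capping spend, a capped input for apify/website-content-crawler might look like the fragment below. The field names come from the use cases above, but confirm them via get-actor against the Actor's current input schema before relying on them:

```json
{
  "startUrls": [{ "url": "https://docs.example.com" }],
  "maxCrawlPages": 2000
}
```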

Security

Permissions, secrets, blast radius

Minimum scope: Apify API token with the default scope
Credential storage: API token in the APIFY_TOKEN env var
Data egress: Calls to api.apify.com; Actors themselves may fetch any public URL you instruct them to
Never grant: Root/admin tokens when user-scoped tokens suffice

Troubleshooting

Common errors and fixes

401 Unauthorized

APIFY_TOKEN missing or revoked. Re-issue at console.apify.com/settings/integrations.

Verify: curl -H "Authorization: Bearer $APIFY_TOKEN" https://api.apify.com/v2/users/me
Actor run FAILED with 'Not enough platform credits'

Top up in Apify console billing or pick a cheaper Actor variant; set maxResults to cap cost next time.

Run succeeds but dataset is empty

Wrong input schema — the Actor likely silently ignored your input. Run get-actor to read the required field names.

Timeout waiting for call-actor

Long crawls exceed MCP call timeout; start the run, get the runId, then poll with get-run instead of blocking.

Alternatives

How Apify compares

Alternative · When to use · Trade-off
  • Firecrawl MCP — Generic page-to-markdown scraping across any site — Less specialized for specific targets like Amazon or Maps
  • Bright Data MCP — You need heavy-duty residential proxies and a SERP API — More expensive; focused on unblocking rather than pre-built Actors
  • Playwright MCP — You need to script a custom flow (login, multi-step click-through) — You write and maintain the scraping logic yourself

See also

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and Skills