
buyer-eval-skill

제작: salespeak-ai · salespeak-ai/buyer-eval-skill

B2B vendor evaluation skill — domain-expert questions, evidence-based scoring, and structured vendor interviews.

buyer-eval-skill makes Claude your B2B procurement lead: it asks domain-expert questions of the vendor (or their AI agent), records evidence against your evaluation rubric, scores each criterion, and produces a comparison matrix when you're evaluating multiple vendors. Reduces the time from 'let's look at vendors' to 'here's my recommendation with receipts'.

Why use it

Key features

Live demo

In action


Installation

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "buyer-eval-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart the app after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "buyer-eval-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project-level config takes precedence over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "buyer-eval-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then choose "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "buyer-eval-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "buyer-eval-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "buyer-eval-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/salespeak-ai/buyer-eval-skill",
          "~/.claude/skills/buyer-eval-skill"
        ]
      }
    }
  }
}

Add it under context_servers. Zed hot-reloads on save.

claude mcp add buyer-eval-skill -- git clone https://github.com/salespeak-ai/buyer-eval-skill ~/.claude/skills/buyer-eval-skill

One-line command. Verify with claude mcp list; remove with claude mcp remove.

Use cases

Practical workflows: buyer-eval-skill

How to evaluate three SaaS vendors against your requirements

👤 Buyers, RevOps, procurement leads · ⏱ ~120 min · intermediate

When to use: You've shortlisted 3 vendors and need to pick one defensibly.

Prerequisites
  • Skill cloned — git clone https://github.com/salespeak-ai/buyer-eval-skill ~/.claude/skills/buyer-eval-skill
Flow
  1. Define the rubric
    We're buying a customer support platform. Propose a weighted rubric (feature depth, integrations, price, security, support).
    → Weighted criteria with rationale
  2. Interview each vendor
    Use the skill's question bank to interview vendor A. Their docs / AI agent is at <url>. Record evidence per criterion.
    → Per-criterion evidence snippets with sources
  3. Score and compare
    Score all three and produce a comparison matrix with a recommendation.
    → Matrix + recommendation + named risks

Outcome: A defensible vendor pick with evidence, not vibes.

Pitfalls
  • Rubric favors the vendor you already like — Lock the rubric before seeing demos
  • Evidence is just vendor marketing claims — Weight third-party sources (docs, G2, case studies) over marketing
Pairs with: filesystem
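The "score and compare" step above boils down to a weighted sum per vendor. A minimal sketch of that arithmetic, with made-up weights and 1–5 scores (the skill's actual scoring follows the rubric you define, not this code):

```python
# Illustrative weighted-rubric scoring; weights and scores are example values.
weights = {"feature_depth": 0.30, "integrations": 0.20,
           "price": 0.20, "security": 0.20, "support": 0.10}

scores = {  # hypothetical per-vendor scores on a 1-5 scale
    "vendor_a": {"feature_depth": 4, "integrations": 3, "price": 2, "security": 5, "support": 4},
    "vendor_b": {"feature_depth": 3, "integrations": 5, "price": 4, "security": 3, "support": 4},
    "vendor_c": {"feature_depth": 5, "integrations": 2, "price": 3, "security": 4, "support": 2},
}

def weighted_total(vendor_scores: dict, weights: dict) -> float:
    """Sum of weight * score across all criteria, rounded for the matrix."""
    return round(sum(weights[c] * s for c, s in vendor_scores.items()), 2)

# Comparison matrix: vendor -> weighted total, then rank best-first.
matrix = {v: weighted_total(s, weights) for v, s in scores.items()}
ranked = sorted(matrix.items(), key=lambda kv: kv[1], reverse=True)
```

Locking the weights before any demo (per the pitfall above) means this sum cannot quietly drift toward the vendor you already like.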

Compare RFP responses against your requirements

👤 Procurement teams drowning in RFP responses · ⏱ ~90 min · intermediate

When to use: You sent an RFP, got 5 responses, and need to rank them.

Flow
  1. Load the responses
    Here are 5 RFP response PDFs in rfp/. Extract answers per requirement.
    → Per-requirement matrix
  2. Flag gaps and fluff
    Where did a vendor answer the wrong question or punt? Flag fluff.
    → Honest read of each response
  3. Rank
    Rank the responses and write a short memo for the evaluation committee.
    → Ranked memo

Outcome: A fair, fast RFP review.

Pitfalls
  • Rewarding the best-written response over the best fit — Score on substance; penalize fluff
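One way to keep the ranking on substance rather than prose quality is to score requirement coverage directly. A sketch with hypothetical vendors and a coverage scale that treats punts and fluff as zero (not the skill's actual output format):

```python
# Illustrative requirement-coverage ranking; vendors and answers are made up.
responses = {
    "vendor_1": {"sso": "full", "audit_log": "partial", "api": "full", "sla": "none"},
    "vendor_2": {"sso": "full", "audit_log": "full", "api": "partial", "sla": "full"},
}

# Answers that dodge the question or are pure marketing get classed "none".
COVERAGE = {"full": 1.0, "partial": 0.5, "none": 0.0}

def coverage_score(answers: dict) -> float:
    """Fraction of requirements substantively covered, 0.0-1.0."""
    return sum(COVERAGE[a] for a in answers.values()) / len(answers)

ranking = sorted(responses, key=lambda v: coverage_score(responses[v]), reverse=True)
```

The classification of each answer is the judgment call; the arithmetic just keeps the best-written response from outranking the best fit.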

Combinations

Combine with other MCPs for 10x efficiency

buyer-eval-skill + filesystem

Read PDFs/docs from a local folder during evaluation

Read rfp/responses/ and extract answers per requirement.
buyer-eval-skill + google-ai-mode-skill

Cross-check vendor claims against public info

For each vendor claim that isn't in their docs, run a quick web check.

Tools

What this MCP exposes

Tool | Input | When called | Cost
rubric-design | domain + must-haves | Start of any evaluation | 0
vendor-interview | vendor source (docs/AI agent) | Per vendor | 0
scoring-and-matrix | evidence per vendor + rubric | After interviews | 0
recommendation-memo | matrix | Final step | 0
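The hand-off between vendor-interview and scoring-and-matrix is evidence keyed by criterion. A sketch of what such a record might look like; the field names and the source-trust weighting are assumptions for illustration, not the skill's schema:

```python
# Illustrative per-criterion evidence record (hypothetical fields and URL).
evidence = {
    "vendor": "vendor_a",
    "criterion": "security",
    "claim": "SOC 2 Type II certified",
    "source": "https://example.com/trust",  # hypothetical source URL
    "source_type": "vendor_docs",           # vs. "third_party" or "marketing"
}

# Per the pitfall above: weight third-party sources over marketing claims.
TRUST = {"third_party": 1.0, "vendor_docs": 0.8, "marketing": 0.4}
confidence = TRUST[evidence["source_type"]]
```

Keeping the source and its type on every snippet is what makes the final matrix defensible: each score traces back to something a committee can check.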

Costs and limits

Operating costs

API quota: none beyond the LLM
Tokens per call: 10–40k per full eval
Price: free
Tip: Interview one vendor at a time — keeps context focused and evidence clean

Security

Permissions, secrets, and blast radius

Credentials stored: none
Data sent externally: only if the skill calls out to vendor agents you point it at

Troubleshooting

Common errors and fixes

Evidence is shallow

Point Claude at primary docs; fall back to published case studies

Scores feel gamed

Re-lock the rubric and re-run scoring blind to vendor identity if possible

Alternatives

How buyer-eval-skill compares

Alternative | When to use | Trade-offs
creative-director-skill | Evaluating creative concepts, not B2B software | Different judgment domain

See more

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and Skills