
buyer-eval-skill

by salespeak-ai · salespeak-ai/buyer-eval-skill

B2B vendor evaluation skill — domain-expert questions, evidence-based scoring, and structured vendor interviews.

buyer-eval-skill makes Claude your B2B procurement lead: it asks domain-expert questions of the vendor (or their AI agent), records evidence against your evaluation rubric, scores each criterion, and produces a comparison matrix when you're evaluating multiple vendors. Reduces the time from 'let's look at vendors' to 'here's my recommendation with receipts'.


Live demo

What it looks like in practice


Installation

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "buyer-eval-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "buyer-eval-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project config takes precedence over the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "buyer-eval-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "buyer-eval-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "buyer-eval-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.
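The map-vs-array difference is mechanical, so migrating an existing entry can be scripted. A minimal sketch, assuming you already have a map-style config like the Claude Desktop example above (the `_inferred` marker is dropped, since it is catalog metadata rather than config):

```python
import json

# Map-style "mcpServers" config, as used by Claude Desktop / Cursor / Windsurf
desktop_config = {
    "mcpServers": {
        "buyer-eval-skill": {
            "command": "git",
            "args": [
                "clone",
                "https://github.com/salespeak-ai/buyer-eval-skill",
                "~/.claude/skills/buyer-eval-skill",
            ],
        }
    }
}

# Continue expects an array of server objects, each carrying its own "name"
continue_config = {
    "mcpServers": [
        {"name": name, **server}
        for name, server in desktop_config["mcpServers"].items()
    ]
}

print(json.dumps(continue_config, indent=2))
```

The same transformation works in reverse: pop the `name` key out of each array entry and use it as the map key.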

~/.config/zed/settings.json
{
  "context_servers": {
    "buyer-eval-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/salespeak-ai/buyer-eval-skill",
          "~/.claude/skills/buyer-eval-skill"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads automatically.

claude mcp add buyer-eval-skill -- git clone https://github.com/salespeak-ai/buyer-eval-skill ~/.claude/skills/buyer-eval-skill

A one-liner. Verify with claude mcp list; remove with claude mcp remove.

Use cases

Real-world scenarios: buyer-eval-skill

How to evaluate three SaaS vendors against your requirements

👤 Buyers, RevOps, procurement leads · ⏱ ~120 min · intermediate

When to use: You've shortlisted 3 vendors and need to pick one defensibly.

Prerequisites
  • Skill cloned — git clone https://github.com/salespeak-ai/buyer-eval-skill ~/.claude/skills/buyer-eval-skill
Flow
  1. Define the rubric
    We're buying a customer support platform. Propose a weighted rubric (feature depth, integrations, price, security, support).
    → Weighted criteria with rationale
  2. Interview each vendor
    Use the skill's question bank to interview vendor A. Their docs / AI agent is at <url>. Record evidence per criterion.
    → Per-criterion evidence snippets with sources
  3. Score and compare
    Score all three and produce a comparison matrix with a recommendation.
    → Matrix + recommendation + named risks

Outcome: A defensible vendor pick with evidence, not vibes.

Pitfalls
  • Rubric favors the vendor you already like — Lock the rubric before seeing demos
  • Evidence is just vendor marketing claims — Weight third-party sources (docs, G2, case studies) over marketing
Combine with: filesystem

Compare RFP responses against your requirements

👤 Procurement teams drowning in RFP responses · ⏱ ~90 min · intermediate

When to use: You sent an RFP, got 5 responses, and need to rank them.

Flow
  1. Load the responses
    Here are 5 RFP response PDFs in rfp/. Extract answers per requirement.
    → Per-requirement matrix
  2. Flag gaps and fluff
    Where did a vendor answer the wrong question or punt? Flag fluff.
    → Honest read of each response
  3. Rank
    Rank the responses and write a short memo for the evaluation committee.
    → Ranked memo

Outcome: A fair, fast RFP review.

Pitfalls
  • Rewarding the best-written response over the best fit — Score on substance; penalize fluff

Combinations

Combine with other MCPs for a 10x effect

buyer-eval-skill + filesystem

Read PDFs/docs from a local folder during evaluation

Read rfp/responses/ and extract answers per requirement.
buyer-eval-skill + google-ai-mode-skill

Cross-check vendor claims against public info

For each vendor claim that isn't in their docs, run a quick web check.

Tools

What this MCP provides

Tool | Inputs | When to call | Cost
rubric-design | domain + must-haves | Start of any evaluation | 0
vendor-interview | vendor source (docs/AI agent) | Per vendor | 0
scoring-and-matrix | evidence per vendor + rubric | After interviews | 0
recommendation-memo | matrix | Final step | 0
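Under the hood, the scoring-and-matrix step boils down to a weighted sum of per-criterion evidence scores. A minimal sketch: the criteria mirror the rubric prompt above, but every weight, score, and vendor name here is a hypothetical placeholder, not output from the skill:

```python
# Illustrative sketch of scoring-and-matrix: weighted sum of per-criterion
# evidence scores. All weights and scores are made-up placeholder values.
rubric = {  # criterion -> weight (weights sum to 1.0)
    "feature depth": 0.30,
    "integrations": 0.20,
    "price": 0.20,
    "security": 0.20,
    "support": 0.10,
}

# Evidence-backed scores per vendor on a 0-5 scale (hypothetical numbers)
scores = {
    "Vendor A": {"feature depth": 4, "integrations": 3, "price": 2, "security": 5, "support": 5},
    "Vendor B": {"feature depth": 3, "integrations": 5, "price": 4, "security": 3, "support": 3},
    "Vendor C": {"feature depth": 5, "integrations": 2, "price": 3, "security": 4, "support": 2},
}

def weighted_total(vendor_scores):
    """Sum of weight * score over all rubric criteria."""
    return sum(rubric[c] * s for c, s in vendor_scores.items())

# Comparison matrix output: vendors ranked by weighted total
ranking = sorted(scores, key=lambda v: weighted_total(scores[v]), reverse=True)
for vendor in ranking:
    print(f"{vendor}: {weighted_total(scores[vendor]):.2f}")
```

Locking the weights before any vendor is scored is what makes the resulting ranking defensible: the same inputs always produce the same matrix.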

Cost and limits

What it costs

API quota: none beyond LLM
Tokens per call: 10–40k per full eval
Price: free
Tip: Interview one vendor at a time — keeps context focused and evidence clean

Security

Permissions, secrets, blast radius

Credential storage: none
Outbound traffic: only if the skill calls out to vendor agents you point it at

Troubleshooting

Common errors and fixes

Evidence is shallow

Point Claude at primary docs; fall back to published case studies

Scores feel gamed

Re-lock the rubric and re-run scoring blind to vendor identity if possible

Alternatives

buyer-eval-skill compared

Alternative | When to use | Trade-off
creative-director-skill | Evaluating creative concepts, not B2B software | Different judgment domain

More

Resources

📖 Read the official README on GitHub

🐙 Open issues

🔍 All 400+ MCP servers and Skills