
buyer-eval-skill

by salespeak-ai · salespeak-ai/buyer-eval-skill

B2B vendor evaluation skill — domain-expert questions, evidence-based scoring, and structured vendor interviews.

buyer-eval-skill makes Claude your B2B procurement lead: it asks domain-expert questions of the vendor (or their AI agent), records evidence against your evaluation rubric, scores each criterion, and produces a comparison matrix when you're evaluating multiple vendors. It cuts the time from "let's look at vendors" to "here's my recommendation, with receipts".


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "buyer-eval-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "buyer-eval-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config overrides global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "buyer-eval-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in Cline's sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "buyer-eval-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "buyer-eval-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/salespeak-ai/buyer-eval-skill",
        "~/.claude/skills/buyer-eval-skill"
      ]
    }
  ]
}

Continue uses an array of server objects instead of a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "buyer-eval-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/salespeak-ai/buyer-eval-skill",
          "~/.claude/skills/buyer-eval-skill"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads automatically on save.

claude mcp add buyer-eval-skill -- git clone https://github.com/salespeak-ai/buyer-eval-skill ~/.claude/skills/buyer-eval-skill

One line. Verify with claude mcp list. Remove with claude mcp remove.

Use cases

Real-world uses: buyer-eval-skill

How to evaluate three SaaS vendors against your requirements

👤 Buyers, RevOps, procurement leads · ⏱ ~120 min · intermediate

When to use: You've shortlisted 3 vendors and need to pick one defensibly.

Prerequisites
  • Skill cloned — git clone https://github.com/salespeak-ai/buyer-eval-skill ~/.claude/skills/buyer-eval-skill
Workflow
  1. Define the rubric
    We're buying a customer support platform. Propose a weighted rubric (feature depth, integrations, price, security, support).
    → Weighted criteria with rationale
  2. Interview each vendor
    Use the skill's question bank to interview vendor A. Their docs / AI agent is at <url>. Record evidence per criterion.
    → Per-criterion evidence snippets with sources
  3. Score and compare
    Score all three and produce a comparison matrix with a recommendation.
    → Matrix + recommendation + named risks

Outcome: A defensible vendor pick with evidence, not vibes.
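The weighted scoring behind step 3 can be sketched as plain arithmetic. This is a minimal illustration, assuming a 0–5 score scale; the criteria weights and vendor names are hypothetical, not the skill's actual output format:

```python
# Hypothetical weights for the rubric criteria from step 1 (must sum to 1).
WEIGHTS = {"feature_depth": 0.30, "integrations": 0.25,
           "price": 0.20, "security": 0.15, "support": 0.10}

# Per-criterion 0-5 scores, as recorded against evidence during interviews.
scores = {
    "vendor_a": {"feature_depth": 4, "integrations": 3, "price": 5, "security": 4, "support": 3},
    "vendor_b": {"feature_depth": 5, "integrations": 4, "price": 2, "security": 5, "support": 4},
    "vendor_c": {"feature_depth": 3, "integrations": 5, "price": 4, "security": 3, "support": 5},
}

def weighted_total(per_criterion: dict) -> float:
    """Weighted sum of criterion scores, rounded for the comparison matrix."""
    return round(sum(WEIGHTS[c] * s for c, s in per_criterion.items()), 2)

matrix = {vendor: weighted_total(s) for vendor, s in scores.items()}
ranking = sorted(matrix, key=matrix.get, reverse=True)
# matrix  → {"vendor_a": 3.85, "vendor_b": 4.05, "vendor_c": 3.9}
# ranking → ["vendor_b", "vendor_c", "vendor_a"]
```

Locking WEIGHTS before any demo (per the pitfall below) is what makes the final ranking defensible.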

Pitfalls
  • Rubric favors the vendor you already like — Lock the rubric before seeing demos
  • Evidence is just vendor marketing claims — Weight third-party sources (docs, G2, case studies) over marketing
Combine with: filesystem

Compare RFP responses against your requirements

👤 Procurement teams drowning in RFP responses · ⏱ ~90 min · intermediate

When to use: You sent an RFP, got 5 responses, and need to rank them.

Workflow
  1. Load the responses
    Here are 5 RFP response PDFs in rfp/. Extract answers per requirement.
    → Per-requirement matrix
  2. Flag gaps and fluff
    Where did a vendor answer the wrong question or punt? Flag fluff.
    → Honest read of each response
  3. Rank
    Rank the responses and write a short memo for the evaluation committee.
    → Ranked memo
    → Ranked memo

Outcome: A fair, fast RFP review.

Pitfalls
  • Rewarding the best-written response over the best fit — Score on substance; penalize fluff
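The "score on substance; penalize fluff" rule above can be sketched numerically. This is an illustrative sketch only: the answer labels ("direct", "partial", "punt"), credit values, and vendor names are hypothetical, not the skill's real schema:

```python
# Each requirement gets a label for how the vendor answered it.
responses = {
    "acme":    ["direct", "direct", "partial", "punt", "direct"],
    "globex":  ["direct", "partial", "partial", "direct", "direct"],
    "initech": ["punt", "direct", "punt", "partial", "direct"],
}

# Full credit for a direct answer, half for partial; punting costs points,
# so well-written fluff cannot outrank a substantive response.
CREDIT = {"direct": 1.0, "partial": 0.5, "punt": -0.25}

def substance_score(answers: list) -> float:
    """Average per-requirement credit, rounded for the committee memo."""
    return round(sum(CREDIT[a] for a in answers) / len(answers), 2)

ranked = sorted(responses, key=lambda v: substance_score(responses[v]), reverse=True)
# ranked → ["globex", "acme", "initech"]
```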

Combinations

Combine with other MCPs for 10× leverage

buyer-eval-skill + filesystem

Read PDFs/docs from a local folder during evaluation

Read rfp/responses/ and extract answers per requirement.
buyer-eval-skill + google-ai-mode-skill

Cross-check vendor claims against public info

For each vendor claim that isn't in their docs, run a quick web check.

Tools

What this MCP exposes

Tool                 Inputs                         When to call             Cost
rubric-design        domain + must-haves            Start of any evaluation  0
vendor-interview     vendor source (docs/AI agent)  Per vendor               0
scoring-and-matrix   evidence per vendor + rubric   After interviews         0
recommendation-memo  matrix                         Final step               0

Cost and limits

What it costs to run

API quota: none beyond LLM
Tokens per call: 10–40k per full eval
Monetary: free
Tip: Interview one vendor at a time — keeps context focused and evidence clean

Security

Permissions, secrets, scope

Credential storage: none
Data egress: only if the skill calls out to vendor agents you point it at

Troubleshooting

Common errors and fixes

Evidence is shallow

Point Claude at primary docs; fall back to published case studies.

Scores feel gamed

Re-lock the rubric and re-run scoring blind to vendor identity if possible

Alternatives

buyer-eval-skill vs. others

Alternative               When to use                                      Trade-off
creative-director-skill   Evaluating creative concepts, not B2B software   Different judgment domain

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and Skills