
SkillCompass

by Evol-ai · Evol-ai/SkillCompass

Evaluate Agent Skill quality — find the weakest link, fix it, and prove the fix worked with before/after metrics.

SkillCompass scores your Agent Skills on clarity, activation rate, downstream correctness, and context cost. It highlights the skill most likely to be hurting your agent's performance, suggests a fix, and re-runs the evaluation so you can show the improvement. Useful when you have a shelf of skills and don't know which are actually earning their context weight.
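The four dimensions can be pictured as a small per-skill record with a composite score that trades quality off against context cost. Everything below (names, weights, numbers) is an illustrative sketch, not SkillCompass's actual schema or weighting:

```python
from dataclasses import dataclass

@dataclass
class SkillScore:
    """Illustrative record of the four dimensions SkillCompass reports."""
    name: str
    clarity: float                 # 0-1: how unambiguous SKILL.md is
    activation_rate: float         # 0-1: how often the skill fires when it should
    downstream_correctness: float  # 0-1: task outcomes when it does fire
    context_cost: int              # tokens the skill adds to every prompt

    def composite(self) -> float:
        # Hypothetical weighting: average the quality dimensions,
        # then penalize context cost per 1k tokens.
        quality = (self.clarity + self.activation_rate + self.downstream_correctness) / 3
        return quality - 0.01 * (self.context_cost / 1000)

def weakest_link(scores: list[SkillScore]) -> SkillScore:
    """The skill most likely to be dragging the bundle down."""
    return min(scores, key=SkillScore.composite)

bundle = [
    SkillScore("pdf-extract", 0.9, 0.8, 0.85, 1200),
    SkillScore("sql-helper", 0.4, 0.3, 0.6, 3500),
]
print(weakest_link(bundle).name)  # → sql-helper
```

The point of a composite like this is that a skill can score well on prose quality and still lose on context cost, which is exactly the "earning their context weight" question.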

Why use it

Key features

Live demo

What it looks like in practice


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "skillcompass-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "skillcompass-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config overrides global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "skillcompass-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in Cline's sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "skillcompass-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "skillcompass-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ]
    }
  ]
}

Continue uses an array of server objects instead of a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "skillcompass-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/Evol-ai/SkillCompass",
          "~/.claude/skills/SkillCompass"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads automatically on save.

claude mcp add skillcompass-skill -- git clone https://github.com/Evol-ai/SkillCompass ~/.claude/skills/SkillCompass

One line. Verify with claude mcp list. Remove with claude mcp remove.

Use cases

Real-world uses: SkillCompass

Find the skill pulling your agent's performance down

👤 Skill authors with 5+ skills installed ⏱ ~45 min advanced

When to use: You feel the agent has gotten worse, not better, as you added skills.

Prerequisites
  • Node 20+ — nvm install 20
  • Skill cloned and installed — git clone https://github.com/Evol-ai/SkillCompass ~/.claude/skills/SkillCompass; npm i
Workflow
  1. Run the evaluator
    Score all skills in ~/.claude/skills/ — show me the weakest link.
    → Ranked skill list with per-dimension scores
  2. Diagnose the loser
    For the weakest skill, what specifically is wrong?
    → Concrete critique (vague description, conflicting with other skill, etc.)
  3. Propose a fix
    Suggest a minimal edit to SKILL.md to fix it.
    → Small, reviewable diff
  4. Re-evaluate
    Re-run the eval and show before/after.
    → Metrics improved, with evidence

Outcome: A measurably better skill bundle, with a reproducible eval process.

Pitfalls
  • Gaming the eval metric instead of helping real tasks — Include task-level downstream metrics (actual agent outcomes), not just text-level ones
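One way to keep the metric honest is to weight real task outcomes above text-level polish when blending a final score. A minimal sketch under assumed names, not SkillCompass's implementation:

```python
def blended_score(text_score: float, task_results: list[bool],
                  task_weight: float = 0.6) -> float:
    """Blend a text-level score (0-1) with pass/fail outcomes from real agent tasks.

    Weighting task outcomes higher makes it harder to game the metric by
    polishing SKILL.md prose without improving actual agent behavior.
    """
    if not task_results:
        return text_score  # no task evidence yet; fall back to text-level only
    task_score = sum(task_results) / len(task_results)
    return task_weight * task_score + (1 - task_weight) * text_score

# A skill with beautiful prose but failing tasks still scores low:
print(blended_score(0.95, [False, False, True]))
```

With the assumed 60/40 split, a 0.95 text score over mostly failed tasks blends down to roughly 0.58.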

Review a new skill before you publish it

👤 Skill authors shipping their first bundle ⏱ ~20 min intermediate

When to use: Before pushing to GitHub and telling the world about your skill.

Workflow
  1. Score the draft
    Evaluate my draft skill at ./my-skill/.
    → Dimension scores
  2. Fix obvious issues
    Apply the low-hanging suggestions.
    → Edits in SKILL.md

Outcome: A publication-ready skill rather than a rough draft.

Pitfalls
  • Chasing a perfect score — Ship when scores plateau; past that point the returns diminish

Combinations

Combine with other MCPs for 10× leverage

skillcompass-skill + skill-optimizer-skill

Two complementary tools: SkillCompass ranks, skill-optimizer drills into SKILL.md patterns

Use SkillCompass to pick the worst skill; use skill-optimizer to deeply analyze its SKILL.md.
skillcompass-skill + filesystem

Operate across the full ~/.claude/skills/ directory

Evaluate every skill in ~/.claude/skills/ and give me a CSV.
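The CSV the prompt asks for can be produced with Python's standard csv module. The scores and filename below are made up for illustration; in practice the numbers would come from SkillCompass's evaluator rather than being hard-coded:

```python
import csv
from pathlib import Path

# Hypothetical per-skill scores keyed by skill directory name.
scores = {
    "SkillCompass": {"clarity": 0.9, "activation": 0.8, "correctness": 0.85, "context_tokens": 1200},
    "sql-helper": {"clarity": 0.4, "activation": 0.3, "correctness": 0.6, "context_tokens": 3500},
}

out = Path("skill-scores.csv")
with out.open("w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["skill", "clarity", "activation", "correctness", "context_tokens"]
    )
    writer.writeheader()
    for name, dims in scores.items():
        writer.writerow({"skill": name, **dims})

print(out.read_text())
```

A flat CSV like this is what makes the filesystem combo useful: one row per skill, sortable in any spreadsheet.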

Tools

What this MCP exposes

Tool               Inputs                  When to call           Cost
skill-scoring      skill path(s)           Periodic audits        eval compute
weakest-link-id    bundle scores           After scoring          0
fix-suggestion     weak skill + critique   Before editing         0
before-after-eval  pre/post SKILL.md       After applying fixes   eval compute
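The four tools compose into a single audit loop: score, identify, fix, prove. A sketch with a stand-in `call` function, since the real tool signatures aren't documented here; none of this is an actual SkillCompass API:

```python
# Hypothetical audit loop over the four tools. `call` stands in for
# whatever MCP client invokes the tools.
def audit(call, skill_paths):
    scores = call("skill-scoring", paths=skill_paths)         # eval compute
    weakest = call("weakest-link-id", scores=scores)          # free
    fix = call("fix-suggestion", skill=weakest)               # free
    # ...apply `fix` to the weakest skill's SKILL.md, then prove it helped:
    return call("before-after-eval", skill=weakest, fix=fix)  # eval compute

# A stub client so the sketch runs end to end:
def fake_call(tool, **kwargs):
    return {
        "skill-scoring": {"sql-helper": 0.3},
        "weakest-link-id": "sql-helper",
        "fix-suggestion": "tighten the description",
        "before-after-eval": {"before": 0.3, "after": 0.7},
    }[tool]

result = audit(fake_call, ["~/.claude/skills"])
print(result)  # → {'before': 0.3, 'after': 0.7}
```

Note the cost asymmetry from the table: the two eval steps bookend the loop, while identification and suggestion in between are free.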

Cost and limits

What it costs to run

API quota
none beyond your LLM provider (evals use LLM calls)
Tokens per call
evals can be heavy — budget 20–100k tokens for a full bundle scan
Monetary
free, MIT-licensed
Tip
Run on one skill at a time during iteration; save full-bundle runs for audits

Security

Permissions, secrets, scope

Credential storage: none at the skill level
Data egress: none beyond your LLM provider

Troubleshooting

Common errors and fixes

Node errors on install

Ensure Node 20+, then run npm i inside the skill directory.

Check: node -v
Evals are inconsistent run-to-run

Pin the task seed, use a non-stochastic sample, and record the provider and model.
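Pinning a run can be done in a few lines; the field names below are illustrative, not a SkillCompass config format:

```python
import json
import random

# Pin everything that can vary between runs; field names are made up.
eval_config = {
    "provider": "anthropic",     # record who served the model...
    "model": "example-model",    # ...and exactly which model (placeholder name)
    "temperature": 0.0,          # non-stochastic sampling where supported
    "seed": 42,                  # fixed seed for the task sample below
    "n_tasks": 5,
}

rng = random.Random(eval_config["seed"])
all_tasks = [f"task-{i}" for i in range(20)]
sample = rng.sample(all_tasks, eval_config["n_tasks"])

# Store the config next to the results so before/after runs stay comparable.
print(json.dumps({"config": eval_config, "tasks": sample}, indent=2))
```

Because the sample is drawn from a seeded `random.Random`, a before/after pair evaluates the same tasks, so score deltas reflect the skill edit rather than sampling noise.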

Alternatives

SkillCompass vs. others

Alternative            When to use                                                           Trade-off
skill-optimizer-skill  You want a single skill analyzed deeply rather than a bundle ranked   Depth over breadth
manual review          You have 1–2 skills total                                             Doesn't scale

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and Skills