
SkillCompass

by Evol-ai · Evol-ai/SkillCompass

Evaluate Agent Skill quality — find the weakest link, fix it, and prove the fix worked with before/after metrics.

SkillCompass scores your Agent Skills on clarity, activation rate, downstream correctness, and context cost. It highlights the skill most likely to be hurting your agent's performance, suggests a fix, and re-runs the evaluation so you can show the improvement. Useful when you have a shelf of skills and don't know which are actually earning their context weight.

Why use it

Key features

Live demo

What it looks like in practice


Installation

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "skillcompass-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "skillcompass-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project config takes precedence over the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "skillcompass-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "skillcompass-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "skillcompass-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ]
    }
  ]
}

Continue uses an array of server objects, not a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "skillcompass-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/Evol-ai/SkillCompass",
          "~/.claude/skills/SkillCompass"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads automatically.

claude mcp add skillcompass-skill -- git clone https://github.com/Evol-ai/SkillCompass ~/.claude/skills/SkillCompass

A single command. Verify: claude mcp list. Remove: claude mcp remove.
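After running the one-liner, a quick sanity check confirms the clone landed where Claude looks for skills. This is only a sketch: SkillCompass defines no CLI of its own, and the path simply mirrors the clone target from the command above.

```shell
# Check that the skill directory from the install one-liner contains SKILL.md.
# SKILL_DIR can be overridden; the default mirrors the clone target above.
SKILL_DIR="${SKILL_DIR:-$HOME/.claude/skills/SkillCompass}"
if [ -f "$SKILL_DIR/SKILL.md" ]; then STATUS=installed; else STATUS=missing; fi
echo "SkillCompass: $STATUS"
```

If it reports missing, re-run the git clone above before restarting your client.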

Use cases

Real-world scenarios: SkillCompass

Find the skill pulling your agent's performance down

👤 Skill authors with 5+ skills installed · ⏱ ~45 min · advanced

When to use: You feel the agent has gotten worse, not better, as you added skills.

Prerequisites
  • Node 20+ — nvm install 20
  • Skill cloned and installed — git clone https://github.com/Evol-ai/SkillCompass ~/.claude/skills/SkillCompass; npm i
Flow
  1. Run the evaluator
    Score all skills in ~/.claude/skills/ — show me the weakest link.
    → Ranked skill list with per-dimension scores
  2. Diagnose the loser
    For the weakest skill, what specifically is wrong?
    → Concrete critique (vague description, conflict with another skill, etc.)
  3. Propose a fix
    Suggest a minimal edit to SKILL.md to fix it.
    → Small, reviewable diff
  4. Re-evaluate
    Re-run the eval and show before/after.
    → Metrics improved, with evidence

Outcome: A measurably better skill bundle, with a reproducible eval process.

Pitfalls
  • Gaming the eval metric instead of helping real tasks — include task-level downstream metrics (actual agent outcomes), not just text-level scores
Combine with: skill-optimizer-skill · filesystem

Review a new skill before you publish it

👤 Skill authors shipping their first bundle · ⏱ ~20 min · intermediate

When to use: Before pushing to GitHub and telling the world about your skill.

Flow
  1. Score the draft
    Evaluate my draft skill at ./my-skill/.
    → Dimension scores
  2. Fix obvious issues
    Apply the low-hanging suggestions.
    → Edits in SKILL.md

Outcome: A publication-ready skill rather than a rough draft.

Pitfalls
  • Chasing a perfect score — ship when scores plateau; returns diminish

Combinations

Pair with other MCPs for a 10x effect

skillcompass-skill + skill-optimizer-skill

Two complementary tools: SkillCompass ranks, skill-optimizer drills into SKILL.md patterns

Use SkillCompass to pick the worst skill; use skill-optimizer to deeply analyze its SKILL.md.
skillcompass-skill + filesystem

Operate across the full ~/.claude/skills/ directory

Evaluate every skill in ~/.claude/skills/ and give me a CSV.
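SkillCompass does not document the CSV layout it would emit, so assume a minimal skill,score shape for illustration. Once you have such a file, sorting it numerically surfaces the weakest skill first:

```shell
# Hypothetical two-column CSV as the agent might write it (format assumed, not specified).
printf 'skill,score\nalpha,0.91\nbeta,0.42\ngamma,0.77\n' > skill-scores.csv
# Skip the header and sort ascending by score: the weakest skill comes out on top.
tail -n +2 skill-scores.csv | sort -t, -k2 -g | head -n 1
# prints: beta,0.42
```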

Tools

What this MCP provides

Tool               Inputs                 When to call          Cost
skill-scoring      skill path(s)          Periodic audits       eval compute
weakest-link-id    bundle scores          After scoring         0
fix-suggestion     weak skill + critique  Before editing        0
before-after-eval  pre/post SKILL.md      After applying fixes  eval compute

Cost and limits

What it costs

API quota
none beyond your LLM provider (evals use LLM calls)
Tokens per call
evals can be heavy — budget 20–100k tokens for a full bundle scan
Price
free, MIT
Tip
Run on one skill at a time during iteration; bundle runs only for audits
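The 20–100k budget above is easy to sanity-check with rough per-skill arithmetic. Both figures here are illustrative assumptions, not SkillCompass measurements:

```shell
# Back-of-the-envelope budget for a full bundle scan (numbers are assumed).
SKILLS=10              # size of a typical bundle
TOKENS_PER_SKILL=5000  # one eval pass per skill
echo "$(( SKILLS * TOKENS_PER_SKILL )) tokens"
# prints: 50000 tokens (inside the 20-100k range above)
```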

Security

Permissions, secrets, blast radius

Credential storage: none at skill level
Outbound traffic: none beyond your LLM provider

Troubleshooting

Common errors and fixes

Node errors on install

Ensure Node 20+; npm i inside the skill directory.

Check: node -v
Evals are inconsistent run-to-run

Fix the task seed and use a non-stochastic sample; record provider+model.
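One way to make that concrete is to pin the parameters that make evals drift and log them next to each run, so run-to-run diffs stay attributable. The variable names here are illustrative; SkillCompass defines no such interface.

```shell
# Pin the knobs that cause eval drift, then record them with the run.
EVAL_SEED=42          # fixed task-sampling seed
EVAL_TEMPERATURE=0    # deterministic decoding where the provider allows it
EVAL_MODEL="<provider/model>"  # record exactly which model scored the run
echo "seed=$EVAL_SEED temp=$EVAL_TEMPERATURE model=$EVAL_MODEL" >> eval-runs.log
tail -n 1 eval-runs.log
```

Comparing before/after metrics is only meaningful when this line matches between the two runs.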

Alternatives

SkillCompass compared

Alternative            When to use                                                           Trade-off
skill-optimizer-skill  You want a single skill analyzed deeply rather than a bundle ranked   Depth over breadth
manual review          You have 1–2 skills total                                             Doesn't scale

More

Resources

📖 Read the official README on GitHub

🐙 Open issues

🔍 All 400+ MCP servers and Skills