
SkillCompass

by Evol-ai · Evol-ai/SkillCompass

Evaluate Agent Skill quality — find the weakest link, fix it, and prove the fix worked with before/after metrics.

SkillCompass scores your Agent Skills on clarity, activation rate, downstream correctness, and context cost. It highlights the skill most likely to be hurting your agent's performance, suggests a fix, and re-runs the evaluation so you can show the improvement. Useful when you have a shelf of skills and don't know which are actually earning their context weight.


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "skillcompass-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "skillcompass-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project-level config overrides the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "skillcompass-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "skillcompass-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "skillcompass-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Evol-ai/SkillCompass",
        "~/.claude/skills/SkillCompass"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "skillcompass-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/Evol-ai/SkillCompass",
          "~/.claude/skills/SkillCompass"
        ]
      }
    }
  }
}

Add it under context_servers. Zed hot-reloads on save.

claude mcp add skillcompass-skill -- git clone https://github.com/Evol-ai/SkillCompass ~/.claude/skills/SkillCompass

A single line. Verify with claude mcp list; remove with claude mcp remove.

Use cases

SkillCompass in practice

Find the skill pulling your agent's performance down

👤 Skill authors with 5+ skills installed · ⏱ ~45 min · advanced

When to use it: You feel the agent has gotten worse, not better, as you added skills.

Prerequisites
  • Node 20+ — nvm install 20
  • Skill cloned and installed — git clone https://github.com/Evol-ai/SkillCompass ~/.claude/skills/SkillCompass; npm i
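The Node 20+ prerequisite can be checked mechanically before cloning; a minimal sketch, where check_node is a hypothetical helper (not part of SkillCompass) that parses a `node -v` style version string:

```shell
# Sketch: check the Node 20+ prerequisite from a version string like "v20.11.1".
check_node() {
  v=${1#v}          # drop the leading "v"
  major=${v%%.*}    # keep only the major version
  if [ "$major" -ge 20 ]; then
    echo "ok"
  else
    echo "too old: run nvm install 20"
  fi
}

check_node v20.11.1   # → ok
check_node v18.19.0   # → too old: run nvm install 20
```

In practice you would pass `"$(node -v)"` to the helper instead of a literal string.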
Steps
  1. Run the evaluator
    Score all skills in ~/.claude/skills/ — show me the weakest link.
    → Ranked skill list with per-dimension scores
  2. Diagnose the loser
    For the weakest skill, what specifically is wrong?
    → A concrete critique (vague description, conflict with another skill, etc.)
  3. Propose a fix
    Suggest a minimal edit to SKILL.md to fix it.
    → A small, reviewable diff
  4. Re-evaluate
    Re-run the eval and show before/after.
    → Improved metrics, with evidence

Result: A measurably better skill bundle, with a reproducible eval process.

Pitfalls
  • Gaming the eval metric instead of helping real tasks — include task-level downstream metrics (actual agent outcomes), not just text-level scores

Review a new skill before you publish it

👤 Skill authors shipping their first bundle · ⏱ ~20 min · intermediate

When to use it: Before pushing to GitHub and telling the world about your skill.

Steps
  1. Score the draft
    Evaluate my draft skill at ./my-skill/.
    → Dimension scores
  2. Fix obvious issues
    Apply the low-hanging suggestions.
    → Edits to SKILL.md

Result: A publication-ready skill rather than a rough draft.

Pitfalls
  • Chasing a perfect score — ship when scores plateau; returns diminish

Combinations

Pair it with other MCPs for a 10x effect

skillcompass-skill + skill-optimizer-skill

Two complementary tools: SkillCompass ranks, skill-optimizer drills into SKILL.md patterns

Use SkillCompass to pick the worst skill; use skill-optimizer to deeply analyze its SKILL.md.
skillcompass-skill + filesystem

Operate across the full ~/.claude/skills/ directory

Evaluate every skill in ~/.claude/skills/ and give me a CSV.
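The filesystem pairing above can be approximated with plain shell; a sketch assuming skills live under ~/.claude/skills/ as in the install steps, where the name,path columns are illustrative and not SkillCompass's actual output format:

```shell
# Illustrative sketch: emit a name,path CSV of every directory under the skills folder.
# SKILLS_DIR is an override hook for testing; the real location is the default below.
SKILLS="${SKILLS_DIR:-$HOME/.claude/skills}"
{
  echo "skill,path"
  for d in "$SKILLS"/*/; do
    [ -d "$d" ] && printf '%s,%s\n' "$(basename "$d")" "$d"
  done
} > skills.csv
head -n 1 skills.csv   # → skill,path
```

A real run would hand the resulting rows to the evaluator rather than stop at paths.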

Tools

What this MCP exposes

Tool | Inputs | When to call | Cost
skill-scoring | skill path(s) | Periodic audits | eval compute
weakest-link-id | bundle scores | After scoring | 0
fix-suggestion | weak skill + critique | Before editing | 0
before-after-eval | pre/post SKILL.md | After applying fixes | eval compute

Cost and limits

Runtime cost

API quota
None beyond your LLM provider (evals use LLM calls)
Tokens per call
Evals can be heavy — budget 20–100k tokens for a full bundle scan
Monetary
Free, MIT-licensed
Tip
Run on one skill at a time while iterating; save full-bundle runs for audits

Security

Permissions, secrets, scope

Credential storage: none at the skill level
Data egress: none beyond your LLM provider

Troubleshooting

Common errors and fixes

Node errors on install

Ensure Node 20+ and run npm i inside the skill directory.

Check: node -v
Evals are inconsistent run-to-run

Pin the task seed, use a deterministic sample, and record the provider and model.
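One way to keep runs comparable is to record the pinned settings alongside the results; a sketch assuming SkillCompass reads an eval config file, where the filename and every field below are hypothetical, not a documented SkillCompass format:

```json
{
  "eval": {
    "seed": 42,
    "sample": "fixed-task-set-v1",
    "temperature": 0,
    "provider": "<your provider>",
    "model": "<exact model id>"
  }
}
```

Checking this file into the skill repo makes before/after comparisons reproducible by anyone.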

Alternatives

SkillCompass vs. others

Alternative | When to use it | Trade-off
skill-optimizer-skill | You want one skill analyzed deeply rather than a bundle ranked | Depth over breadth
manual review | You have only 1–2 skills | Doesn't scale

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse the 400+ MCP servers and Skills