
dspy-skills

by OmidZamani · OmidZamani/dspy-skills

Claude skills pack for DSPy — program language models, optimize prompts, build RAG pipelines systematically.

dspy-skills teaches Claude the DSPy mental model: signatures, modules, predictors, teleprompters, and evaluation loops. Instead of hand-crafting prompts, you describe the task via signatures and let DSPy's optimizers do the work — and Claude writes the DSPy code idiomatically rather than resorting to raw prompt templates.


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "dspy-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/OmidZamani/dspy-skills",
        "~/.claude/skills/dspy-skills"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "dspy-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/OmidZamani/dspy-skills",
        "~/.claude/skills/dspy-skills"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project-level config overrides the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "dspy-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/OmidZamani/dspy-skills",
        "~/.claude/skills/dspy-skills"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "dspy-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/OmidZamani/dspy-skills",
        "~/.claude/skills/dspy-skills"
      ],
      "_inferred": true
    }
  }
}

Same structure as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "dspy-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/OmidZamani/dspy-skills",
        "~/.claude/skills/dspy-skills"
      ]
    }
  ]
}

Continue uses an array of server objects instead of a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "dspy-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/OmidZamani/dspy-skills",
          "~/.claude/skills/dspy-skills"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads on save.

claude mcp add dspy-skill -- git clone https://github.com/OmidZamani/dspy-skills ~/.claude/skills/dspy-skills

One-liner. Verify with claude mcp list. Remove with claude mcp remove.

Use cases

Hands-on with dspy-skills

How to build your first DSPy program and optimize it

👤 ML engineers and applied researchers · ⏱ ~90 min · advanced

When to use: You have a task where prompt quality matters and you want a systematic way to improve it.

Prerequisites
  • Python 3.10+ with dspy-ai installed — pip install dspy-ai
  • Skill cloned — git clone https://github.com/OmidZamani/dspy-skills ~/.claude/skills/dspy-skills
Steps
  1. Define the signature
    Task: classify support tickets into {billing, technical, account}. Give me a DSPy signature and a simple Predict module.
    → Signature + module code
  2. Write an eval
    Add an evaluation set of 50 labeled examples and an accuracy metric.
    → Eval harness with metric callable
  3. Optimize
    Run BootstrapFewShot to compile the module against the eval set.
    → Compiled predictor + improved score

Result: A DSPy-optimized predictor that beats a hand-written prompt, with reproducible code.

Pitfalls
  • Eval too small — the optimizer overfits — use at least 100–200 examples and hold out a true test set
  • Metric doesn't capture what you care about — invest in metric design before worrying about model choice
Combine with: filesystem
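A plain-Python guard against the "eval too small, optimizer overfits" pitfall: split once with a fixed seed and never let the optimizer touch the held-out set. The split ratio and seed here are arbitrary choices:

```python
import random

def split_holdout(examples, test_fraction=0.3, seed=13):
    """Shuffle once with a fixed seed, then carve off a test set
    that the optimizer never sees."""
    pool = list(examples)
    random.Random(seed).shuffle(pool)
    n_test = int(len(pool) * test_fraction)
    return pool[n_test:], pool[:n_test]  # (trainset, testset)

labeled = [{"ticket": f"t{i}", "category": "billing"} for i in range(200)]
trainset, testset = split_holdout(labeled)
```

Report the optimizer's score on the testset only once, at the end; any number you peeked at during tuning is no longer a test score.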

Build a RAG pipeline with DSPy

👤 Engineers building retrieval-augmented systems · ⏱ ~120 min · advanced

When to use: You want modular, optimizable RAG rather than a hand-wired chain.

Steps
  1. Define the modules
    Create a DSPy RAG pipeline: RetrieveThenRead with ColBERTv2 or a local retriever.
    → Modular pipeline with separate retrieval and generation
  2. Optimize end-to-end
    Write an eval on our QA set and run MIPRO to improve.
    → Compiled pipeline with score delta

Result: A RAG pipeline you can improve by changing the eval, not by rewriting prompts.

Pitfalls
  • Retriever quality caps end-to-end quality — evaluate retrieval separately (recall@k) before optimizing generation
Combine with: local-rag

Combinations

With other MCPs for 10x impact

dspy-skill + local-rag

Plug a local retriever into DSPy's RAG modules

Swap the ColBERT retriever for a local-rag MCP as the retrieval source.

dspy-skill + filesystem

Organize DSPy programs, evals, and artifacts in a repo

Lay out a DSPy project with programs/, evals/, and artifacts/ directories.
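The layout in that prompt can be bootstrapped with a few lines of stdlib Python. The directory names come from the prompt above; everything else is an assumption:

```python
from pathlib import Path

def scaffold(root="dspy-project"):
    """Create the programs/evals/artifacts layout for a DSPy repo."""
    base = Path(root)
    for sub in ("programs", "evals", "artifacts"):
        (base / sub).mkdir(parents=True, exist_ok=True)
    return sorted(p.name for p in base.iterdir())

# In a fresh directory this returns ['artifacts', 'evals', 'programs'].
```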

Tools

What this MCP provides

Tool | Inputs | When to call | Cost
signature-design | task spec | Start of any DSPy program | 0
module-authoring | signatures + flow | After signatures | 0
teleprompter-optimization | module + eval | After eval is ready | LLM tokens during optimization
evaluation-harness | task data | Before optimizing | 0

Costs & limits

What it costs to run

API quota
LLM tokens dominate during optimization runs
Tokens per call
Can be high — optimizations may invoke the LLM hundreds of times
Cost in €
Depends on the provider
Tip
Use cheap models during teleprompter runs, upgrade only for final evaluation

Security

Permissions, secrets, scope

Credential storage: LLM provider keys in env vars
Data egress: LLM provider endpoints

Troubleshooting

Common errors and fixes

Teleprompter seems to make things worse

Check metric correctness; use a held-out set; widen the example pool.

Optimization eats your budget

Cap max_bootstrapped_demos and use cheaper models during search.

Alternatives

dspy-skills vs. others

Alternative | When to use instead | Trade-off
prompt-architect-skill | You want prompt-level craft, not DSPy's programmatic approach | Hand-crafted vs. optimized

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and skills