
AI-Research-SKILLs

by Orchestra-Research · Orchestra-Research/AI-Research-SKILLs

87 ML-research skills covering training, fine-tuning, distributed systems, inference, and paper writing — Claude becomes a credible ML infra collaborator.

A curated library of Agent Skills for AI research and engineering. Each skill (vLLM, DeepSpeed, Axolotl, TRL, Flash Attention, Unsloth, LLaMA-Factory, etc.) ships a SKILL.md with 50-150 line quick-refs plus 300KB+ of primary references. An autoresearch orchestrator skill routes between them for end-to-end experimentation.


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project config overrides the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "ai-research-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "ai-research-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/Orchestra-Research/AI-Research-SKILLs",
          "~/.claude/skills/AI-Research-SKILLs"
        ]
      }
    }
  }
}

Add it under context_servers. Zed hot-reloads on save.

claude mcp add ai-research-skill -- git clone https://github.com/Orchestra-Research/AI-Research-SKILLs ~/.claude/skills/AI-Research-SKILLs

One line. Verify with claude mcp list. Remove with claude mcp remove.

Use cases

Concrete uses: AI-Research-SKILLs

Fine-tune a Llama model with Unsloth + LoRA without Googling

👤 ML engineers running fine-tunes on a single GPU box · ⏱ ~20 min · intermediate

When to use it: You have a dataset and want Claude to produce a runnable training script with correct flags.

Prerequisites
  • Unsloth installed — pip install unsloth — the skill will remind you of the nightly build caveats
Steps
  1. Describe the target run
    Use the unsloth skill. Fine-tune Llama-3-8B on my dataset at ~/data/train.jsonl, QLoRA, 3 epochs, output to ./out.
    → Claude writes a script with correct Unsloth imports and real CLI flags
  2. Ask for the config explanation
    Walk me through each hyperparameter and why.
    → Reasoning grounded in Unsloth's docs, not generic ML clichés

Result: A training script that runs on the first try.

Pitfalls
  • Claude mixes HF Trainer and Unsloth idioms — Insist on 'use the unsloth skill only' — don't pull from Transformers skill
Combine with: filesystem
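Before spending GPU time on the run above, it is worth sanity-checking the dataset Claude will point the script at. A minimal stdlib sketch — the `instruction`/`output` key names are an assumption; match `required_keys` to whatever schema your training script actually expects:

```python
import json
from pathlib import Path

def validate_jsonl(path, required_keys=("instruction", "output")):
    """Check that every line of a .jsonl file is valid JSON with the
    expected keys. Returns (ok_count, [(line_number, problem), ...])
    so malformed records are caught before any GPU time is spent."""
    errors, ok = [], 0
    for i, line in enumerate(Path(path).read_text().splitlines(), start=1):
        if not line.strip():
            continue  # tolerate blank lines
        try:
            record = json.loads(line)
        except json.JSONDecodeError as e:
            errors.append((i, f"invalid JSON: {e.msg}"))
            continue
        missing = [k for k in required_keys if k not in record]
        if missing:
            errors.append((i, f"missing keys: {missing}"))
        else:
            ok += 1
    return ok, errors
```

Run it on ~/data/train.jsonl first; a single bad line will otherwise surface only mid-epoch as a cryptic collator error.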

Set up a vLLM server with correct tensor-parallelism for your GPUs

👤 Infra engineers serving LLMs in production · ⏱ ~30 min · advanced

When to use it: You have 2-8 GPUs and want Claude to pick the right --tensor-parallel-size and --max-model-len.

Steps
  1. State the hardware and model
    Use the vllm skill. Serve Qwen2.5-72B on 4x H100s. Give me the exact launch command and sanity tests.
    → Correct TP size and quantization recommendation
  2. Ask for load-test script
    Now give me a locust or vllm-benchmark script to verify throughput.
    → Runnable benchmark using the right endpoint format

Result: A vLLM deployment with sanity checks and a benchmark baseline.

Pitfalls
  • Claude picks a TP size that doesn't divide the attention heads — The vLLM skill references list valid TP sizes per model family — have Claude cite that
Combine with: aws
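The TP-size pitfall above is easy to pre-check: vLLM shards attention head-wise, so --tensor-parallel-size must evenly divide the model's attention-head count. A small sketch — the head counts in the table are illustrative values and should be verified against each model's config.json (and note that with grouped-query attention the KV-head count can further constrain the choice):

```python
def valid_tp_sizes(num_attention_heads: int, num_gpus: int) -> list[int]:
    """Tensor-parallel sizes that evenly divide the attention heads
    and do not exceed the available GPUs."""
    return [tp for tp in range(1, num_gpus + 1)
            if num_attention_heads % tp == 0]

# Illustrative head counts -- verify against the model's config.json.
HEADS = {"Qwen2.5-72B": 64, "Llama-3-8B": 32}

print(valid_tp_sizes(HEADS["Qwen2.5-72B"], num_gpus=4))  # [1, 2, 4]
```

Asking Claude to cite the skill's per-family TP table, as the pitfall suggests, is still the authoritative check; this is just a fast local guard.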

Draft an ML paper with correct structure and citation conventions

👤 Researchers writing for NeurIPS/ICML/ICLR · ⏱ ~60 min · advanced

When to use it: You have experiment results and need a paper outline that matches the venue's style.

Steps
  1. Provide results and venue
    Use the ml-paper-writing skill. Target ICLR. Here are my results [paste]. Draft the intro, method, and experiments sections.
    → Structure follows venue conventions, ablation tables properly framed
  2. Revise for reviewer concerns
    What would Reviewer 2 push back on? Add preemptive responses.
    → Concrete weaknesses, not generic 'add more experiments'

Result: A draft that passes initial sanity review and is worth polishing.

Pitfalls
  • Claude over-claims in the abstract — Explicitly tell it to mirror the restraint of top-tier accepted papers
Combine with: arxiv

Combinations

Pair it with other MCPs for a 10x effect

ai-research-skill + filesystem

Store training configs, logs, and checkpoints while Claude drives experiments

Save the run config under experiments/<date>-<name>/ and tail the logs.
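The filesystem pairing above amounts to a few stdlib lines — the experiments/<date>-<name>/ layout comes from the prompt, while the config.json filename is an assumption for the sketch:

```python
import json
from datetime import date
from pathlib import Path

def save_run_config(name: str, config: dict, root: str = "experiments") -> Path:
    """Create experiments/<date>-<name>/ and persist the run config there,
    so checkpoints and logs land next to the exact hyperparameters
    that produced them."""
    run_dir = Path(root) / f"{date.today().isoformat()}-{name}"
    run_dir.mkdir(parents=True, exist_ok=True)
    (run_dir / "config.json").write_text(json.dumps(config, indent=2))
    return run_dir
```

One directory per run keeps later comparisons trivial: diff two config.json files instead of reverse-engineering flags from shell history.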
ai-research-skill + arxiv

Pull related work while the ml-paper-writing skill drafts sections

Find 5 recent arXiv papers on GRPO and have the paper-writing skill weave them into the related work section.

Tools

What this MCP exposes

Tool · Inputs · When to call · Cost
Autoresearch orchestrator · research goal · Complex multi-phase experiments · 0
Fine-tuning (Axolotl, LLaMA-Factory, Unsloth, PEFT) · dataset + base model · SFT / LoRA / QLoRA runs · 0
Post-training (TRL, GRPO, OpenRLHF, SimPO, verl, slime) · reward model or preference data · Alignment, preference optimization · 0
Distributed training (DeepSpeed, FSDP, Megatron, Accelerate) · model + cluster topology · Multi-node or multi-GPU training · 0
Inference (vLLM, SGLang, TensorRT-LLM, llama.cpp) · model + hardware · Serving a model efficiently · 0
Optimization (Flash Attention, bitsandbytes, GPTQ, AWQ, GGUF) · model weights · Fitting a model on smaller hardware · 0
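How the orchestrator routes between the skill groups above can be pictured as keyword matching over the catalog. This is purely illustrative — the repo's actual routing is prompt-driven, and the trigger words below are invented for the sketch:

```python
def route_skill(prompt: str, catalog: dict[str, list[str]]) -> list[str]:
    """Return the sub-skills whose trigger keywords appear in the prompt.
    Keyword overlap is why naming a framework explicitly routes reliably,
    and why vague prompts route nowhere."""
    text = prompt.lower()
    return [skill for skill, keywords in catalog.items()
            if any(k in text for k in keywords)]

# A tiny, invented slice of the 87-skill catalog.
CATALOG = {
    "vllm": ["vllm", "tensor-parallel", "serve"],
    "unsloth": ["unsloth", "qlora"],
    "deepspeed": ["deepspeed", "zero-3"],
}

print(route_skill("Serve Qwen2.5-72B with vLLM on 4 GPUs", CATALOG))  # ['vllm']
```

The same intuition explains the troubleshooting advice further down: a framework you never name simply never matches.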

Cost and limits

Execution cost

API quota
None for the skill itself
Tokens per call
Heavy — SKILL.md plus references can load 5-10k tokens per sub-skill
Monetary
Free — skills are local files; you pay only for the compute you run
Tip
Scope prompts to one sub-skill at a time; the full 87-skill library is too large to load at once.

Security

Permissions, secrets, scope

Credential storage: No credentials — the skill is just prompts and references
Data egress: None from the skill itself

Troubleshooting

Common errors and fixes

Claude blends two frameworks (e.g. Accelerate + DeepSpeed) incorrectly

Name one skill in the prompt and tell Claude to ignore the other.

References feel outdated

The repo is text-heavy and may lag the bleeding edge. For brand-new frameworks, supplement with a fresh fetch of the official docs.

Skill not auto-invoking on a relevant prompt

Mention the framework by name — 87 skills overlap and auto-routing is fuzzy.

Alternatives

AI-Research-SKILLs vs. others

Alternative · When to use it · Trade-off
scientific-agent-skill · You need biology/chemistry/clinical, not ML training infra · Different domain focus
huggingface MCP · You want live HF Hub operations rather than expert prompts · The MCP gives you real API actions; this skill teaches the patterns

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse the 400+ MCP servers and Skills