
AI-Research-SKILLs

by Orchestra-Research · Orchestra-Research/AI-Research-SKILLs

87 ML-research skills covering training, fine-tuning, distributed systems, inference, and paper writing — Claude becomes a credible ML infra collaborator.

A curated library of Agent Skills for AI research and engineering. Each skill (vLLM, DeepSpeed, Axolotl, TRL, Flash Attention, Unsloth, LLaMA-Factory, etc.) ships a SKILL.md with 50-150 line quick-refs plus 300KB+ of primary references. An autoresearch orchestrator skill routes between them for end-to-end experimentation.

Why use it

Key features

Live demo

What it looks like in practice


Install

Choose your client

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project-level config takes precedence over the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in Cline's sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "ai-research-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ]
    }
  ]
}

Continue uses an array of server objects instead of a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "ai-research-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/Orchestra-Research/AI-Research-SKILLs",
          "~/.claude/skills/AI-Research-SKILLs"
        ]
      }
    }
  }
}

Add it under context_servers. Zed hot-reloads on save.

claude mcp add ai-research-skill -- git clone https://github.com/Orchestra-Research/AI-Research-SKILLs ~/.claude/skills/AI-Research-SKILLs

A single command. Verify with claude mcp list. Remove with claude mcp remove.

Use cases

Real-world uses: AI-Research-SKILLs

Fine-tune a Llama model with Unsloth + LoRA without Googling

👤 ML engineers running fine-tunes on a single GPU box · ⏱ ~20 min · intermediate

When to use it: You have a dataset and want Claude to produce a runnable training script with correct flags.

Prerequisites
  • Unsloth installed — pip install unsloth — the skill will remind you of the nightly build caveats
Workflow
  1. Describe the target run
    Use the unsloth skill. Fine-tune Llama-3-8B on my dataset at ~/data/train.jsonl, QLoRA, 3 epochs, output to ./out.
    → Claude writes a script with correct Unsloth imports and real CLI flags
  2. Ask for the config explanation
    Walk me through each hyperparameter and why.
    → Reasoning grounded in Unsloth's docs, not generic ML clichés

Result: A training script that runs on the first try.

Common pitfalls
  • Claude mixes HF Trainer and Unsloth idioms — Insist on 'use the unsloth skill only' — don't pull from Transformers skill
Combine with: filesystem
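
Before burning GPU time on step 1, it can help to sanity-check the training JSONL yourself. A minimal sketch — the `instruction`/`output` field names are an assumption about your dataset schema (Unsloth and TRL accept several chat formats, so adjust `required_keys` to match yours):

```python
import json

def check_jsonl(path, required_keys=("instruction", "output")):
    """Report lines that are not JSON objects carrying the expected keys."""
    bad = []
    with open(path, encoding="utf-8") as fh:
        for i, line in enumerate(fh, start=1):
            if not line.strip():
                continue  # tolerate trailing blank lines
            try:
                row = json.loads(line)
            except json.JSONDecodeError:
                bad.append((i, "not valid JSON"))
                continue
            missing = [k for k in required_keys if k not in row]
            if missing:
                bad.append((i, f"missing keys: {missing}"))
    return bad  # empty list means the file looks trainable
```

An empty return means every record parsed and carried the expected keys; anything else points at the offending line numbers before the run starts.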

Set up a vLLM server with correct tensor-parallelism for your GPUs

👤 Infra engineers serving LLMs in production · ⏱ ~30 min · advanced

When to use it: You have 2-8 GPUs and want Claude to pick the right --tensor-parallel-size and --max-model-len.

Workflow
  1. State the hardware and model
    Use the vllm skill. Serve Qwen2.5-72B on 4x H100s. Give me the exact launch command and sanity tests.
    → Correct TP size and quantization recommendation
  2. Ask for load-test script
    Now give me a locust or vllm-benchmark script to verify throughput.
    → Runnable benchmark using the right endpoint format

Result: A vLLM deployment with sanity checks and a benchmark baseline.

Common pitfalls
  • Claude picks a TP size that doesn't divide the attention heads — The vLLM skill references list valid TP sizes per model family — have Claude cite that
Combine with: aws
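
The pitfall above is mechanical enough to check yourself: tensor parallelism shards the attention heads across GPUs, so the TP size must divide the head count evenly. A minimal sketch — the 64-head figure is an illustrative assumption; read the real value from the model's config.json before trusting it:

```python
def valid_tp_sizes(num_attention_heads, num_gpus):
    """TP sizes that evenly divide the attention heads, up to the GPU count."""
    return [tp for tp in range(1, num_gpus + 1) if num_attention_heads % tp == 0]

# e.g. a 64-head model on a 4-GPU node:
print(valid_tp_sizes(64, 4))  # [1, 2, 4]
```

If the TP size you wanted is missing from the list, pick the largest valid one and make up the difference with pipeline parallelism or quantization.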

Draft an ML paper with correct structure and citation conventions

👤 Researchers writing for NeurIPS/ICML/ICLR · ⏱ ~60 min · advanced

When to use it: You have experiment results and need a paper outline that matches the venue's style.

Workflow
  1. Provide results and venue
    Use the ml-paper-writing skill. Target ICLR. Here are my results [paste]. Draft the intro, method, and experiments sections.
    → Structure follows venue conventions, ablation tables properly framed
  2. Revise for reviewer concerns
    What would Reviewer 2 push back on? Add preemptive responses.
    → Concrete weaknesses, not generic 'add more experiments'

Result: A draft that passes initial sanity review and is worth polishing.

Common pitfalls
  • Claude over-claims in the abstract — Explicitly tell it to mirror the restraint of top-tier accepted papers
Combine with: arxiv

Combos

Pair it with other MCPs for 10x results

ai-research-skill + filesystem

Store training configs, logs, and checkpoints while Claude drives experiments

Save the run config under experiments/<date>-<name>/ and tail the logs.
ai-research-skill + arxiv

Pull related work while the ml-paper-writing skill drafts sections

Find 5 recent arXiv papers on GRPO and have the paper-writing skill weave them into the related work section.

Tools

What this MCP exposes

| Tool | Inputs | When to call | Cost |
| --- | --- | --- | --- |
| Autoresearch orchestrator | research goal | Complex multi-phase experiments | 0 |
| Fine-tuning (Axolotl, LLaMA-Factory, Unsloth, PEFT) | dataset + base model | SFT / LoRA / QLoRA runs | 0 |
| Post-training (TRL, GRPO, OpenRLHF, SimPO, verl, slime) | reward model or preference data | Alignment, preference optimization | 0 |
| Distributed training (DeepSpeed, FSDP, Megatron, Accelerate) | model + cluster topology | Multi-node or multi-GPU training | 0 |
| Inference (vLLM, SGLang, TensorRT-LLM, llama.cpp) | model + hardware | Serving a model efficiently | 0 |
| Optimization (Flash Attention, bitsandbytes, GPTQ, AWQ, GGUF) | model weights | Fitting a model on smaller hardware | 0 |

Cost and limits

What it costs to run

API quota
None for the skill itself
Tokens per call
Heavy — SKILL.md + references can load 5-10k tokens per sub-skill
Monetary
Free — skills are local files; you pay for compute you run
Tip
Scope prompts to one sub-skill at a time; the full 87-skill library is too big to load simultaneously.
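
To see which sub-skills are cheap to load before scoping a prompt, a rough character count over a skill's folder gives a ballpark. A sketch under two assumptions: the common ~4-characters-per-token heuristic for English prose, and the repo's layout of per-skill folders holding SKILL.md plus markdown/text references:

```python
import os

def estimate_skill_tokens(skill_dir, chars_per_token=4):
    """Ballpark the context cost of loading one skill's text files."""
    total_chars = 0
    for root, _dirs, files in os.walk(skill_dir):
        for name in files:
            if name.endswith((".md", ".txt")):
                path = os.path.join(root, name)
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    total_chars += len(fh.read())
    return total_chars // chars_per_token
```

Run it over `~/.claude/skills/AI-Research-SKILLs/<skill>` to decide whether a sub-skill fits comfortably in your context budget.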

Security

Permissions, secrets, scope

Credential storage: No credentials — the skill is prompts plus reference files
Data egress: None from the skill itself

Troubleshooting

Common errors and fixes

Claude blends two frameworks (e.g. Accelerate + DeepSpeed) incorrectly

Name one skill in the prompt and tell Claude to ignore the other.

References feel outdated

The repo's bundled references are static text and may lag the bleeding edge. For brand-new frameworks, supplement with a fresh fetch of the official docs.

Skill not auto-invoking on a relevant prompt

Mention the framework by name — 87 skills overlap and auto-routing is fuzzy.

Alternatives

AI-Research-SKILLs vs others

| Alternative | When to use it | Trade-off |
| --- | --- | --- |
| scientific-agent-skill | You need biology/chemistry/clinical, not ML training infra | Different domain focus |
| huggingface MCP | You want live HF Hub operations rather than expert prompts | The MCP gives you real API actions; this skill teaches the patterns |

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and Skills