
AI-Research-SKILLs

by Orchestra-Research · Orchestra-Research/AI-Research-SKILLs

87 ML-research skills covering training, fine-tuning, distributed systems, inference, and paper writing — Claude becomes a credible ML infra collaborator.

A curated library of Agent Skills for AI research and engineering. Each skill (vLLM, DeepSpeed, Axolotl, TRL, Flash Attention, Unsloth, LLaMA-Factory, etc.) ships a SKILL.md with 50-150 line quick-refs plus 300KB+ of primary references. An autoresearch orchestrator skill routes between them for end-to-end experimentation.

Why use it

Key features

Live demo

What it looks like in practice


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config overrides the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "ai-research-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ]
    }
  ]
}

Continue uses an array of server objects instead of a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "ai-research-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/Orchestra-Research/AI-Research-SKILLs",
          "~/.claude/skills/AI-Research-SKILLs"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads automatically on save.

claude mcp add ai-research-skill -- git clone https://github.com/Orchestra-Research/AI-Research-SKILLs ~/.claude/skills/AI-Research-SKILLs

One line. Verify with claude mcp list. Remove with claude mcp remove.

Use cases

Real-world uses: AI-Research-SKILLs

Fine-tune a Llama model with Unsloth + LoRA without Googling

👤 ML engineers running fine-tunes on a single-GPU box · ⏱ ~20 min · intermediate

When to use: You have a dataset and want Claude to produce a runnable training script with correct flags.

Prerequisites
  • Unsloth installed — pip install unsloth — the skill will remind you of the nightly-build caveats
Flow
  1. Describe the target run
    Use the unsloth skill. Fine-tune Llama-3-8B on my dataset at ~/data/train.jsonl, QLoRA, 3 epochs, output to ./out.
    → Claude writes a script with correct Unsloth imports and real CLI flags
  2. Ask for the config explanation
    Walk me through each hyperparameter and why it was chosen.
    → Reasoning grounded in Unsloth's docs, not generic ML clichés

Result: A training script that runs on the first try.

Pitfalls
  • Claude mixes HF Trainer and Unsloth idioms — insist on "use the unsloth skill only"; don't pull from the Transformers skill
Combine with: filesystem
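The hyperparameter review in step 2 can be approximated offline. The sketch below is illustrative only — the values, the accepted ranges, and the check logic are assumptions for a generic QLoRA run, not Unsloth defaults:

```python
# Hypothetical QLoRA hyperparameters for a Llama-3-8B fine-tune
# (illustrative values, not Unsloth's defaults).
qlora_config = {
    "base_model": "meta-llama/Meta-Llama-3-8B",
    "load_in_4bit": True,       # QLoRA keeps base weights in 4-bit
    "lora_r": 16,               # adapter rank
    "lora_alpha": 16,           # scaling; effective scale = alpha / r
    "learning_rate": 2e-4,      # commonly cited LoRA range: ~1e-4 to 3e-4
    "num_train_epochs": 3,
    "output_dir": "./out",
}

def sanity_check(cfg: dict) -> list[str]:
    """Flag the config mistakes a review step should catch (assumed ranges)."""
    problems = []
    if not cfg["load_in_4bit"]:
        problems.append("QLoRA requested but base model is not 4-bit")
    if not 1e-5 <= cfg["learning_rate"] <= 5e-4:
        problems.append("learning rate outside the usual LoRA range")
    if cfg["lora_alpha"] / cfg["lora_r"] > 4:
        problems.append("alpha/r scale unusually high")
    return problems

print(sanity_check(qlora_config))  # [] — this config passes the checks
```

Running any generated config through a check like this before launching a multi-hour job is cheap insurance.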

Set up a vLLM server with correct tensor-parallelism for your GPUs

👤 Infra engineers serving LLMs in production · ⏱ ~30 min · advanced

When to use: You have 2-8 GPUs and want Claude to pick the right --tensor-parallel-size and --max-model-len.

Flow
  1. State the hardware and model
    Use the vllm skill. Serve Qwen2.5-72B on 4x H100s. Give me the exact launch command and sanity tests.
    → Correct TP size and quantization recommendation
  2. Ask for a load-test script
    Now give me a locust or vllm-benchmark script to verify throughput.
    → Runnable benchmark using the right endpoint format

Result: A vLLM deployment with sanity checks and a benchmark baseline.

Pitfalls
  • Claude picks a TP size that doesn't divide the attention heads — the vLLM skill references list valid TP sizes per model family; have Claude cite them
Combine with: aws
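The TP-divisibility pitfall is mechanical, so it can be pre-checked before asking for a launch command. A minimal sketch — the 64-head figure for Qwen2.5-72B is an assumption; confirm it against the model's config.json:

```python
def valid_tp_sizes(num_attention_heads: int, max_gpus: int) -> list[int]:
    """Tensor-parallel sizes that evenly divide the attention heads."""
    return [tp for tp in range(1, max_gpus + 1)
            if num_attention_heads % tp == 0]

# Assumed: 64 attention heads for Qwen2.5-72B; 8 GPUs available at most.
print(valid_tp_sizes(64, max_gpus=8))  # [1, 2, 4, 8]
```

With 4x H100s, TP=4 is in the valid set; a model with 40 heads, by contrast, would reject TP=8 on this check alone.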

Draft an ML paper with correct structure and citation conventions

👤 Researchers writing for NeurIPS/ICML/ICLR · ⏱ ~60 min · advanced

When to use: You have experiment results and need a paper outline that matches the venue's style.

Flow
  1. Provide results and venue
    Use the ml-paper-writing skill. Target ICLR. Here are my results [paste]. Draft the intro, method, and experiments sections.
    → Structure follows venue conventions, ablation tables properly framed
  2. Revise for reviewer concerns
    What would Reviewer 2 push back on? Add preemptive responses.
    → Concrete weaknesses, not a generic "add more experiments"

Result: A draft that passes initial sanity review and is worth polishing.

Pitfalls
  • Claude over-claims in the abstract — explicitly tell it to mirror the restraint of top-tier accepted papers
Combine with: arxiv

Combinations

Combine with other MCPs for 10× leverage

ai-research-skill + filesystem

Store training configs, logs, and checkpoints while Claude drives experiments

Save the run config under experiments/<date>-<name>/ and tail the logs.
ai-research-skill + arxiv

Pull related work while the ml-paper-writing skill drafts sections

Find 5 recent arXiv papers on GRPO and have the paper-writing skill weave them into the related-work section.

Tools

What this MCP exposes

Tool · Inputs · When to call · Cost
Autoresearch orchestrator · research goal · Complex multi-phase experiments · 0
Fine-tuning (Axolotl, LLaMA-Factory, Unsloth, PEFT) · dataset + base model · SFT / LoRA / QLoRA runs · 0
Post-training (TRL, GRPO, OpenRLHF, SimPO, verl, slime) · reward model or preference data · Alignment, preference optimization · 0
Distributed training (DeepSpeed, FSDP, Megatron, Accelerate) · model + cluster topology · Multi-node or multi-GPU training · 0
Inference (vLLM, SGLang, TensorRT-LLM, llama.cpp) · model + hardware · Serving a model efficiently · 0
Optimization (Flash Attention, bitsandbytes, GPTQ, AWQ, GGUF) · model weights · Fitting a model on smaller hardware · 0

Cost and limits

What it costs to run

API quota
None for the skill itself
Tokens per call
Heavy — SKILL.md + references can load 5-10k tokens per sub-skill
Monetary
Free — skills are local files; you pay only for the compute you run
Tip
Scope prompts to one sub-skill at a time; the full 87-skill library is too big to load simultaneously.
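The tip follows directly from the numbers above. A back-of-envelope check, assuming the stated 5-10k tokens per sub-skill and a hypothetical 200k-token context window:

```python
# Assumed numbers: 5-10k tokens per sub-skill (midpoint used),
# and a 200k-token context window.
tokens_per_skill = 7_500
num_skills = 87
context_window = 200_000

full_library = tokens_per_skill * num_skills
print(full_library)                   # 652500 tokens for all 87 skills
print(full_library > context_window)  # True — load one sub-skill at a time
```

Even at the optimistic 5k-per-skill end, the full library is more than double a 200k window, so per-task scoping is structural, not just a cost optimization.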

Security

Permissions, secrets, scope

Credential storage: No credentials — the skill is prompts + references
Data egress: None from the skill itself

Troubleshooting

Common errors and fixes

Claude blends two frameworks (e.g. Accelerate + DeepSpeed) incorrectly

Name one skill in the prompt and tell Claude to ignore the other.

References feel outdated

The repo is TeX-heavy and may lag the bleeding edge. For brand-new frameworks, supplement with a fresh fetch of the official docs.

Skill not auto-invoking on a relevant prompt

Mention the framework by name — 87 skills overlap and auto-routing is fuzzy.

Alternatives

AI-Research-SKILLs vs. others

Alternative · When to use · Trade-off
scientific-agent-skill · You need biology/chemistry/clinical, not ML training infra · Different domain focus
huggingface MCP · You want live HF Hub operations rather than expert prompts · MCP gives you real API actions; this skill teaches the patterns

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and Skills