
AI-Research-SKILLs

by Orchestra-Research · Orchestra-Research/AI-Research-SKILLs

87 ML-research skills covering training, fine-tuning, distributed systems, inference, and paper writing — Claude becomes a credible ML infra collaborator.

A curated library of Agent Skills for AI research and engineering. Each skill (vLLM, DeepSpeed, Axolotl, TRL, Flash Attention, Unsloth, LLaMA-Factory, etc.) ships a SKILL.md with 50-150 line quick-refs plus 300KB+ of primary references. An autoresearch orchestrator skill routes between them for end-to-end experimentation.
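Since each skill ships its own SKILL.md, a cloned copy of the library can be enumerated with a short script. A minimal sketch; the helper name and the throwaway demo layout are illustrative, not part of the repo:

```python
from pathlib import Path
import tempfile

def list_skills(root):
    # Each skill directory ships a SKILL.md; collect the directory names.
    return sorted(p.parent.name for p in Path(root).rglob("SKILL.md"))

# Demo against a throwaway layout mimicking the repo structure
demo = Path(tempfile.mkdtemp())
for name in ("vllm", "unsloth", "deepspeed"):
    (demo / name).mkdir()
    (demo / name / "SKILL.md").write_text("# quick-ref\n")

print(list_skills(demo))  # ['deepspeed', 'unsloth', 'vllm']
```

Pointing the same helper at `~/.claude/skills/AI-Research-SKILLs` after cloning shows which of the 87 skills are available locally.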


Installation

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project config takes precedence over the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "ai-research-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "ai-research-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Orchestra-Research/AI-Research-SKILLs",
        "~/.claude/skills/AI-Research-SKILLs"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.
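The difference is mechanical. A minimal sketch (the helper name is hypothetical) that converts the map-style mcpServers block used by Claude Desktop into Continue's array form:

```python
def to_continue_format(mcp_servers):
    # Claude Desktop / Cursor style: {"name": {...spec...}}
    # Continue style: [{"name": ..., ...spec...}]
    return [{"name": name, **spec} for name, spec in mcp_servers.items()]

desktop_style = {
    "ai-research-skill": {
        "command": "git",
        "args": [
            "clone",
            "https://github.com/Orchestra-Research/AI-Research-SKILLs",
            "~/.claude/skills/AI-Research-SKILLs",
        ],
    }
}
print(to_continue_format(desktop_style))
```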

~/.config/zed/settings.json
{
  "context_servers": {
    "ai-research-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/Orchestra-Research/AI-Research-SKILLs",
          "~/.claude/skills/AI-Research-SKILLs"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads automatically.

claude mcp add ai-research-skill -- git clone https://github.com/Orchestra-Research/AI-Research-SKILLs ~/.claude/skills/AI-Research-SKILLs

A single command. Verify: claude mcp list. Remove: claude mcp remove.

Use cases

Real-world scenarios: AI-Research-SKILLs

Fine-tune a Llama model with Unsloth + LoRA without Googling

👤 ML engineers running fine-tunes on a single-GPU box · ⏱ ~20 min · intermediate

When to use: You have a dataset and want Claude to produce a runnable training script with correct flags.

Prerequisites
  • Unsloth installed — pip install unsloth — the skill will remind you of the nightly build caveats
Flow
  1. Describe the target run
    Use the unsloth skill. Fine-tune Llama-3-8B on my dataset at ~/data/train.jsonl, QLoRA, 3 epochs, output to ./out.
    → Claude writes a script with correct Unsloth imports and real CLI flags
  2. Ask for the config explanation
    Walk me through each hyperparameter and why.
    → Reasoning grounded in Unsloth's docs, not generic ML clichés

Outcome: A training script that runs on the first try.

Pitfalls
  • Claude mixes HF Trainer and Unsloth idioms. Insist on "use the unsloth skill only" so it doesn't pull from the Transformers skill.
Combine with: filesystem
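Before launching a run like the one above, it helps to sanity-check the JSONL dataset. A minimal sketch, assuming instruction/output record keys (adjust `required_keys` to your actual schema):

```python
import json

def check_jsonl(lines, required_keys=("instruction", "output")):
    """Yield (line_number, problem) for records that won't load cleanly."""
    for i, line in enumerate(lines, 1):
        line = line.strip()
        if not line:
            continue  # blank lines are harmless
        try:
            rec = json.loads(line)
        except json.JSONDecodeError as e:
            yield i, f"invalid JSON: {e}"
            continue
        missing = [k for k in required_keys if k not in rec]
        if missing:
            yield i, f"missing keys: {missing}"

sample = [
    '{"instruction": "Summarize", "output": "ok"}',
    '{"instruction": "No output field"}',
    'not json at all',
]
print(list(check_jsonl(sample)))
```

Running this over `~/data/train.jsonl` before the fine-tune catches malformed records that would otherwise fail mid-epoch.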

Set up a vLLM server with correct tensor-parallelism for your GPUs

👤 Infra engineers serving LLMs in production · ⏱ ~30 min · advanced

When to use: You have 2-8 GPUs and want Claude to pick the right --tensor-parallel-size and --max-model-len.

Flow
  1. State the hardware and model
    Use the vllm skill. Serve Qwen2.5-72B on 4x H100s. Give me the exact launch command and sanity tests.
    → Correct TP size and quantization recommendation
  2. Ask for load-test script
    Now give me a locust or vllm-benchmark script to verify throughput.
    → Runnable benchmark using the right endpoint format

Outcome: A vLLM deployment with sanity checks and a benchmark baseline.

Pitfalls
  • Claude picks a TP size that doesn't divide the attention heads. The vLLM skill's references list valid TP sizes per model family; have Claude cite them.
Combine with: aws
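The divisibility constraint behind that pitfall is easy to check yourself. A minimal sketch, assuming Qwen2.5-72B's 64 query attention heads (confirm against the model's config.json):

```python
def valid_tp_sizes(num_attention_heads, max_gpus):
    # vLLM requires the tensor-parallel size to evenly divide
    # the model's attention head count.
    return [tp for tp in range(1, max_gpus + 1)
            if num_attention_heads % tp == 0]

# Qwen2.5-72B: 64 query heads, so on up to 8 GPUs the valid sizes are:
print(valid_tp_sizes(64, 8))  # [1, 2, 4, 8]
```

On the 4x H100 setup from the flow above, --tensor-parallel-size 4 is a valid choice; 3, 5, 6, or 7 would be rejected.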

Draft an ML paper with correct structure and citation conventions

👤 Researchers writing for NeurIPS/ICML/ICLR · ⏱ ~60 min · advanced

When to use: You have experiment results and need a paper outline that matches the venue's style.

Flow
  1. Provide results and venue
    Use the ml-paper-writing skill. Target ICLR. Here are my results [paste]. Draft the intro, method, and experiments sections.
    → Structure follows venue conventions, ablation tables properly framed
  2. Revise for reviewer concerns
    What would Reviewer 2 push back on? Add preemptive responses.
    → Concrete weaknesses, not generic 'add more experiments'

Outcome: A draft that passes initial sanity review and is worth polishing.

Pitfalls
  • Claude over-claims in the abstract. Explicitly tell it to mirror the restraint of top-tier accepted papers.
Combine with: arxiv

Combos

Combine with other MCPs for a 10x effect

ai-research-skill + filesystem

Store training configs, logs, and checkpoints while Claude drives experiments

Save the run config under experiments/<date>-<name>/ and tail the logs.
ai-research-skill + arxiv

Pull related work while the ml-paper-writing skill drafts sections

Find 5 recent arXiv papers on GRPO and have the paper-writing skill weave them into the related work section.

Tools

What this MCP provides

Tool | Inputs | When to call | Cost
Autoresearch orchestrator | research goal | Complex multi-phase experiments | 0
Fine-tuning (Axolotl, LLaMA-Factory, Unsloth, PEFT) | dataset + base model | SFT / LoRA / QLoRA runs | 0
Post-training (TRL, GRPO, OpenRLHF, SimPO, verl, slime) | reward model or preference data | Alignment, preference optimization | 0
Distributed training (DeepSpeed, FSDP, Megatron, Accelerate) | model + cluster topology | Multi-node or multi-GPU training | 0
Inference (vLLM, SGLang, TensorRT-LLM, llama.cpp) | model + hardware | Serving a model efficiently | 0
Optimization (Flash Attention, bitsandbytes, GPTQ, AWQ, GGUF) | model weights | Fitting a model on smaller hardware | 0

Cost and limits

What it costs

API quota
None for the skill itself
Tokens per call
Heavy — SKILL.md + references can load 5-10k tokens per sub-skill
Money
Free — skills are local files; you pay for compute you run
Tip
Scope prompts to one sub-skill at a time; the full 87-skill library is too big to load simultaneously.
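To see why scoping matters, a back-of-the-envelope token estimate using the crude ~4-characters-per-token heuristic (real tokenizers vary; the sizes below are illustrative):

```python
def rough_token_count(text, chars_per_token=4):
    # Crude heuristic: English prose averages roughly 4 characters per token.
    return len(text) // chars_per_token

# A 120-line SKILL.md at ~60 chars/line, plus ~20KB of pulled references
skill_md = "x" * (120 * 60)
references = "x" * 20_000
print(rough_token_count(skill_md + references))  # 6800
```

One sub-skill already lands in the 5-10k-token range quoted above; multiply by 87 and the full library clearly cannot be loaded at once.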

Security

Permissions, secrets, blast radius

Credential storage: No credentials; the skill is prompts plus references.
Outbound traffic: None from the skill itself.

Troubleshooting

Common errors and fixes

Claude blends two frameworks (e.g. Accelerate + DeepSpeed) incorrectly

Name one skill in the prompt and tell Claude to ignore the other.

References feel outdated

The repo is text-heavy and may lag the bleeding edge. For brand-new frameworks, supplement with a fresh fetch of the official docs.

Skill not auto-invoking on a relevant prompt

Mention the framework by name — 87 skills overlap and auto-routing is fuzzy.

Alternatives

AI-Research-SKILLs compared

Alternative | When to use | Trade-off
scientific-agent-skill | You need biology/chemistry/clinical, not ML training infra | Different domain focus
huggingface MCP | You want live HF Hub operations rather than expert prompts | MCP gives you real API actions; this skill teaches the patterns

More

Resources

📖 Read the official README on GitHub

🐙 Open issues

🔍 All 400+ MCP servers and Skills