
data-engineering-skills

by AltimateAI · AltimateAI/data-engineering-skills

9 Claude Code skills for analytics engineering: 7 dbt workflows + 2 Snowflake query optimizers. 53% pass on real dbt tasks, 84% on Snowflake tuning.

Skills for the daily grind of analytics engineering. dbt skills cover creating, debugging, testing, documenting, migrating, refactoring, and incremental models. Snowflake skills find expensive queries and optimize either by text or by query_id. Philosophy: 'Read before you write. Build after you write. Verify your output.'

Why use it

Key features

Live demo

What it looks like in practice


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "data-engineering-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/AltimateAI/data-engineering-skills",
        "~/.claude/skills/data-engineering-skills"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "data-engineering-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/AltimateAI/data-engineering-skills",
        "~/.claude/skills/data-engineering-skills"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config overrides global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "data-engineering-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/AltimateAI/data-engineering-skills",
        "~/.claude/skills/data-engineering-skills"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in Cline's sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "data-engineering-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/AltimateAI/data-engineering-skills",
        "~/.claude/skills/data-engineering-skills"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "data-engineering-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/AltimateAI/data-engineering-skills",
        "~/.claude/skills/data-engineering-skills"
      ]
    }
  ]
}

Continue uses an array of server objects instead of a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "data-engineering-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/AltimateAI/data-engineering-skills",
          "~/.claude/skills/data-engineering-skills"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads automatically on save.

claude mcp add data-engineering-skill -- git clone https://github.com/AltimateAI/data-engineering-skills ~/.claude/skills/data-engineering-skills

One line. Verify with claude mcp list. Remove with claude mcp remove.

Use cases

Real-world uses: data-engineering-skills

Debug a failing dbt model without thrashing

👤 Analytics engineers facing a red CI run ⏱ ~20 min intermediate

When to use: dbt run just failed with a cryptic error and you don't know if it's schema, lineage, or SQL.

Prerequisites
  • dbt project accessible — cd into your dbt repo so Claude can see models/
  • Skill installed — git clone https://github.com/AltimateAI/data-engineering-skills ~/.claude/skills/data-engineering-skills
Flow
  1. Feed Claude the error + model
    Use debugging-dbt-errors. Here's the stderr and models/marts/fct_orders.sql. Diagnose the root cause — don't guess.
    → Claude reads upstream refs, diagnoses in order: schema → lineage → SQL
  2. Apply the fix and verify
    Apply the fix and run dbt build --select fct_orders+. Show me the before/after row counts.
    → Clean run + row count verification

Result: Green CI plus a note of the root cause so it doesn't recur.

Pitfalls
  • Fixing a symptom downstream when the bug is upstream — The skill enforces an upstream-first diagnosis; don't skip the lineage step
Combine with: bigquery-server · github

Find and fix your top expensive Snowflake queries

👤 Analytics leads with a climbing Snowflake bill ⏱ ~60 min intermediate

When to use: Finance flagged the Snowflake bill and you need to cut it without breaking dashboards.

Prerequisites
  • Snowflake role with ACCOUNT_USAGE access — ACCOUNTADMIN typically, or a dedicated cost role
Flow
  1. Identify worst offenders
    Use finding-expensive-queries to list the top 20 queries in the past 30 days by credit cost. Group by app/user.
    → Ranked table with credits, runtime, warehouse
  2. Optimize each top one
    For the top offender, use optimizing-query-by-id <query_id>. Propose rewrites with estimated savings.
    → Rewritten SQL + before/after explain plan
  3. Validate and deploy
    Run the rewrite in a test warehouse — confirm same row count and shape before we swap.
    → Safe swap candidate

Result: A prioritized list of fixes with measurable $ savings.

Pitfalls
  • Rewrites change row count silently — Always diff before deploying — the skill enforces this
Combine with: bigquery-server
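
Under the hood, step 1 of this flow boils down to a query over ACCOUNT_USAGE. A minimal sketch, using elapsed time and bytes scanned as a cost proxy (exact per-query credit attribution would need warehouse metering data joined in):

```sql
-- Top 20 heaviest queries over the past 30 days, ranked by runtime.
-- Cost proxy only: true credit cost depends on warehouse size and
-- concurrency, which live in WAREHOUSE_METERING_HISTORY.
select
    query_id,
    user_name,
    warehouse_name,
    total_elapsed_time / 1000 as elapsed_s,
    bytes_scanned,
    partitions_scanned
from snowflake.account_usage.query_history
where start_time >= dateadd('day', -30, current_timestamp())
  and warehouse_name is not null      -- skip metadata-only queries
order by total_elapsed_time desc
limit 20;
```

The query_id values in the result feed directly into optimizing-query-by-id in step 2.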

Migrate a pile of stored procs into dbt models

👤 Teams moving off legacy SQL to dbt ⏱ ~90 min advanced

When to use: You've inherited a warehouse of nested CTEs and want them as documented, tested dbt models.

Flow
  1. Point the skill at the source SQL
    Use migrating-sql-to-dbt. Here's proc_monthly_revenue.sql. Convert it to dbt models with refs, documentation, and at least 2 tests per model.
    → One or more .sql files, schema.yml with docs and tests
  2. Build and verify
    dbt build the new models and compare row counts to the legacy output.
    → Row counts match within tolerance

Result: Legacy logic lives as testable dbt models.

Pitfalls
  • Hidden side effects in the proc (UPDATEs) — The skill flags side effects — separate them out, don't blindly convert
Combine with: github
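
The output of step 1 looks roughly like this hypothetical select-only dbt model distilled from the proc (model and column names are illustrative, not from the repo):

```sql
-- models/marts/fct_monthly_revenue.sql
-- Hypothetical replacement for proc_monthly_revenue.sql. Any UPDATE or
-- DELETE side effects in the original proc must be split out separately,
-- not silently folded into this select.
select
    date_trunc('month', order_date) as revenue_month,
    sum(amount)                     as revenue
from {{ ref('stg_orders') }}
group by 1
```

Each model like this also gets a schema.yml entry with docs and at least two tests (e.g. not_null and unique on revenue_month), per the prompt in step 1.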

Convert a slow full-refresh model to incremental

👤 Analytics engineers with long-running dbt runs ⏱ ~45 min advanced

When to use: A daily model has grown too big for full refresh.

Flow
  1. Analyze the model
    Use developing-incremental-models on models/events.sql. Pick a strategy (merge / insert_overwrite / delete+insert) and justify.
    → Strategy + unique_key + partition / cluster keys recommended
  2. Implement and back-fill
    Apply the incremental config; outline a safe back-fill plan.
    → Model + back-fill steps

Result: Daily runs that finish in minutes, not hours.

Pitfalls
  • Late-arriving data can duplicate rows on unique_key — use the merge strategy and test for it
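
A minimal sketch of what the recommended config can look like, assuming the merge strategy keyed on event_id (all names here are illustrative, not from the repo):

```sql
-- models/events.sql (hypothetical). merge re-writes matching unique_key
-- rows, so late-arriving duplicates are absorbed instead of doubled.
{{ config(
    materialized='incremental',
    incremental_strategy='merge',
    unique_key='event_id',
    cluster_by=['event_date']
) }}

select
    event_id,
    event_ts,
    cast(event_ts as date) as event_date,
    payload
from {{ source('raw', 'events') }}

{% if is_incremental() %}
  -- 3-day lookback: late rows inside the window get re-merged
  where event_ts > (select dateadd('day', -3, max(event_ts)) from {{ this }})
{% endif %}
```

The back-fill plan in step 2 then amounts to an initial dbt run --full-refresh followed by normal incremental runs.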

Combinations

Combine with other MCPs for 10× leverage

data-engineering-skill + bigquery-server

Apply the same optimize-by-id pattern to BigQuery expensive queries

Adapt finding-expensive-queries for BigQuery INFORMATION_SCHEMA.JOBS and list top 20.
data-engineering-skill + github

Open a PR per migrated model so each is reviewable

For every migrated model, open a GitHub PR with dbt test output attached.

Tools

What this MCP exposes

Tool · Inputs · When to call · Cost
creating-dbt-models · model spec · New model · 0
debugging-dbt-errors · error log, model · CI or local run failed · 0
testing-dbt-models · model · Untested model · 0
documenting-dbt-models · model · Undocumented model · 0
migrating-sql-to-dbt · legacy SQL · Legacy migration · 0
refactoring-dbt-models · model · Hard-to-read model · 0
developing-incremental-models · full-refresh model · Runtime too long · 0
finding-expensive-queries · lookback window · Cost hunt · ACCOUNT_USAGE query
optimizing-query-text · SQL text · Know the SQL, not the id · 0
optimizing-query-by-id · query_id · Have the id from the UI · 1 explain

Cost and limits

What it costs to run

API quota
Snowflake queries cost credits like any other — ACCOUNT_USAGE reads are cheap
Tokens per call
5–15k per dbt skill invocation
Monetary
Free skill
Tip
Run finding-expensive-queries once weekly, not on every session

Security

Permissions, secrets, scope

Minimum scopes: dbt: read + write to your project; Snowflake: ACCOUNT_USAGE for cost skills
Credential storage: dbt profiles.yml / Snowflake key pair in env; the skill doesn't store secrets
Data egress: none from the skill directly
Never grant: SYSADMIN to the Claude session unless absolutely needed

Troubleshooting

Common errors and fixes

dbt compile succeeds, run fails with column not found

Stale lineage — dbt deps + dbt clean + dbt build --select model+

finding-expensive-queries returns nothing

ACCOUNT_USAGE has up to ~45 min of latency; also confirm the role has access to SNOWFLAKE.ACCOUNT_USAGE

Verify: SHOW GRANTS TO ROLE <role>
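
If the role turns out to lack access, the standard fix is the imported-privileges grant on the shared snowflake database, which is how ACCOUNT_USAGE is exposed (the role name below is hypothetical):

```sql
-- Run as ACCOUNTADMIN; "cost_analyst" is an illustrative role name
grant imported privileges on database snowflake to role cost_analyst;
```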

Alternatives

data-engineering-skills vs. others

Alternative · When to use · Trade-off
dbt Cloud IDE · You prefer a managed UI over the terminal · No Claude in the loop
SQL query optimizers (Select.dev, etc.) · You want visual query plans · Separate tool, separate context

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and Skills