
data-engineering-skills

by AltimateAI · AltimateAI/data-engineering-skills

9 Claude Code skills for analytics engineering: 7 dbt workflows + 2 Snowflake query optimizers. 53% pass rate on real dbt tasks, 84% on Snowflake tuning.

Skills for the daily grind of analytics engineering. The dbt skills cover creating, debugging, testing, documenting, migrating, and refactoring models, plus developing incremental models. The Snowflake skills find expensive queries and optimize them either by SQL text or by query_id. Philosophy: 'Read before you write. Build after you write. Verify your output.'


Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "data-engineering-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/AltimateAI/data-engineering-skills",
        "~/.claude/skills/data-engineering-skills"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "data-engineering-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/AltimateAI/data-engineering-skills",
        "~/.claude/skills/data-engineering-skills"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "data-engineering-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/AltimateAI/data-engineering-skills",
        "~/.claude/skills/data-engineering-skills"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "data-engineering-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/AltimateAI/data-engineering-skills",
        "~/.claude/skills/data-engineering-skills"
      ],
      "_inferred": true
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "data-engineering-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/AltimateAI/data-engineering-skills",
        "~/.claude/skills/data-engineering-skills"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "data-engineering-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/AltimateAI/data-engineering-skills",
          "~/.claude/skills/data-engineering-skills"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add data-engineering-skill -- git clone https://github.com/AltimateAI/data-engineering-skills ~/.claude/skills/data-engineering-skills

One-liner. Verify with claude mcp list. Remove with claude mcp remove.

Use Cases

Real-world ways to use data-engineering-skills

Debug a failing dbt model without thrashing

👤 Analytics engineers facing a red CI run · ⏱ ~20 min · intermediate

When to use: dbt run just failed with a cryptic error and you don't know if it's schema, lineage, or SQL.

Prerequisites
  • dbt project accessible — cd into your dbt repo so Claude can see models/
  • Skill installed — git clone https://github.com/AltimateAI/data-engineering-skills ~/.claude/skills/data-engineering-skills
Flow
  1. Feed Claude the error + model
    Use debugging-dbt-errors. Here's the stderr and models/marts/fct_orders.sql. Diagnose the root cause — don't guess.
    → Claude reads upstream refs, diagnoses in order: schema → lineage → SQL
  2. Apply the fix and verify
    Apply the fix and run dbt build --select fct_orders+. Show me the before/after row counts.
    → Clean run + row count verification
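
Step 2's verification can be as simple as a count diff. A minimal sketch, assuming you kept a pre-fix snapshot called fct_orders_backup (both table names hypothetical):

-- Compare row counts before and after the fix
-- (analytics.fct_orders_backup is a hypothetical pre-fix snapshot)
select 'before' as version, count(*) as row_count from analytics.fct_orders_backup
union all
select 'after' as version, count(*) as row_count from analytics.fct_orders;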

Outcome: Green CI plus a note of the root cause so it doesn't recur.

Pitfalls
  • Fixing a symptom downstream when the bug is upstream — The skill enforces an upstream-first diagnosis; don't skip the lineage step
Combine with: bigquery-server · github

Find and fix your top expensive Snowflake queries

👤 Analytics leads with a climbing Snowflake bill · ⏱ ~60 min · intermediate

When to use: Finance flagged the Snowflake bill and you need to cut it without breaking dashboards.

Prerequisites
  • Snowflake role with ACCOUNT_USAGE access — ACCOUNTADMIN typically, or a dedicated cost role
Flow
  1. Identify worst offenders
    Use finding-expensive-queries to list the top 20 queries in the past 30 days by credit cost. Group by app/user.
    → Ranked table with credits, runtime, warehouse (see the SQL sketch after this flow)
  2. Optimize each top one
    For the top offender, use optimizing-query-by-id <query_id>. Propose rewrites with estimated savings.
    → Rewritten SQL + before/after explain plan
  3. Validate and deploy
    Run the rewrite in a test warehouse — confirm same row count and shape before we swap.
    → Safe swap candidate
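
A rough sketch of the kind of ACCOUNT_USAGE query behind step 1. Per-query credit attribution varies by account setup, so this ranks by elapsed time and bytes scanned as a proxy:

-- Top 20 heaviest queries in the last 30 days (proxy ranking, not exact credits)
select query_id,
       user_name,
       warehouse_name,
       total_elapsed_time / 1000 as elapsed_seconds,
       bytes_scanned
from snowflake.account_usage.query_history
where start_time >= dateadd(day, -30, current_timestamp())
order by total_elapsed_time desc
limit 20;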

Outcome: A prioritized list of fixes with measurable $ savings.

Pitfalls
  • Rewrites change row count silently — Always diff before deploying — the skill enforces this
Combine with: bigquery-server

Migrate a pile of stored procs into dbt models

👤 Teams moving off legacy SQL to dbt · ⏱ ~90 min · advanced

When to use: You've inherited a warehouse of nested CTEs and want them as documented, tested dbt models.

Flow
  1. Point the skill at the source SQL
    Use migrating-sql-to-dbt. Here's proc_monthly_revenue.sql. Convert it to dbt models with refs, documentation, and at least 2 tests per model.
    → One or more .sql files, plus schema.yml with docs and tests (see the sketch after this flow)
  2. Build and verify
    dbt build the new models and compare row counts to the legacy output.
    → Row counts match within tolerance
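
For a sense of step 1's output shape, a minimal sketch of a converted model, assuming a hypothetical proc that aggregates monthly revenue (all names invented):

-- models/marts/fct_monthly_revenue.sql (hypothetical converted model)
select
    date_trunc('month', order_date) as revenue_month,
    sum(amount) as revenue
from {{ ref('stg_orders') }}
group by 1

The accompanying schema.yml would carry the docs plus the two required tests, e.g. not_null and unique on revenue_month.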

Outcome: Legacy logic lives as testable dbt models.

Pitfalls
  • Hidden side effects in the proc (UPDATEs) — the skill flags these; separate them out rather than blindly converting
Combine with: github

Convert a slow full-refresh model to incremental

👤 Analytics engineers with long-running dbt runs · ⏱ ~45 min · advanced

When to use: A daily model has grown too big for full refresh.

Flow
  1. Analyze the model
    Use developing-incremental-models on models/events.sql. Pick a strategy (merge / insert_overwrite / delete+insert) and justify it.
    → Recommended strategy + unique_key + partition/cluster keys (see the sketch after this flow)
  2. Implement and back-fill
    Apply the incremental config; outline a safe back-fill plan.
    → Model + back-fill steps
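
A minimal sketch of what step 1's recommendation could look like for a merge strategy (column and model names hypothetical):

-- models/events.sql (hypothetical incremental rewrite)
{{ config(
    materialized='incremental',
    incremental_strategy='merge',
    unique_key='event_id'
) }}

select *
from {{ ref('stg_events') }}
{% if is_incremental() %}
  -- only scan rows newer than what the target already holds
  where event_ts > (select max(event_ts) from {{ this }})
{% endif %}

The merge strategy plus unique_key is also what guards against the late-data pitfall below.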

Outcome: Daily runs that finish in minutes, not hours.

Pitfalls
  • unique_key gets duplicates on late-arriving data — use the merge strategy and test it

Combinations

Pair with other MCPs for 10x leverage

data-engineering-skill + bigquery-server

Apply the same optimize-by-id pattern to expensive BigQuery queries

Adapt finding-expensive-queries for BigQuery INFORMATION_SCHEMA.JOBS and list top 20.
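
A plausible adaptation of that prompt's output, assuming a US-region project (the region qualifier is a placeholder for yours):

-- Top 20 most expensive BigQuery jobs in the last 30 days by bytes billed
select job_id,
       user_email,
       total_bytes_billed,
       total_slot_ms
from `region-us`.INFORMATION_SCHEMA.JOBS
where creation_time >= timestamp_sub(current_timestamp(), interval 30 day)
  and job_type = 'QUERY'
order by total_bytes_billed desc
limit 20;
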
data-engineering-skill + github

Open a PR per migrated model so each is reviewable

For every migrated model, open a GitHub PR with dbt test output attached.

Tools

What this MCP exposes

Tool · Inputs · When to call · Cost
creating-dbt-models · model spec · New model · 0
debugging-dbt-errors · error log, model · CI or local run failed · 0
testing-dbt-models · model · Untested model · 0
documenting-dbt-models · model · Undocumented model · 0
migrating-sql-to-dbt · legacy SQL · Legacy migration · 0
refactoring-dbt-models · model · Hard-to-read model · 0
developing-incremental-models · full-refresh model · Runtime too long · 0
finding-expensive-queries · lookback window · Cost hunt · ACCOUNT_USAGE query
optimizing-query-text · SQL text · Know the SQL, not the id · 0
optimizing-query-by-id · query_id · Have the id from the UI · 1 explain

Cost & Limits

What this costs to run

API quota
Snowflake queries cost credits like any other — ACCOUNT_USAGE reads are cheap
Tokens per call
5–15k per dbt skill invocation
Monetary
Free skill
Tip
Run finding-expensive-queries once weekly, not on every session

Security

Permissions, secrets, blast radius

Minimum scopes: dbt read + write on your project; Snowflake ACCOUNT_USAGE for the cost skills
Credential storage: dbt profiles.yml / Snowflake key-pair in env; the skill doesn't store secrets
Data egress: None from the skill directly
Never grant: SYSADMIN to the Claude session unless absolutely needed
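
If you want a dedicated cost role instead of ACCOUNTADMIN, a least-privilege sketch (role and user names hypothetical):

-- IMPORTED PRIVILEGES on the SNOWFLAKE database exposes ACCOUNT_USAGE read-only
create role if not exists cost_readonly;
grant imported privileges on database snowflake to role cost_readonly;
grant role cost_readonly to user claude_session_user;  -- hypothetical user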

Troubleshooting

Common errors and fixes

dbt compile succeeds, run fails with column not found

Stale lineage — dbt deps + dbt clean + dbt build --select model+

finding-expensive-queries returns nothing

ACCOUNT_USAGE views lag by up to ~45 min; also confirm your role has access to SNOWFLAKE.ACCOUNT_USAGE

Verify: SHOW GRANTS TO ROLE <role>
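
Both failure modes can be checked in one pass; a sketch, with cost_readonly standing in for your role:

show grants to role cost_readonly;

-- ACCOUNT_USAGE ingestion lags; if the newest visible query is ~45 min old, just wait
select max(start_time) as newest_visible_query
from snowflake.account_usage.query_history;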

Alternatives

data-engineering-skills vs others

Alternative · When to use it instead · Tradeoff
dbt Cloud IDE · You prefer a managed UI over the terminal · No Claude in the loop
SQL query optimizers (Select.dev, etc.) · You want visual query plans · Separate tool, separate context

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues

🔍 Browse all 400+ MCP servers and Skills