● Official dbt-labs 🔑 Requires your own key

dbt

by dbt-labs · dbt-labs/dbt-mcp

Inspect your dbt project and Cloud jobs from an agent — lineage, model health, metric queries, and CLI runs, all in one MCP.

dbt Labs' official MCP for dbt Core, dbt Fusion, and dbt Cloud/Platform. Read model/source/exposure metadata, query the Semantic Layer, trigger dbt Cloud jobs, and run dbt CLI commands (build, test, run). Covers the full 'what, where, why' of a data pipeline.


Installation

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "dbt": {
      "command": "uvx",
      "args": [
        "dbt-mcp"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.
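Cloud features (jobs, Semantic Layer, Discovery) need credentials, which the same config can pass via a standard `env` block. A minimal sketch; the variable names (`DBT_HOST`, `DBT_TOKEN`, `DBT_PROD_ENV_ID`) are taken from the dbt-mcp README at the time of writing, so check the README for the current, full list:

```json
{
  "mcpServers": {
    "dbt": {
      "command": "uvx",
      "args": ["dbt-mcp"],
      "env": {
        "DBT_HOST": "cloud.getdbt.com",
        "DBT_TOKEN": "<your dbt Cloud service token>",
        "DBT_PROD_ENV_ID": "<production environment id>"
      }
    }
  }
}
```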

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "dbt": {
      "command": "uvx",
      "args": [
        "dbt-mcp"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project-level config takes precedence over the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "dbt": {
      "command": "uvx",
      "args": [
        "dbt-mcp"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "dbt": {
      "command": "uvx",
      "args": [
        "dbt-mcp"
      ]
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "dbt",
      "command": "uvx",
      "args": [
        "dbt-mcp"
      ]
    }
  ]
}

Continue uses an array of server objects, not a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "dbt": {
      "command": {
        "path": "uvx",
        "args": [
          "dbt-mcp"
        ]
      }
    }
  }
}

Add it under context_servers. Zed reloads automatically.

claude mcp add dbt -- uvx dbt-mcp

A one-line command. Verify: claude mcp list. Remove: claude mcp remove.

Use cases

Real-world scenarios: dbt

Diagnose why a dbt model is failing and propose a fix

👤 Analytics engineers · ⏱ ~15 min · intermediate

When to use: A scheduled dbt run failed. You need to know what broke and why, without opening 5 UIs.

Prerequisites
  • dbt Cloud account + service token — dbt Cloud → Profile → API Tokens
  • Local dbt project checkout (if using CLI tools) — git clone your dbt repo
Flow
  1. Find the failing run
    List my last 10 job runs in dbt Cloud. Show which ones failed and their error summary.
    → Failed run IDs with timestamps
  2. Drill into the failing model
    For the failed run, which model failed first? Get its details (SQL, description) and its upstream lineage.
    → Failing model + dependency chain
  3. Propose fix
    Run the model locally with dbt compile. Inspect the compiled SQL for the error. Propose the minimum edit to fix.
    → Concrete SQL fix with rationale

Outcome: A validated fix for a broken model in under 15 minutes.

Pitfalls
  • Cloud run failures can be environmental (connection/credentials), not code — before editing SQL, check whether the same model runs locally via the run tool; if it does, the problem is infra, not code.
Combine with: sentry · linear

Answer a business question using the dbt Semantic Layer

👤 Analytics engineers enabling self-serve · ⏱ ~10 min · beginner

When to use: A stakeholder asks 'what was MRR by plan last month?' — you have metrics defined in dbt SL.

Prerequisites
  • Semantic Layer enabled on your dbt Platform project — dbt Cloud → Account Settings → Semantic Layer
Flow
  1. Find the metric
    List available metrics in our SL. I'm looking for MRR or monthly_revenue.
    → Matching metric found
  2. Check dimensions
    What dimensions is the MRR metric queryable by? I want to filter/group by plan and month.
    → Valid dimension list
  3. Query and interpret
    Query MRR for last month, grouped by plan. Format the result as a table and comment on the biggest contributors.
    → Table + brief analysis

Outcome: The stakeholder gets a trustworthy, governed answer in 2 minutes; no one wrote ad-hoc SQL.

Pitfalls
  • Querying by an unsupported dimension returns an empty result without a clear error — always call get_dimensions on the metric first; don't assume.
Combine with: notion
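The grouped result from step 3 is just rows of (dimension value, metric value). A minimal sketch of the "biggest contributors" commentary, with made-up numbers; the real rows come back from the query_metrics tool:

```python
def top_contributors(rows: list[tuple[str, float]], n: int = 2) -> list[str]:
    """Rank (plan, mrr) rows and report each plan's share of the total."""
    total = sum(v for _, v in rows)
    ranked = sorted(rows, key=lambda r: r[1], reverse=True)
    return [f"{name}: {value:,.0f} ({value / total:.0%} of MRR)"
            for name, value in ranked[:n]]

# Hypothetical query_metrics output: MRR by plan for last month
rows = [("enterprise", 120_000.0), ("team", 45_000.0), ("starter", 15_000.0)]
for line in top_contributors(rows):
    print(line)
```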

Check impact before editing a core model

👤 Analytics engineers about to touch a foundational model · ⏱ ~20 min · intermediate

When to use: You're about to change dim_customers. You need to know every downstream consumer first.

Flow
  1. Get lineage
    Get downstream lineage for dim_customers. Include models, exposures, and any metrics.
    → Full downstream graph
  2. Quantify impact
    For each downstream model, get its model_performance and model_health — which are critical (used by exposures, run daily)?
    → Priority list of what will break if you get it wrong
  3. Plan the change
    Write a change plan: what tests to add, which downstream owners to notify (check exposures), and what to monitor after deploy.
    → Rollout plan

Outcome: Changes to shared models ship with awareness, not blast-radius surprises.

Pitfalls
  • Exposures only exist if you maintain them — untracked downstream use in BI tools is invisible. Combine with your BI tool's API (Looker, Tableau) to find real consumers; dbt only knows what it's told.
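The downstream graph from step 1 is the same thing you can compute locally from `manifest.json`, which records each node's children in `child_map`. A minimal sketch with a toy graph; real node ids look like `model.my_project.dim_customers`:

```python
from collections import deque

def downstream(child_map: dict[str, list[str]], node: str) -> set[str]:
    """All transitive downstream nodes of `node`, per the manifest child_map."""
    seen: set[str] = set()
    queue = deque(child_map.get(node, []))
    while queue:
        cur = queue.popleft()
        if cur not in seen:
            seen.add(cur)
            queue.extend(child_map.get(cur, []))
    return seen

# Toy graph: dim_customers feeds two marts, one of which feeds an exposure
child_map = {
    "model.demo.dim_customers": ["model.demo.fct_orders", "model.demo.mrt_ltv"],
    "model.demo.fct_orders": ["exposure.demo.revenue_dashboard"],
    "model.demo.mrt_ltv": [],
}
print(sorted(downstream(child_map, "model.demo.dim_customers")))
```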

Scaffold staging models from raw sources

👤 Analytics engineers onboarding a new source · ⏱ ~30 min · intermediate

When to use: New Fivetran/source data lands. You need a stg_* model + yml for each table.

Prerequisites
  • sources.yml entries for the new data — define sources first; the agent generates staging models from there
Flow
  1. Generate source block
    Use generate_source for database 'raw', schema 'stripe'. Write the output to models/staging/stripe/_sources.yml.
    → Source yml populated with all tables
  2. Scaffold staging models
    For each source table, call generate_staging_model. Write each to models/staging/stripe/stg_stripe__<table>.sql.
    → One .sql file per source table
  3. Add docs + tests
    For each new staging model, call generate_model_yaml. Add not_null tests on primary keys. Commit.
    → A clean, tested staging layer

Outcome: A full staging layer in minutes; no copy-paste drift.

Pitfalls
  • Generated models use SELECT *, which then pulls in PII columns — after generation, explicitly list columns and exclude/hash any sensitive ones before merging.
Combine with: git
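What the generate_staging_model step produces is, in spirit, a rename pass over the source. A hand-rolled sketch of that shape (the actual tool's output will differ) which also shows the pitfall's fix, listing columns explicitly instead of SELECT *:

```python
def staging_sql(source_name: str, table: str, columns: dict[str, str]) -> str:
    """Render a stg_ model: a source() CTE plus explicit renames.

    `columns` maps raw column name -> staged name; listing columns
    explicitly (rather than SELECT *) is how you keep PII out.
    """
    selects = ",\n        ".join(
        f"{raw} as {new}" if raw != new else raw
        for raw, new in columns.items()
    )
    return (
        "with source as (\n"
        f"    select * from {{{{ source('{source_name}', '{table}') }}}}\n"
        ")\n\n"
        "select\n"
        f"        {selects}\n"
        "from source\n"
    )

sql = staging_sql("stripe", "charges", {"id": "charge_id", "amount": "amount"})
print(sql)
```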

Combos

Combine with other MCPs for a 10x effect

dbt + sentry

When a dbt model failure breaks downstream features, correlate with Sentry error spikes

Find failed dbt runs in the last 24h. For each, check Sentry for error spikes in services that depend on those models' tables.
dbt + linear

File Linear bugs for recurring dbt test failures

List dbt tests that have failed more than 3 times in the last week. For each, create a Linear bug in the Analytics team with the test details.
dbt + notion

Auto-document metrics into a Notion glossary

For every metric in our Semantic Layer, create or update a Notion page in the Metrics Glossary database with name, description, and owner.

Tools

What this MCP provides

Tool | Inputs | When to call | Cost
list_metrics / get_dimensions / get_entities / query_metrics | metric name, dimensions, filters | Business-metric questions | SL queries billable per dbt Cloud plan
execute_sql / text_to_sql | sql or natural language | Ad hoc SQL exploration with dbt context | Warehouse credits
get_all_models / get_model_details / get_lineage | model identifiers | Discovery + impact analysis | free
get_model_health / get_model_performance | model id | SRE-style checks on the data platform | free
build / run / test / compile / parse / show / docs / list | dbt CLI args | Local dbt Core usage | warehouse compute for run/test/build
list_jobs / trigger_job_run / get_job_details / cancel_job_run / retry_job_run / list_job_runs | job/run IDs | dbt Cloud operations | 1 Admin API call
generate_source / generate_staging_model / generate_model_yaml | source/model refs | Scaffolding new models | free
get_exposures / get_exposure_details | exposure name | Find downstream consumers documented as exposures | free

Cost and limits

What it costs

API quota
dbt Cloud Admin API: depends on plan. Semantic Layer: per-plan limits.
Tokens per call
Lineage graphs + model lists can be large — paginate.
Money
The MCP is free; dbt Core is free; dbt Cloud/Platform is paid. Warehouse queries are billed by your warehouse.
Tip
Use discovery tools (get_model_details, get_lineage) freely — they only touch metadata. Be careful with execute_sql / query_metrics, which hit the warehouse.
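That tip can be enforced mechanically: keep an allowlist of metadata-only tools and require confirmation for anything that spends warehouse credits or quota. A minimal sketch, with tool names taken from the table above:

```python
# Metadata-only tools from the table above: cheap to call liberally.
FREE_TOOLS = {
    "get_all_models", "get_model_details", "get_lineage",
    "get_model_health", "get_model_performance",
    "get_exposures", "get_exposure_details",
}
# Tools that spend warehouse credits or Semantic Layer quota.
BILLABLE_TOOLS = {"execute_sql", "text_to_sql", "query_metrics",
                  "build", "run", "test"}

def needs_confirmation(tool: str) -> bool:
    """True if calling this tool costs money or quota (unknown tools
    are treated as billable, the safe default)."""
    return tool in BILLABLE_TOOLS or tool not in FREE_TOOLS

print(needs_confirmation("get_lineage"))   # False
print(needs_confirmation("execute_sql"))   # True
```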

Security

Permissions, secrets, blast radius

Credential storage: dbt Cloud service token in an env var; Core uses your profiles.yml
Outbound traffic: dbt Cloud (cloud.getdbt.com) for Cloud tools; your warehouse for SQL tools

Troubleshooting

Common errors and fixes

401 on Admin API calls

Service token expired or missing the required account. Regenerate at dbt Cloud → Account Settings → Service Tokens.

Semantic Layer tools return 'not configured'

SL is a paid feature and must be enabled per project. Check dbt Cloud → Project Settings → Semantic Layer.

CLI tools (run/build) fail with 'No profile'

Set DBT_PROFILES_DIR to a directory containing profiles.yml, or run from the project root with a local profiles.yml.
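For the fix above, a minimal profiles.yml sketch (Postgres shown; the profile name, connection fields, and `type` are placeholders to adapt to your warehouse):

```yaml
# ~/.dbt/profiles.yml — profile name must match `profile:` in dbt_project.yml
my_project:
  target: dev
  outputs:
    dev:
      type: postgres
      host: localhost
      port: 5432
      user: analytics
      password: "{{ env_var('DBT_PASSWORD') }}"
      dbname: analytics
      schema: dbt_dev
      threads: 4
```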

Verify: dbt debug

get_lineage returns empty

Manifest is stale. Run parse first to regenerate manifest.json.

Alternatives

dbt compared

Alternative | When to use | Trade-off
SQLMesh MCP | You use SQLMesh instead of dbt | Different transformation paradigm; not a direct swap
Direct warehouse MCPs (Snowflake, BigQuery) | You only need raw SQL, not dbt metadata | You lose model/lineage/test awareness

More

Resources

📖 Read the official README on GitHub

🐙 Open issues

🔍 All 400+ MCP servers and Skills