
Postgres

by modelcontextprotocol · modelcontextprotocol/servers

Let Claude query your Postgres database with read-only safety, full schema introspection, and EXPLAIN plans — without ever giving it write access.

The reference Postgres MCP server. Connects with a standard postgres:// URL, exposes schema browsing, query execution, and EXPLAIN. Strictly read-only — no INSERT/UPDATE/DELETE/DDL — making it safe to point at production read replicas.


Install

Pick your client

Claude Desktop · ~/Library/Application Support/Claude/claude_desktop_config.json · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://..."
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

Cursor · ~/.cursor/mcp.json (global) · .cursor/mcp.json (project)
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://..."
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://..."
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

Windsurf · ~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://..."
      ]
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

Continue · ~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "postgres",
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://..."
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

Zed · ~/.config/zed/settings.json
{
  "context_servers": {
    "postgres": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "@modelcontextprotocol/server-postgres",
          "postgresql://..."
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

Claude Code CLI
claude mcp add postgres -- npx -y @modelcontextprotocol/server-postgres postgresql://...

One-liner. Verify with claude mcp list. Remove with claude mcp remove postgres.

Use Cases

Real-world ways to use Postgres

Answer ad-hoc business questions without touching SQL

👤 PMs, founders, anyone who'd rather not write SQL · ⏱ ~10 min · beginner

When to use: You have a question about your data ('how many users came back this week?') and the BI dashboard doesn't have it.

Prerequisites
  • Read-only postgres:// connection string to a replica — Most managed PG (RDS, Neon, Supabase) lets you create read-only credentials
  • Network access from where Claude runs to the DB — VPN or IP allowlist your machine
Flow
  1. Have Claude introspect the relevant tables first
    List all tables in our DB. For tables related to users, orders, or sessions, describe their schemas.
    → Schema overview before any query
  2. Ask the actual question
    How many users signed up in the last 30 days but haven't placed an order yet? Group by signup week.
    → Claude writes SQL, runs it, returns results table
  3. Probe for caveats
    Are there any reasons this number could be misleading? Soft deletes? Timezone in created_at? Specific user types we should exclude?
    → Honest call-out of data quirks

Outcome: A defensible answer to a business question with the SQL, the result, and the caveats — in 2 minutes instead of waiting 2 days for the data team.

Pitfalls
  • Claude writes a query that scans your largest table without limits — Set statement_timeout = '30s' on the connection (see the snippet after this list), and add 'always include LIMIT 1000 by default' to your system prompt
  • Counting 'users' depends on what counts as a user (deleted? bot? test?) — Tell Claude your conventions upfront: 'exclude rows where deleted_at IS NOT NULL' etc.
Combine with: notion
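
A minimal sketch of the timeout guard from the first pitfall; the role name is the example from the Security section below, and the 30-second value is illustrative:

-- Cap query time for the read-only role (role name is an example)
ALTER ROLE claude_readonly SET statement_timeout = '30s';
-- Or per connection, encoded in the URL you hand to the server:
-- postgresql://...?options=-c%20statement_timeout%3D30000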

Diagnose why a query is slow and suggest indexes

👤 Backend engineers, DBAs · ⏱ ~15 min · intermediate

When to use: You have a query that's slower than it should be. You want a second pair of eyes that doesn't get tired reading EXPLAIN ANALYZE output.

Flow
  1. Get the query plan
    Run EXPLAIN ANALYZE on this query: [paste]. Walk me through what the planner is doing.
    → Step-by-step plan walkthrough
  2. Identify the cost driver
    Which step is responsible for most of the cost? Is it a sequential scan, a bad join order, or expensive filtering?
    → Specific node identified with reason
  3. Suggest an index or rewrite
    Suggest the smallest change to make this fast. Prefer adding an index over rewriting the query, but only if the index would be useful for >1 query.
    → Concrete CREATE INDEX statement OR rewritten query

Outcome: An indexed-or-rewritten query, with reasoning, that you can verify by running EXPLAIN again (see the SQL sketch below).

Pitfalls
  • EXPLAIN on a non-representative dataset (small dev DB) gives misleading plans — Always run EXPLAIN against a database with production-shaped data; otherwise the plan is fiction
  • Adding an index seems free but slows down writes — Have Claude confirm with EXPLAIN that the planner would actually use the index before asking you to create it
Combine with: sentry
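
A sketch of the EXPLAIN-then-index loop from this use case; the table, columns, and index name are hypothetical, and since the MCP server is read-only you run the CREATE INDEX yourself:

-- 1. Inspect the plan (EXPLAIN ANALYZE executes the query, so SELECTs only)
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders
WHERE user_id = 42 AND created_at > now() - interval '30 days';

-- 2. If the plan shows a Seq Scan on orders, a composite index may help
CREATE INDEX CONCURRENTLY idx_orders_user_created
  ON orders (user_id, created_at);

-- 3. Re-run the EXPLAIN and confirm the planner switched to an Index Scan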

Audit a table for data quality issues

👤 Data engineers, anyone inheriting an unfamiliar schema · ⏱ ~25 min · intermediate

When to use: You're about to build a feature on top of a table you didn't design and you suspect it has problems.

Flow
  1. Run a battery of NULL / duplicate / orphan checks
    For the orders table: count NULL values per column, count duplicate rows by some natural key (e.g. (user_id, stripe_payment_intent_id)), count rows with foreign keys pointing to deleted parent rows.
    → Issue counts per check
  2. Check for value distribution oddities
    What are the min, max, and percentile distribution for total_cents? Are there suspiciously many rows with 0 or negative values?
    → Distribution stats, outliers flagged
  3. Cross-check against expected business rules
    Every 'completed' order should have a non-null paid_at. Are there exceptions?
    → Count of violations + sample IDs

Outcome: A short list of concrete data integrity bugs, each with a count and a fix path (see the SQL sketch below).

Pitfalls
  • Some 'issues' are intentional historical artifacts (data migrations) — Always confirm with someone who knows the history before assuming it's a bug
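
A sketch of step 1's battery in plain SQL; the orders columns come from the prompts above, and the users parent table is an assumption:

-- NULL counts per column (repeat for each column of interest)
SELECT count(*) FILTER (WHERE paid_at IS NULL) AS null_paid_at,
       count(*) FILTER (WHERE user_id IS NULL) AS null_user_id
FROM orders;

-- Duplicates by natural key
SELECT user_id, stripe_payment_intent_id, count(*) AS copies
FROM orders
GROUP BY user_id, stripe_payment_intent_id
HAVING count(*) > 1;

-- Orphaned foreign keys (rows whose parent was deleted)
SELECT o.id
FROM orders o
LEFT JOIN users u ON u.id = o.user_id
WHERE o.user_id IS NOT NULL AND u.id IS NULL;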

Auto-generate schema documentation for your team

👤 Tech leads onboarding new engineers · ⏱ ~20 min · beginner

When to use: Your DB has 40 tables, the wiki has 0. New hires keep asking 'what's this column?'

Flow
  1. Get all tables and their schemas
    List every table in the public schema. For each, give me columns, types, nullability, defaults, and any foreign keys.
    → Complete schema dump
  2. Infer purpose from naming + sample data
    For each table, sample 3 rows and write a one-paragraph description of what this table represents in our business.
    → Per-table prose explanation
  3. Flag unknown / suspicious tables
    Are there any tables that look unused, or that you can't infer a purpose for? List them so I can ask the original author.
    → Honest 'I don't know what these are' list

Outcome: A Markdown doc your team can drop into Notion or a wiki — covering 80% of what new hires need to know (see the SQL sketch below).

Pitfalls
  • Sampling sensitive data (PII) into the LLM context — For tables with PII, ask Claude to describe schemas only without sampling rows
Combine with: notion · filesystem
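
Step 1 boils down to one information_schema query that Claude can run directly; a minimal sketch:

-- Columns, types, nullability, and defaults for every table in public
SELECT table_name, column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;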

Compute A/B test results from raw event data

👤 Product analysts, growth engineers · ⏱ ~30 min · advanced

When to use: You ran an experiment, the data is in your DB, and you want significance numbers without writing SQL by hand.

Prerequisites
  • Events table with experiment assignment + conversion events — Standard schema: events(user_id, experiment, variant, timestamp), conversions(user_id, type, timestamp)
Flow
  1. Compute conversion rate per variant
    For experiment 'checkout-redesign-2026': how many users were assigned to each variant, and what was the conversion rate (by [your conversion event]) per variant?
    → Per-variant table with rates
  2. Calculate statistical significance
    Compute the chi-squared p-value for the difference between control and treatment. Is the result statistically significant at p < 0.05?
    → p-value with verdict
  3. Sanity check the numbers
    Are sample sizes balanced? Did the experiment run long enough? Any segments where the result reverses?
    → Health check, not just p-value

Outcome: A statistically defensible A/B test readout with the SQL, the numbers, and the caveats (see the SQL sketch below).

Pitfalls
  • Peeking at results before pre-defined sample size leads to false positives — Make Claude check whether the test reached its target sample size before computing significance
Combine with: notion
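
A sketch of steps 1–2 in one query, assuming the events/conversions schema named in Prerequisites; the variant names and conversion type are placeholders:

WITH counts AS (
  SELECT e.variant,
         count(DISTINCT e.user_id) AS assigned,
         count(DISTINCT c.user_id) AS converted
  FROM events e
  LEFT JOIN conversions c
    ON c.user_id = e.user_id AND c.type = 'purchase'  -- placeholder event
  WHERE e.experiment = 'checkout-redesign-2026'
  GROUP BY e.variant
), cells AS (
  SELECT max(converted)            FILTER (WHERE variant = 'treatment') AS a,
         max(assigned - converted) FILTER (WHERE variant = 'treatment') AS b,
         max(converted)            FILTER (WHERE variant = 'control')   AS c,
         max(assigned - converted) FILTER (WHERE variant = 'control')   AS d
  FROM counts
)
-- Chi-squared for a 2x2 table: N(ad-bc)^2 / ((a+b)(c+d)(a+c)(b+d));
-- with df = 1, chi_squared > 3.841 means p < 0.05
SELECT a, b, c, d,
       (a + b + c + d)::numeric
         * power(a::numeric * d - b::numeric * c, 2)
         / ((a + b)::numeric * (c + d) * (a + c) * (b + d)) AS chi_squared
FROM cells;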

Combinations

Pair with other MCPs for 10x leverage

postgres + notion

Run a query, post results as a Notion table for stakeholders who don't have DB access

Query our top 10 customers by lifetime revenue this quarter, then create a Notion page in 'Sales Reports' with the results as a formatted table.
postgres + sentry

Cross-reference DB state with errors — when an error mentions a record ID, look it up

Sentry issue WEB-3a91 mentions order_id 99214. Look up that order and tell me if anything in the row data could explain the crash.
postgres + filesystem

Export query results as CSV/JSON for downstream use

Run my churn-cohort query and save the result as /reports/churn-2026-04.csv.

Tools

What this MCP exposes

Tool           | Inputs                    | When to call                                            | Cost
list_tables    | schema?: str              | First step in any session — discover the schema         | free
describe_table | table: str, schema?: str  | Get full structure of a specific table before querying  | free
query          | sql: str                  | Run any read-only SQL — SELECT only                     | depends on query

Cost & Limits

What this costs to run

API quota: Bounded by your DB's connection limit and query timeout
Tokens per call: Schema queries run ~500 tokens; result sets depend on row count — cap them with LIMIT
Monetary: Free — costs are whatever your DB hosting bill already is
Tip: Always set a statement_timeout on the connection (e.g. ?options=-c%20statement_timeout%3D30000) so a runaway query can't tie up your database

Security

Permissions, secrets, blast radius

Minimum scopes: SELECT on the tables you want exposed
Credential storage: Connection string in env var. Use a dedicated read-only role: CREATE ROLE claude_readonly LOGIN PASSWORD '...'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO claude_readonly; (this grant covers existing tables only; see the snippet below)
Data egress: All queries to your DB; result rows shipped to whatever LLM provider you use
Never grant: INSERT, UPDATE, DELETE, DROP, TRUNCATE, ALTER
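
One follow-up worth knowing: GRANT ... ON ALL TABLES only covers tables that exist at grant time. A sketch for covering future tables too, using the same example role:

-- Tables created later in public become readable by the role as well
-- (applies to objects created by the role running this statement)
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO claude_readonly;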

Troubleshooting

Common errors and fixes

FATAL: password authentication failed

Check the connection string. Common cause: special chars in password not URL-encoded.

Verify: psql 'postgres://...' -c 'SELECT 1'

no pg_hba.conf entry / SSL required

Append ?sslmode=require to the connection string. Most managed Postgres requires SSL.

permission denied for table X

The role doesn't have SELECT on that table. Run GRANT SELECT ON X TO claude_readonly.

Verify: psql -c '\dp X'

canceling statement due to statement timeout

The query was too slow. Either optimize it (add an index, narrow the WHERE clause), or raise the timeout for that one session (see below).
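
If you're investigating by hand in psql, a session-level raise is the safest way to do that (the 60s value is illustrative):

SET statement_timeout = '60s';   -- current session only
RESET statement_timeout;         -- back to the role/DB default when done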

Alternatives

Postgres vs others

Alternative   | When to use it instead                                                       | Tradeoff
Supabase MCP  | You're on Supabase — get full project management plus SQL                   | Includes write access; less safe for prod
Neon MCP      | You're on Neon — adds branching for safe migration testing                  | Neon-specific features only work on Neon DBs
dbHub         | You need multi-database support (Postgres, MySQL, MongoDB, etc.) in one MCP | Newer; supports more DBs but each integration is shallower
sqlite MCP    | Local file-based DB instead of a server                                     | No concurrent access, no network, but zero setup

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues
