
Memory

by modelcontextprotocol · modelcontextprotocol/servers

A persistent knowledge graph Claude writes to and reads from across chats — so it remembers your project, team, and preferences.

The reference Memory MCP. Stores entities (people, projects, things), their observations (facts), and typed relations between them as a local JSON knowledge graph. Lets Claude remember specific facts ('our prod DB is named api-prod-01', 'Jamie prefers PR descriptions in bullet form') without relying on its context window.


Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "memory",
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "memory": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "@modelcontextprotocol/server-memory"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add memory -- npx -y @modelcontextprotocol/server-memory

One-liner for Claude Code. Verify with claude mcp list; remove with claude mcp remove memory.

Use Cases

Real-world ways to use Memory

How to stop re-explaining your project to Claude every morning

👤 Solo devs and power users who chat with Claude daily about the same codebase · ⏱ ~15 min · beginner

When to use: You keep pasting the same background paragraph: 'our stack is X, the prod DB is Y, we don't use Z' — at the start of every session.

Prerequisites
  • Memory MCP running with a persistent file path — Set MEMORY_FILE_PATH=/Users/you/.claude/memory.json so the graph survives restarts
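The prerequisite above can be wired into the same client config shown in the Install section. A minimal sketch for Claude Desktop, assuming your client accepts an env map alongside command and args (Claude Desktop does; the file path is an example, use your own):

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"],
      "env": {
        "MEMORY_FILE_PATH": "/Users/you/.claude/memory.json"
      }
    }
  }
}
```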
Flow
  1. Seed the graph with core facts
    Create entities for: my company (Acme), our main repo (acme-corp/api), and our prod database (api-prod-01). Add observations like 'uses Postgres 16', 'hosted on AWS RDS us-east-1', 'backup window is 03:00 UTC'. Connect them with relations.
    → Entities and relations created, visible via read_graph
  2. Add preference observations as you go
    Remember: when I ask you to write commit messages for this repo, use imperative mood without prefix tags. Store this as an observation on the 'acme-corp/api' entity.
    → Observation added without re-creating the entity
  3. Test recall in a new session
    What do you know about api-prod-01?
    → Claude queries the graph and surfaces the facts, not a generic answer

Outcome: A personal knowledge graph that gets smarter every chat — the more you use Claude, the less boilerplate you type.

Pitfalls
  • Claude doesn't auto-use memory; it forgets to check — Add 'Always consult the memory graph at the start of each task about this project' to your system/project prompt
  • Graph grows messy — duplicate entities with slight name variations — Pick a naming convention (kebab-case), and periodically ask Claude to read_graph and dedupe
Combine with: filesystem · github

Build a lightweight CRM of your coworkers' preferences

👤 ICs and leads who work across many stakeholders · ⏱ ~10 min · beginner

When to use: You keep forgetting who prefers Slack vs email, who wants bullets vs prose, who's on which project.

Flow
  1. Create a person entity on first interaction
    Create a Person entity 'jamie-chen'. Observations: 'PM on checkout team', 'prefers Loom over docs', 'reviews happen Tue/Thu mornings PT'.
    → Entity visible via open_nodes
  2. Link people to projects with relations
    Add relation: jamie-chen --owns--> checkout-redesign-2026. And: alex-kim --reviews--> checkout-redesign-2026.
    → Relations appear in graph
  3. Query before writing something to them
    I'm about to draft an update for Jamie on the checkout redesign. What do I know about their communication preferences and the project?
    → Returns stored preferences, informs draft tone

Outcome: You stop asking 'who was that PM again?' and your async updates hit the right tone first try.

Pitfalls
  • Storing sensitive/personal info about real coworkers feels weird and could leak — Only store work-preference observations; never personal details. Treat the file as sensitive — it syncs with whatever you back it up to
Combine with: linear · github

Keep a research trail across many sessions

👤 Researchers, writers, anyone investigating a topic over weeks · ⏱ ~20 min · intermediate

When to use: You're researching a topic (market study, literature review, investigation) that spans many sessions and sources.

Flow
  1. Capture each finding as an observation on a topic entity
    I'm researching 'MCP adoption in enterprises'. Create that as an entity. Now add this finding as an observation: 'Anthropic reports 60% of Claude Code customers use 3+ MCPs (source: blog 2026-03-12)'.
    → Topic entity grows incrementally with cited observations
  2. Link related topics
    Create entity 'MCP security concerns'. Relate it to 'MCP adoption in enterprises' with relation 'blocks-adoption-when-unaddressed'.
    → Graph shows the semantic connection
  3. Ask for synthesis at any point
    Based on all observations connected to 'MCP adoption in enterprises', draft a 1-pager summary with citations.
    → Synthesis with per-claim sources, nothing fabricated

Outcome: A citable, incremental research asset that doesn't depend on context-window gymnastics.

Pitfalls
  • Observations stored without sources are indistinguishable from model confabulation later — Require every observation to include a source in the text ('source: X, date: Y'); reject ones without
Combine with: firecrawl · exa-search · fetch

Combinations

Pair with other MCPs for 10x leverage

memory + filesystem

Load long-form research docs, extract facts, and store them in memory for later synthesis

Read every .md file under /research/. For each key claim, add it as an observation on the relevant topic entity in memory. Include filename as source.
memory + github

Remember repo-specific conventions so future PR reviews apply them without re-explanation

From the last 10 merged PRs in acme/api, extract tone, length, and title conventions. Store as observations on the 'acme/api' entity.

memory + sequential-thinking

Let long reasoning sessions persist their scratchpad findings across chats

Run a sequential-thinking session to plan our migration. At the end, write the conclusions as observations on the 'db-migration-q2' entity.

Tools

What this MCP exposes

Tool · Inputs · When to call · Cost
create_entities · entities: [{name, entityType, observations[]}] · Introduce a new entity (person, project, system, concept) · free
create_relations · relations: [{from, to, relationType}] · Connect two existing entities with a typed edge · free
add_observations · observations: [{entityName, contents[]}] · Append facts to an existing entity (most common op) · free
delete_entities · entityNames: str[] · Remove obsolete entities (also removes their relations) · free
delete_observations · deletions: [{entityName, observations[]}] · Remove specific facts that turned out wrong · free
delete_relations · relations: [...] · Remove edges without deleting entities · free
read_graph · none · Dump the full graph — use sparingly once it grows · free
search_nodes · query: str · Find entities by keyword across names/types/observations · free
open_nodes · names: str[] · Pull specific entities by exact name · free
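As a sketch of the input shapes above (entity name and observations borrowed from the use cases, not required values), a create_entities call might receive:

```json
{
  "entities": [
    {
      "name": "api-prod-01",
      "entityType": "system",
      "observations": ["uses Postgres 16", "backup window is 03:00 UTC"]
    }
  ]
}
```

add_observations then appends to the same entity by name, e.g. {"observations": [{"entityName": "api-prod-01", "contents": ["hosted on AWS RDS us-east-1"]}]}.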

Cost & Limits

What this costs to run

API quota
Unlimited — local JSON
Tokens per call
Small — graph operations return only changed nodes by default
Monetary
Free
Tip
Prefer search_nodes and open_nodes over read_graph once you have >50 entities, or you'll pay to load the whole graph every turn.

Security

Permissions, secrets, blast radius

Credential storage: No credentials. The graph file is whatever MEMORY_FILE_PATH points at.
Data egress: None from the server. Observations do ship to your LLM provider as context when Claude reads them.

Troubleshooting

Common errors and fixes

Memory doesn't persist across restarts

Set env var MEMORY_FILE_PATH to an absolute path like /Users/you/.claude/memory.json. Without it, the server uses a temp path.

Verify: Check your MCP client config for env vars; after restart, call `read_graph` and verify old entities return
Claude never consults memory on its own

The server exposes tools; the model still needs prompting. Add a project-level instruction like 'Before answering questions about <project>, call search_nodes for relevant context.'

Duplicate entities like 'Jamie' and 'jamie-chen'

Adopt a naming convention (kebab-case, or full-name). Periodically run read_graph, merge facts onto the canonical entity with add_observations, then delete_entities on the duplicates.

Relation fails with 'entity not found'

Both endpoint entities must exist first. Create them with create_entities before create_relations.
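A minimal sketch of that ordering, reusing entity names from the CRM use case (illustrative, not required): create both endpoints first, then the edge.

```json
{
  "entities": [
    { "name": "jamie-chen", "entityType": "person", "observations": ["PM on checkout team"] },
    { "name": "checkout-redesign-2026", "entityType": "project", "observations": [] }
  ]
}
```

With both entities in the graph, create_relations with {"from": "jamie-chen", "to": "checkout-redesign-2026", "relationType": "owns"} succeeds instead of erroring.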

Alternatives

Memory vs others

Alternative · When to use it instead · Tradeoff
Qdrant MCP · You need semantic search over thousands of notes, not a hand-curated graph · Requires running Qdrant; great for fuzzy recall, worse for explicit structured facts
Notion MCP · Your 'memory' is really a shared team knowledge base · Network-bound, slower, needs API key — but humans can read/edit it too
Neo4j MCP · You're building a serious knowledge graph with complex queries · Heavier — needs a database; overkill for personal memory

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues

🔍 Browse all 400+ MCP servers and Skills