
MARM-Systems

by Lyellr88 · Lyellr88/MARM-Systems

Persistent, searchable memory across Claude, Qwen, Gemini, and any MCP client — sessions, notebooks, and semantic recall in one server.

MARM-Systems is a Python MCP server that gives any AI client a persistent memory layer: sessions, notebooks, auto-classifying contextual logs, and semantic vector search. It speaks STDIO, HTTP, and WebSocket transports, and uses a SQLite backend with WAL mode for concurrent access.
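The WAL detail matters for concurrency: SQLite's default rollback journal blocks readers while a write is in progress, whereas write-ahead logging lets reads and writes proceed together. A minimal sketch of what enabling it looks like (illustrative only — MARM-Systems manages this internally):

```python
import os
import sqlite3
import tempfile

# WAL requires a file-backed database (in-memory DBs don't support it)
path = os.path.join(tempfile.mkdtemp(), "marm.db")
conn = sqlite3.connect(path)

# Switch to write-ahead logging: readers no longer block on writers
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal
```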


Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "marm-systems": {
      "command": "uvx",
      "args": [
        "MARM-Systems"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "marm-systems": {
      "command": "uvx",
      "args": [
        "MARM-Systems"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "marm-systems": {
      "command": "uvx",
      "args": [
        "MARM-Systems"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "marm-systems": {
      "command": "uvx",
      "args": [
        "MARM-Systems"
      ]
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "marm-systems",
      "command": "uvx",
      "args": [
        "MARM-Systems"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "marm-systems": {
      "command": {
        "path": "uvx",
        "args": [
          "MARM-Systems"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add marm-systems -- uvx MARM-Systems

One-liner. Verify with claude mcp list. Remove with claude mcp remove.

Use Cases

Real-world ways to use MARM-Systems

Build persistent memory across sessions on a long project

👤 Anyone working on a multi-week effort with an AI ⏱ ~15 min beginner

When to use: You keep re-explaining the same project background every session.

Prerequisites
  • MARM-Systems installed and running — docker pull lyellr88/marm-mcp-server && docker run -d -p 8001:8001 lyellr88/marm-mcp-server
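If you run the Docker image, the SQLite file lives inside the container and is lost when the container is removed; mounting a volume keeps memory across upgrades. A sketch of the idea as a compose file — the `/app/data` path is an assumption, so check the image docs for the actual data directory:

```yaml
services:
  marm:
    image: lyellr88/marm-mcp-server
    ports:
      - "8001:8001"
    volumes:
      - marm-data:/app/data   # assumed data path; verify against the image
volumes:
  marm-data:
```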
Flow
  1. Start a session tagged to the project
    marm_start with project tag 'dataplatform-migration'. Log that we're migrating from Redshift to Snowflake, deadline end of Q2.
    → Session started; initial entry saved
  2. Drop context as you work
    marm_contextual_log: 'Decided to use Fivetran for CDC replication, evaluated Airbyte but config overhead too high.'
    → Auto-classified and stored
  3. Next session, recall
    marm_smart_recall 'what did we decide about CDC tooling?'
    → Relevant past decision surfaces

Outcome: Session N+1 starts with all context from sessions 1 through N accessible by query instead of re-typed.

Pitfalls
  • Dumping every chat into memory clogs recall — Use marm_contextual_log for decisions and milestones, not every exchange
  • Vector search misses on jargon — Tag entries explicitly with project names for keyword fallback
Combine with: drift
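The keyword-fallback pitfall above can be sketched as: pre-filter candidates by tag, then rank what's left by vector similarity. A toy illustration with hand-rolled two-dimensional vectors — MARM's real embeddings come from a local model, and the entries and names here are hypothetical:

```python
import math

def cosine(a, b):
    # Standard cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical stored entries: (tags, embedding, text)
entries = [
    ({"dataplatform-migration"}, [0.9, 0.1], "Use Fivetran for CDC"),
    ({"dataplatform-migration"}, [0.2, 0.8], "Deadline end of Q2"),
    ({"other-project"},          [0.95, 0.05], "Unrelated CDC note"),
]

def recall(query_vec, tag, top_k=1):
    # Tag pre-filter keeps recall scoped; similarity then ranks the rest
    scoped = [e for e in entries if tag in e[0]]
    scoped.sort(key=lambda e: cosine(query_vec, e[1]), reverse=True)
    return [e[2] for e in scoped[:top_k]]

print(recall([1.0, 0.0], "dataplatform-migration"))  # ['Use Fivetran for CDC']
```

Note how the "other-project" entry is excluded before similarity is ever computed — that is the keyword fallback doing its job on jargon-heavy queries.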

Share memory across a team using multiple AI assistants

👤 Teams where different members use Claude / Qwen / Gemini ⏱ ~45 min advanced

When to use: Knowledge trapped in one person's AI shouldn't be.

Flow
  1. Run MARM as a shared HTTP server
    Deploy MARM-Systems on a team server; each teammate points their MCP client at http://marm.team.internal:8001/mcp.
    → All clients connect
  2. Log shared context
    marm_contextual_log: 'Database backup runbook is at /runbooks/db-backup.md; last updated 2026-04-10.'
    → Everyone can recall it
  3. Recall from any client
    From a teammate's Qwen session: marm_smart_recall 'how do I restore the DB?'
    → Same answer surfaces

Outcome: Cross-AI shared institutional memory.

Pitfalls
  • Shared memory needs real OAuth — Don't rely on hardcoded dev credentials; wire up your IdP before production use
  • Sensitive data leaks into shared recall — Use project tags and scope queries by tag

Use notebooks as AI-queryable scratchpads

👤 Learners, researchers ⏱ ~5 min beginner

When to use: You want a topic-scoped note area the AI can search later.

Flow
  1. Create a notebook
    marm_notebook_add name='rust-ownership' with entry: 'Move vs borrow: move transfers ownership, borrow lets you peek.'
    → Notebook created
  2. Add as you learn
    marm_notebook_add 'rust-ownership': 'Mutable borrow is exclusive; only one at a time.'
    → Entry appended
  3. Recall later
    marm_notebook_show 'rust-ownership'
    → Full notebook

Outcome: A queryable learning journal.
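The flow above amounts to append-only, topic-keyed storage. A minimal in-memory stand-in showing the shape of the data — MARM itself persists notebooks to SQLite, and these function names only mirror the tool names:

```python
from collections import defaultdict

notebooks = defaultdict(list)  # notebook name -> ordered list of entries

def notebook_add(name, entry):
    # Entries append in order, so a notebook reads as a journal
    notebooks[name].append(entry)

def notebook_show(name):
    return "\n".join(notebooks[name])

notebook_add("rust-ownership", "Move vs borrow: move transfers ownership.")
notebook_add("rust-ownership", "Mutable borrow is exclusive; only one at a time.")
print(notebook_show("rust-ownership"))
```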

Combinations

Pair with other MCPs for 10x leverage

marm-systems + drift

drift remembers code conventions; MARM remembers project decisions

Recall drift conventions AND MARM decisions for project 'dataplatform-migration' and summarize where they interact.
marm-systems + claude-code-organizer

Move short-lived memories out of CC into MARM for long-term recall

Organizer listed my 12 biggest memories — move the 5 project-specific ones into MARM notebooks.

Tools

What this MCP exposes

Tool · Inputs · When to call · Cost
marm_start · project?, tags? · Start of any working session · free (local)
marm_refresh · session_id? · Reload context mid-session · free
marm_smart_recall · query: str, top_k?: int · Semantic search past entries · free (local embeddings)
marm_contextual_log · text: str, tags?: str[] · Persist a decision/milestone/fact · free
marm_log_session · session_id · Review a past session · free
marm_notebook_add · name: str, entry: str · Topic-scoped notes · free
marm_notebook_use · name: str · Pin a topic for the session · free
marm_notebook_show · name: str · Read a notebook · free
marm_summary · scope: session|notebook|tag, id · Condense long history · free (local)
marm_context_bridge · from_session, to_session · Cross-project reasoning · free

Cost & Limits

What this costs to run

API quota
None — local-only
Tokens per call
Recall responses 500-2000 tokens typically
Monetary
Free, open source
Tip
Rotate or archive old sessions periodically — SQLite handles scale but recall quality degrades with noise.
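The rotation tip can be automated with a small script against the SQLite file. The table and column names below are hypothetical stand-ins — inspect the real schema with sqlite3's `.schema` before running anything like this against your data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical schema standing in for MARM's actual tables
conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, ts TEXT, text TEXT)")
conn.execute(
    "INSERT INTO entries (ts, text) VALUES ('2025-01-01', 'old'), ('2026-04-01', 'recent')"
)

# Archive, then delete, anything older than a cutoff date
# (ISO-8601 date strings compare correctly as text)
cutoff = "2026-01-01"
old = conn.execute("SELECT text FROM entries WHERE ts < ?", (cutoff,)).fetchall()
conn.execute("DELETE FROM entries WHERE ts < ?", (cutoff,))
remaining = conn.execute("SELECT COUNT(*) FROM entries").fetchone()[0]
print(old, remaining)  # [('old',)] 1
```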

Security

Permissions, secrets, blast radius

Credential storage: Local SQLite; for shared HTTP mode use env-based OAuth secrets
Data egress: None by default (local storage); if you run shared, only within your network
Never grant: Public internet access to port 8001 without an auth layer
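One cheap way to honor that rule in shared HTTP mode is a reverse proxy or middleware that checks a bearer token read from the environment. A sketch of the check itself — this is not MARM's built-in auth, and `MARM_SHARED_TOKEN` is a hypothetical variable name, just the general shape:

```python
import hmac
import os

def authorized(auth_header: str, expected: str) -> bool:
    # Constant-time comparison avoids leaking the token via timing
    if not auth_header.startswith("Bearer "):
        return False
    token = auth_header[len("Bearer "):]
    return hmac.compare_digest(token, expected)

# Hypothetical env var; never hardcode the real secret
secret = os.environ.get("MARM_SHARED_TOKEN", "dev-only-token")
print(authorized("Bearer dev-only-token", secret))
```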

Troubleshooting

Common errors and fixes

Cannot connect to localhost:8001

Container not running. docker ps to check; docker start marm-mcp-server.

Verify: curl http://localhost:8001/docs
Embeddings model download fails

First run downloads the model; needs outbound net. After that, offline works.

Verify: Check docker logs for HuggingFace download
Recall returns unrelated entries

Too few entries for good embeddings; add more, or pre-filter by tag.

Verify: marm_smart_recall with tag filter

Alternatives

MARM-Systems vs others

Alternative · When to use it instead · Tradeoff
drift · You want code-specific memory (conventions, decisions) · Code-focused; less general
Letta (MemGPT) · You want a research-grade memory agent, not just a server · Heavier to run; opinionated architecture
CC native memories · Claude Code only; no cross-client need · No semantic search; bloats context

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues

🔍 Browse all 400+ MCP servers and Skills