
lean-ctx

By yvgude · yvgude/lean-ctx

Cut AI coding token costs by up to 99% — Rust MCP server with 42 tools for cached reads, context compression, and smart file modes.

lean-ctx is a single Rust binary that reduces AI coding costs through two mechanisms: a shell hook that compresses CLI output (60-95% savings) and an MCP server with 42 tools for cached file reads, context optimization, and multi-agent coordination. Supports 24 AI coding tools (Cursor, Claude Code, Copilot, Windsurf, etc.) with zero telemetry.


Installation

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "lean-ctx": {
      "command": "TODO",
      "args": [
        "See README: https://github.com/yvgude/lean-ctx"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart the app after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "lean-ctx": {
      "command": "TODO",
      "args": [
        "See README: https://github.com/yvgude/lean-ctx"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project-level config takes precedence over the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "lean-ctx": {
      "command": "TODO",
      "args": [
        "See README: https://github.com/yvgude/lean-ctx"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then select "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "lean-ctx": {
      "command": "TODO",
      "args": [
        "See README: https://github.com/yvgude/lean-ctx"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "lean-ctx",
      "command": "TODO",
      "args": [
        "See README: https://github.com/yvgude/lean-ctx"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "lean-ctx": {
      "command": {
        "path": "TODO",
        "args": [
          "See README: https://github.com/yvgude/lean-ctx"
        ]
      }
    }
  }
}

Add it under context_servers. Zed hot-reloads on save.

claude mcp add lean-ctx -- TODO 'See README: https://github.com/yvgude/lean-ctx'

A one-line command. Verify with claude mcp list; remove with claude mcp remove.

Use cases

lean-ctx in practice

How to cut your AI coding costs by 80%+ with lean-ctx

👤 Developers paying for Claude, Cursor, or Copilot API tokens · ⏱ ~10 min · beginner

When to use: Your monthly AI coding bill is growing and you want to reduce it without changing your workflow.

Prerequisites
  • lean-ctx installed — brew install lean-ctx or curl -fsSL https://leanctx.com/install.sh | sh
Steps
  1. One-command setup
    Run: lean-ctx setup
    → Shell hooks installed; all detected editors configured automatically
  2. Code normally for a day
    Use your AI coding tool as usual — lean-ctx works transparently in the background.
    → No workflow change; compression happens silently
  3. Check savings
    Run: lean-ctx gain
    → Dashboard showing token savings per category (file reads, git, shell, etc.)

Result: Measurable token reduction (typically 60-90%) with zero workflow friction.

Pitfalls
  • Compressed output confuses some edge-case prompts — use lean-ctx-off to disable temporarily, or use ctx_read in 'full' mode for specific files
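The three steps above boil down to two commands plus an escape hatch. A sketch of the day-one loop, assuming lean-ctx is installed per the prerequisites (all commands are taken from this use case, not invented):

```shell
# Day-one workflow sketch (commands from the steps above).
lean-ctx setup        # installs shell hooks, auto-configures detected editors

# ...use your AI coding tool normally for a day; compression runs silently...

lean-ctx gain         # dashboard of token savings per category

# Escape hatch if compressed output ever confuses a prompt:
lean-ctx-off          # temporarily disables the shell hooks
```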

Read large codebases efficiently with 8 file modes

👤 AI coding tool users working on large repos · ⏱ ~5 min · beginner

When to use: Your AI assistant wastes tokens reading entire files when it only needs the structure or the signatures.

Steps
  1. Use map mode for an overview
    lean-ctx read src/main.rs -m map
    → File structure at ~10% of full token cost
  2. Use signatures mode for the API
    lean-ctx read src/lib.rs -m signatures
    → Just function/type signatures, no bodies
  3. Use diff mode for changes
    lean-ctx read src/main.rs -m diff
    → Only lines changed since the last read — minimal tokens

Result: The AI gets exactly the context it needs at a fraction of the token cost.

Pitfalls
  • Aggressive mode strips too much for complex refactoring — use 'task' mode instead; it preserves context relevant to the current task
Pairs with: filesystem
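The modes in the steps above can be compared side by side on a single file. A sketch, assuming `src/main.rs` is a file in your repo (the mode names come from the ctx_read tool's mode list):

```shell
# Compare read modes on one file (src/main.rs is an example path).
lean-ctx read src/main.rs -m map          # structure only, ~10% of full cost
lean-ctx read src/main.rs -m signatures   # function/type signatures, no bodies
lean-ctx read src/main.rs -m diff         # only lines changed since the last read
lean-ctx read src/main.rs -m full         # everything, when every line matters
```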

Share context between AI agents without re-reading everything

👤 Power users running multiple AI agents on the same project · ⏱ ~15 min · intermediate

When to use: You're using Claude Code for architecture and Cursor for implementation, and they keep re-reading the same files.

Steps
  1. Build context in agent A
    In Claude Code: analyze the architecture of this project using ctx_overview.
    → Context built and cached by lean-ctx
  2. Share to agent B
    In Cursor: use ctx_share to pull the architecture context from the last session.
    → Cursor gets the architecture map without re-reading all files
  3. Track costs across agents
    lean-ctx gain --all-sessions
    → Combined savings across both agents

Result: Multiple agents share a cached context layer, avoiding redundant file reads.

Pitfalls
  • Stale cache after file changes — lean-ctx tracks file modification times; the cache invalidates automatically on change

Combinations

Combine with other MCPs for 10x efficiency

lean-ctx + filesystem

Use lean-ctx for token-efficient reads and filesystem MCP for writes — best of both worlds

Use ctx_read to analyze the codebase structure efficiently, then use filesystem MCP to apply the refactoring changes.
lean-ctx + github

lean-ctx compresses local context, GitHub MCP handles remote operations — minimizing total tokens

Use ctx_overview to understand the project, then use GitHub MCP to create a well-informed PR description.

Tools

What this MCP exposes

Tool · Input · When to call · Cost
ctx_read · file: str, mode: full|map|signatures|diff|aggressive|entropy|task|lines · Read a file with controlled verbosity · 0 — local cached read
ctx_multi_read · files: str[], mode?: str · Read multiple files in one call · 0
ctx_tree · path: str · Get project structure · 0
ctx_search · query: str, path?: str · Find patterns in code · 0
ctx_smart_read · file: str, intent: str · Read with task-aware filtering · 0
ctx_shell · command: str · Run shell commands with output compression · 0
ctx_overview · path: str · Get a high-level project overview · 0
ctx_session · — · Manage context across conversation turns · 0
ctx_cost · — · Track how much you've saved · 0
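Editors invoke these tools over the standard MCP JSON-RPC transport rather than the CLI. A sketch of what a ctx_read call could look like on the wire — the tool name and arguments come from the table above; the `tools/call` envelope is the standard MCP shape, and the path is an illustrative example:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ctx_read",
    "arguments": { "file": "src/main.rs", "mode": "map" }
  }
}
```

In practice your editor constructs this for you; it is shown only to make the tool/input columns above concrete.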

Costs and limits

Operating costs

API quota
No external API — everything runs locally.
Tokens per call
That's the point: ctx_read in map mode uses ~10% of the tokens a full read would. Shell compression saves 60-95%.
Price
Free and open-source (MIT). Saves $30-100+/month on AI coding API costs for active developers.
Use 'map' mode as default for exploration, 'full' only when you need every line. The savings compound fast.

Security

Permissions, secrets, and blast radius

Credential storage: No credentials needed. Purely local tool.
Data egress: Zero. No telemetry, no analytics, no network requests. Everything stays on your machine.

Troubleshooting

Common errors and fixes

Shell commands produce garbled output

Run lean-ctx setup to update the hooks to the latest version. If the problem persists, run lean-ctx-off to disable them temporarily.

Verify: lean-ctx doctor
MCP server not detected by editor

Run lean-ctx setup again — it auto-configures all detected editors. For manual setup: claude mcp add lean-ctx lean-ctx

Verify: lean-ctx doctor
Binary not found after install

Shell aliases auto-fallback safely. Re-run the install script or brew reinstall lean-ctx.

Verify: which lean-ctx
ctx_read returns stale content

lean-ctx tracks file mtime for cache invalidation. If you edited outside the normal flow, touch the file to reset the mtime.

Verify: lean-ctx cache --status
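The verification commands above can be strung into a single health pass. A sketch using only the commands named in this section (the `touch` target is a hypothetical path, standing in for whichever file returned stale content):

```shell
which lean-ctx           # confirm the binary is on PATH
lean-ctx doctor          # validate hooks and editor configuration
lean-ctx cache --status  # inspect the read cache
touch src/main.rs        # bump mtime to force cache invalidation for one file
```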

Alternatives

How lean-ctx compares

Alternative · When to use · Trade-offs
Rust Token Killer · You want a similar concept with different trade-offs · Fewer tools (~50 patterns vs 42 tools), supports 3 editors vs 24, default-on telemetry with PII
Manual .cursorrules / CLAUDE.md context management · You prefer manual context curation · No automatic compression, no caching, more work

See more

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and Skills