
llm-context.py

by cyberchitta · cyberchitta/llm-context.py

Share just the right slice of your codebase with any LLM — rule-driven file selection, outlines, and on-demand fetches, not 'paste it all'.

llm-context.py is a rule-based code-sharing tool that exposes its output via MCP or clipboard. Instead of uploading your whole repo, you define composable rules (filter, instruction, style, excerpt) per task and ship a focused context. The MCP flavor lets the LLM ask for more files on demand.
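
Want to poke at it from a terminal before wiring up a client? A minimal local setup sketch, using the install command listed under Prerequisites and the lc-init step from Troubleshooting below:

# Install the CLI once with uv (requirement taken from the Prerequisites section)
uv tool install 'llm-context>=0.6.0'
# Initialize the repo you want to share; this creates the .lc/ scaffolding
cd /path/to/your/repo
lc-init
ls .lc/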

Why use it

Key features

Live demo

A preview in practice


Install

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "llm-context-py": {
      "command": "uvx",
      "args": [
        "llm-context.py"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "llm-context-py": {
      "command": "uvx",
      "args": [
        "llm-context.py"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. The project config overrides the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "llm-context-py": {
      "command": "uvx",
      "args": [
        "llm-context.py"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "llm-context-py": {
      "command": "uvx",
      "args": [
        "llm-context.py"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "llm-context-py",
      "command": "uvx",
      "args": [
        "llm-context.py"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "llm-context-py": {
      "command": {
        "path": "uvx",
        "args": [
          "llm-context.py"
        ]
      }
    }
  }
}

Add it under context_servers. Zed hot-reloads on save.

claude mcp add llm-context-py -- uvx llm-context.py

One line. Verify with claude mcp list. Remove with claude mcp remove.
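
A quick lifecycle sketch, combining the one-liner above with the fallback entry point from the Troubleshooting section below:

# Register the server with Claude Code
claude mcp add llm-context-py -- uvx llm-context.py
# Confirm it shows up
claude mcp list
# If the lc_* tools never appear, re-register with the explicit entry point (see Troubleshooting)
claude mcp remove llm-context-py
claude mcp add llm-context-py -- uvx --from llm-context lc-mcp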

Use cases

Real-world uses: llm-context.py

Ship a focused code-review context to your LLM

👤 Developers tired of pasting 3000 lines · ⏱ ~15 min · intermediate

When to use it: You're about to ask 'review my auth module' and don't want the whole repo in context.

Prerequisites
  • llm-context.py installed — uv tool install 'llm-context>=0.6.0'
  • Initialized in your repo: run lc-init in your repo root
Steps
  1. Create a filter rule for auth
    Create an lc filter rule 'flt-auth' that includes src/auth/** and src/middleware/auth*.ts.
    → Rule file created under .lc/
  2. Preview what the rule selects
    Run lc_preview on flt-auth — show me which files will be sent and total token count.
    → File list + token count
  3. Share context with the LLM via MCP
    Using the flt-auth rule, review the module for security issues. If you need to see a specific file not included, ask via lc_missing.
    → Review with targeted file requests

Result: A code review that fits in context and can still explore — no manual paste.

Pitfalls
  • Rule too narrow → LLM can't understand the callers of your module — Include the interfaces/types of neighboring modules; use outlines for the rest
  • Rule too broad → token budget blown — Start broad, watch lc_preview, and tighten until you're under your client's limits
Combine with: filesystem
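
The rule file syntax isn't shown on this page, so the snippet below is a purely hypothetical illustration of what 'flt-auth' might capture: the include globs come from step 1, and the ignore patterns echo the 'narrow globs and add ignore patterns' advice under Troubleshooting. Check the README for the real format.

{
  "_comment": "hypothetical sketch only; not the documented rule syntax",
  "name": "flt-auth",
  "type": "filter",
  "include": [
    "src/auth/**",
    "src/middleware/auth*.ts"
  ],
  "ignore": [
    "**/*.test.ts",
    "**/dist/**"
  ]
}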

Give an LLM a structural map of a huge codebase

👤 Engineers onboarding to unfamiliar monorepos · ⏱ ~20 min · intermediate

When to use it: A 100k+ LOC repo; you need orientation, not all the code.

Steps
  1. Generate outline
    lc_outlines for the whole repo — classes, top-level functions, exports per file. Skip bodies.
    → Skeletal outline with a few thousand lines
  2. Ask orientation questions
    Given the outline, where's the entry point, where's routing defined, and which files hold the data-layer abstractions?
    → Architectural answers
  3. Drill in on one area
    Show me the full contents of the three files that define routing via lc_missing.
    → Specific files expand into context

Result: A guided tour without ever pasting the whole repo.

Attach project style rules to every prompt

👤 Teams with specific conventions · ⏱ ~10 min · beginner

When to use it: You want Claude to always know 'we use pytest, not unittest' without repeating it every time.

Steps
  1. Write a style rule
    Create sty-python rule: 'pytest only, type hints required, black formatting'.
    → Rule saved
  2. Apply automatically
    Use prm-default which composes flt-current + sty-python + ins-standards for every context.
    → Rule auto-attached

Result: Per-task conventions enforced without manual boilerplate.

Combine with: drift

Combinations

Pair it with other MCPs for a 10x effect

llm-context-py + drift

drift records conventions; llm-context pushes only relevant files + style rules per task

Load drift conventions for this repo, then use lc with rule flt-auth + sty-ts to review the auth module.
llm-context-py + filesystem

After reviewing, filesystem applies edits

Based on the llm-context review, use filesystem to apply the suggested edits to src/auth/.

Tools

What this MCP exposes

Tool · Inputs · When to call · Cost
lc_outlines · rule?: str · Orient on large codebases · free (local)
lc_preview · rule: str · Before shipping context, verify scope · free
lc_missing · path: str · LLM calls this mid-conversation to request a file · free
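
Under the hood these are ordinary MCP tools, so your client invokes them with standard JSON-RPC tools/call requests. An illustrative call to lc_preview, using the rule argument from the Inputs column and the flt-auth rule from the use case above:

{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "lc_preview",
    "arguments": {
      "rule": "flt-auth"
    }
  }
}

You never write this by hand; it's what the client sends when the model decides to check scope before shipping context.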

Cost and limits

Execution cost

API quota: None
Tokens per call: Depends entirely on your rule scope — that's the point
Monetary cost: Free, open source
Tip: Always run lc_preview before lc_outlines or shipping context — one second of preview saves thousands of tokens.

Security

Permissions, secrets, scope

Credential storage: None
Data egress: Only to whichever LLM client/provider you pipe context into

Troubleshooting

Common errors and fixes

No rules found

Run lc-init in the repo root to create the .lc/ scaffolding.

Check: ls .lc/
lc_preview token count surprisingly high

Your filter is too loose or includes generated files. Narrow globs and add ignore patterns.

Check: lc_preview again
MCP tool not available

Use uvx --from llm-context lc-mcp in your MCP server config.

Check: claude mcp list
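
If you hit the 'MCP tool not available' case, the fix above maps onto the same mcpServers shape used in the Install section. A sketch for Claude Desktop-style configs:

{
  "mcpServers": {
    "llm-context-py": {
      "command": "uvx",
      "args": [
        "--from",
        "llm-context",
        "lc-mcp"
      ]
    }
  }
}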

Alternatives

llm-context.py vs others

Alternative · When to use it · Trade-off
repomix / ai-digest · You want a single-file dump, not MCP tools · No interactive lc_missing; static snapshot
filesystem MCP · You want raw file access · No rule-based selection or outlines
drift · You want persistent convention memory, not per-task file bundling · Different problem entirely

More

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse 400+ MCP servers and Skills