
mcp-client-for-ollama

By jonigl · jonigl/mcp-client-for-ollama

TUI client that connects local Ollama models to any MCP server — agent mode, multi-server, human-in-the-loop, all from your terminal.

mcp-client-for-ollama (ollmcp) is a terminal UI that bridges Ollama's local LLMs with MCP servers. It supports agent mode (iterative tool execution), multi-server connections (STDIO/SSE/HTTP), human-in-the-loop approval, model switching, thinking mode, and performance metrics. For developers who want MCP tool-use without paying for cloud APIs.


Installation

Choose your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "client-for-ollama": {
      "command": "uvx",
      "args": [
        "mcp-client-for-ollama"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Save, then restart the app.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "client-for-ollama": {
      "command": "uvx",
      "args": [
        "mcp-client-for-ollama"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project-level config takes precedence over the global one.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "client-for-ollama": {
      "command": "uvx",
      "args": [
        "mcp-client-for-ollama"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then choose "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "client-for-ollama": {
      "command": "uvx",
      "args": [
        "mcp-client-for-ollama"
      ],
      "_inferred": true
    }
  }
}

Same format as Claude Desktop. Restart Windsurf to apply.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "client-for-ollama",
      "command": "uvx",
      "args": [
        "mcp-client-for-ollama"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.
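Continue's array form differs only in shape, so an existing Claude-style entry can be converted mechanically. A small sketch using `python3` (the input is the example config from this page; the transform is an illustration, not part of ollmcp or Continue):

```shell
# Convert a Claude-style mcpServers map into Continue's array-of-objects form.
echo '{"mcpServers":{"client-for-ollama":{"command":"uvx","args":["mcp-client-for-ollama"]}}}' |
python3 -c '
import json, sys
cfg = json.load(sys.stdin)
# Each map key becomes a "name" field on its server object.
servers = [{"name": name, **spec} for name, spec in cfg["mcpServers"].items()]
print(json.dumps({"mcpServers": servers}, indent=2))
'
```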

~/.config/zed/settings.json
{
  "context_servers": {
    "client-for-ollama": {
      "command": {
        "path": "uvx",
        "args": [
          "mcp-client-for-ollama"
        ]
      }
    }
  }
}

Add it under context_servers. Zed hot-reloads the config on save.

claude mcp add client-for-ollama -- uvx mcp-client-for-ollama

A one-line command. Verify with claude mcp list; remove with claude mcp remove.

Use cases

Practical recipes: mcp-client-for-ollama

How to use MCP tools with local LLMs for free

👤 Developers who want MCP functionality without cloud API costs · ⏱ ~15 min · beginner

When to use: You have MCP servers configured but want to use them with a local model instead of Claude or GPT.

Prerequisites
  • Ollama installed and running — ollama.com — install, then ollama pull llama3.2:3b
  • ollmcp installed — pip install ollmcp
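The prerequisites above can be sanity-checked from the shell before launching. A minimal sketch, assuming Ollama's default port 11434; each check prints a status instead of aborting:

```shell
# Check each prerequisite; prints yes/no per line rather than failing hard.
ollama_ok=$(curl -sf --max-time 2 http://localhost:11434/api/tags >/dev/null && echo yes || echo no)
model_ok=$(ollama list 2>/dev/null | grep -q 'llama3.2:3b' && echo yes || echo no)
ollmcp_ok=$(command -v ollmcp >/dev/null && echo yes || echo no)
printf 'ollama running: %s\nmodel pulled:   %s\nollmcp on PATH: %s\n' \
  "$ollama_ok" "$model_ok" "$ollmcp_ok"
```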
Steps
  1. Launch with auto-discovery
    ollmcp --auto-discovery --model llama3.2:3b
    → TUI launches, shows discovered MCP servers from Claude config
  2. Test a tool call
    List the files in my current directory.
    → Model calls the filesystem MCP tool and returns results
  3. Enable agent mode for multi-step tasks
    Type /agent to enable agent mode, then: 'Find all TODO comments in this project and summarize them.'
    → Model iterates: searches files, reads matches, produces summary

Result: Working MCP tool-use powered entirely by a local model — zero API cost.

Pitfalls
  • Small models (3B) struggle with complex tool-use chains — use 7B+ models for agent mode; 3B is fine for single tool calls
  • Model calls the wrong tool or wrong params — enable human-in-the-loop (/hil) to catch and correct bad tool calls
Use with: filesystem

Connect to multiple MCP servers for a cross-tool workflow

👤 Power users running several MCP servers · ⏱ ~20 min · intermediate

When to use: You want a local LLM to orchestrate across filesystem, GitHub, and other MCP servers in one session.

Prerequisites
  • MCP server config JSON — Create a JSON file with all your server definitions (same format as Claude Desktop config)
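For reference, a servers JSON in the Claude Desktop shape might look like the following. The two entries (`filesystem` and `github` via `npx`) are illustrative examples, not something ollmcp prescribes:

```shell
# Write an example servers file; both entries are hypothetical placeholders.
cat > mcp-servers.json <<'EOF'
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/projects"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
EOF
# Sanity-check that the file parses as JSON before handing it to ollmcp.
python3 -m json.tool mcp-servers.json >/dev/null && echo "config is valid JSON"
```

Pass the file with --servers-json as in step 1 below.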
Steps
  1. Launch with config
    ollmcp --servers-json ~/mcp-servers.json --model qwen2.5:7b
    → All servers connected, tools listed
  2. Use /tools to manage
    Type /tools to see all available tools across servers. Disable any you don't need.
    → Tool list with enable/disable toggles
  3. Run a cross-server task
    Read my project's README.md, then search GitHub for similar projects and compare their approaches.
    → Model chains filesystem read + GitHub search

Result: A local LLM orchestrating tools from multiple MCP servers in one conversation.

Pitfalls
  • Too many tools confuse smaller models — disable unused tools with /tools; fewer tools means better tool selection
Use with: github

Use human-in-the-loop to safely test MCP servers

👤 Developers building or testing new MCP servers · ⏱ ~15 min · beginner

When to use: You're developing an MCP server and want to test tool calls with approval before execution.

Steps
  1. Enable HIL mode
    ollmcp -s ./my-mcp-server.py --model llama3.2:3b then type /hil to enable
    → Human-in-the-loop enabled
  2. Trigger tool calls
    Ask the model to use your server's tools. Each call pauses for your approval.
    → Tool call preview shown, waiting for y/n
  3. Review and iterate
    Reject bad calls, approve good ones. Check /show-metrics for timing.
    → Approved calls execute; rejected ones prompt retry

Result: A safe testing loop for MCP server development with full visibility into every tool call.

Combinations

Combine with other MCPs for 10x the impact

client-for-ollama + filesystem

Local Ollama model reads and edits files via filesystem MCP — fully offline coding assistant

ollmcp --auto-discovery --model qwen2.5:7b then: 'Read src/main.py and add error handling to the parse function.'
client-for-ollama + github

Use a local model to search GitHub repos and create issues without cloud API costs

Connect GitHub MCP to ollmcp, then: 'Search our org for repos with no CI config and create tracking issues.'

Costs and limits

Operating costs

API quota
No external API quota — runs on local Ollama. MCP server limits apply per-server.
Tokens per call
Depends on the Ollama model's context window (typically 2k-128k)
Price
Completely free. Ollama is free. ollmcp is MIT-licensed. You only pay for electricity.
This is the zero-cost option for MCP tool-use. Use smaller models (3B) for simple tasks, larger (7B+) for agent mode.

Security

Permissions, secrets, and blast radius

Credential storage: MCP server credentials are configured in the servers JSON file or via environment variables — same as the Claude Desktop config
Data egress: Ollama runs locally. MCP servers egress to their own backends. By default no data leaves your machine unless an MCP server sends it.

Troubleshooting

Common errors and fixes

Connection refused to Ollama

Ensure Ollama is running: ollama serve. Default port is 11434. Use --host to specify a custom host if needed.

Verify: curl http://localhost:11434/api/tags
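The verification curl can be wrapped so it degrades to a readable status instead of a raw connection error; a small sketch assuming the default port:

```shell
# Probe the Ollama API; prints "ollama: up" or "ollama: down".
status=$(curl -sf --max-time 2 http://localhost:11434/api/tags >/dev/null && echo up || echo down)
echo "ollama: $status"
```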
Model doesn't call tools / ignores MCP

Not all models support tool-use well. Use qwen2.5:7b or llama3.1:8b which have good tool-use training. Avoid tiny models for complex chains.

Verify: Test with a simple single-tool prompt first
MCP server fails to connect

Check your servers JSON config. Ensure the command path is absolute and the server binary/script exists. Run the server command manually to see errors.

Verify: Run the MCP server command directly in a terminal
Auto-discovery finds no servers

ollmcp looks for Claude Desktop config at the standard path. On macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

Verify: ls ~/Library/Application\ Support/Claude/claude_desktop_config.json
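Because the macOS path contains a space, it is easy to mistype. A sketch that checks for the file and suggests the fallback flag (path taken from above):

```shell
# Check the macOS Claude Desktop config path that auto-discovery reads.
p="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
if [ -f "$p" ]; then
  echo "found: $p"
else
  echo "missing: $p -- pass --servers-json <file> instead"
fi
```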

Alternatives

How mcp-client-for-ollama compares

Alternative — When to use — Trade-offs
Claude Desktop — You want the best MCP experience and are willing to pay for the Claude API — Cloud-based, costs money, but much better tool-use than local models
oterm — You want an Ollama TUI without an MCP focus — No MCP integration; just chat

See also

Resources

📖 Read the official README on GitHub

🐙 View open issues

🔍 Browse all 400+ MCP servers and Skills