
Agentic-SEO-Skill

by Bhanunamikaze · Bhanunamikaze/Agentic-SEO-Skill

LLM-first SEO with 16 sub-skills, 10 specialist agents, 33 evidence-collector scripts — works in Antigravity, Codex, and Claude Code.

An evidence-first SEO skill: collect page data via utility scripts, analyze with an LLM that must cite proofs, apply confidence labels, prioritize by impact, and produce structured action plans. Enforces current Google standards (INP over FID, full E-E-A-T). Notable: a GitHub-analyst agent audits repo-hosted sites and writes GITHUB-SEO-REPORT.md.


Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "agentic-seo-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Bhanunamikaze/Agentic-SEO-Skill",
        "~/.claude/skills/Agentic-SEO-Skill"
      ],
      "_inferred": true
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "agentic-seo-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Bhanunamikaze/Agentic-SEO-Skill",
        "~/.claude/skills/Agentic-SEO-Skill"
      ],
      "_inferred": true
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "agentic-seo-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Bhanunamikaze/Agentic-SEO-Skill",
        "~/.claude/skills/Agentic-SEO-Skill"
      ],
      "_inferred": true
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "agentic-seo-skill": {
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Bhanunamikaze/Agentic-SEO-Skill",
        "~/.claude/skills/Agentic-SEO-Skill"
      ],
      "_inferred": true
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "agentic-seo-skill",
      "command": "git",
      "args": [
        "clone",
        "https://github.com/Bhanunamikaze/Agentic-SEO-Skill",
        "~/.claude/skills/Agentic-SEO-Skill"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "agentic-seo-skill": {
      "command": {
        "path": "git",
        "args": [
          "clone",
          "https://github.com/Bhanunamikaze/Agentic-SEO-Skill",
          "~/.claude/skills/Agentic-SEO-Skill"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add agentic-seo-skill -- git clone https://github.com/Bhanunamikaze/Agentic-SEO-Skill ~/.claude/skills/Agentic-SEO-Skill

One-liner. Verify with claude mcp list. Remove with claude mcp remove.
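
The client configs above are marked `_inferred` — they appear to shell out to git rather than launch a server process, since this is a skill rather than a long-running MCP binary. A manual install is therefore just a clone into the skills directory. The `~/.claude/skills` path is taken from the snippets above and may differ per client:

```shell
# Manual install sketch, assuming the ~/.claude/skills layout used in the
# configs above. The clone itself is commented out because it needs network access.
SKILL_DIR="$HOME/.claude/skills/Agentic-SEO-Skill"
mkdir -p "$(dirname "$SKILL_DIR")"
# git clone https://github.com/Bhanunamikaze/Agentic-SEO-Skill "$SKILL_DIR"
```

After cloning, restart your client (or re-run claude mcp list) to confirm the skill is picked up.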

Use Cases

Real-world ways to use Agentic-SEO-Skill

Run an evidence-backed SEO audit with confidence labels

👤 SEO analysts tired of 'trust me' recommendations · ⏱ ~60 min · intermediate

When to use: You want findings you can defend to a skeptical engineering lead.

Flow
  1. Trigger the full audit
    Use agentic-seo-skill on https://site.com — full evidence-backed audit.
    → Scripts collect raw data (meta tags, schema, link graph, CWV); agents analyze
  2. Review findings with proofs
    Show the top 10 issues with their supporting evidence and confidence labels.
    → Each finding has a literal quote/data point behind it
  3. Prioritized action plan
    Order by impact × effort.
    → Ranked plan with numeric rationale

Outcome: An audit where every finding is citable.

Pitfalls
  • Low-confidence findings treated as high priority — respect the labels and skip 'low' items until high/medium are done
Combine with: firecrawl

Audit a GitHub-hosted docs site with the GitHub-analyst agent

👤 Dev-tool teams with docs on GitHub Pages · ⏱ ~40 min · intermediate

When to use: Your docs live in a repo and you want an SEO audit tied to the repo structure.

Flow
  1. Point the GitHub agent at the repo
    agentic-seo-skill — audit github.com/acme/docs site. Output GITHUB-SEO-REPORT.md.
    → Report in the repo format with per-file recommendations
  2. Open a PR with fixes
    Turn the high-confidence findings into a PR.
    → PR with concrete diffs

Outcome: An SEO-improved docs site with a trackable PR.

Pitfalls
  • PR touches too many files at once — Split by finding type (meta vs content vs schema)
Combine with: github

Optimize pages for Perplexity / ChatGPT / AI Overviews citations

👤 Content teams losing clicks to AI summaries · ⏱ ~30 min · intermediate

When to use: You want to be the cited source, not just another organic result.

Flow
  1. Run the GEO/AEO sub-skill
    agentic-seo-skill — GEO audit on https://site.com/post.
    → Findings tied to snippet-friendliness, entity clarity, citation signals
  2. Apply and verify
    Apply recommendations and re-verify.
    → Score deltas with evidence

Outcome: Pages restructured for AI-citation pickup.

Pitfalls
  • Over-optimization hurts human readability — The content-quality agent catches robotic-sounding edits

Combinations

Pair with other MCPs for 10× leverage

agentic-seo-skill + firecrawl

Firecrawl does the JS-rendered crawl; agentic-seo interprets

Crawl with firecrawl, pipe rendered HTML into agentic-seo-skill for analysis.
agentic-seo-skill + github

Open PRs directly from audit findings

For the high-confidence findings, open a PR in the repo.
agentic-seo-skill + claude-seo-skill

Use the /seo slash commands for fast runs, agentic-seo for evidence-backed depth

First do /seo audit for the quick view, then agentic-seo-skill for the deep defensible audit.

Tools

What this MCP exposes

Tool · Inputs · When to call · Cost
Technical SEO agent · URL · Baseline tech audit · 0
Schema agent · URL · Structured data work · 0
Performance agent · URL · CWV review · 0
GitHub-analyst agent · repo URL · Audits on GitHub-hosted sites · 0
Verification agent · prior findings · Before publishing audit · 0

Cost & Limits

What this costs to run

API quota: None for the skill
Tokens per call: 10–30k for a full audit — evidence collection uses scripts, not tokens, for the raw data
Monetary: Free — the skill runs locally
Tip: Utility scripts are Python — run them as preflight outside the LLM loop to save tokens.
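
The preflight tip can be sketched as a small wrapper. The scripts/ subdirectory and the per-script CLI (URL argument in, JSON on stdout) are assumptions about the repo layout — check the repo for the actual script names and interfaces:

```shell
# Run every collector script up front and cache the raw evidence to disk,
# so the LLM loop reads saved JSON instead of re-collecting on each prompt.
# The scripts/ path and URL-argument convention are assumptions.
run_preflight() {
  local skill_dir="$1" out_dir="$2" url="$3"
  mkdir -p "$out_dir"
  for script in "$skill_dir"/scripts/*.py; do
    [ -e "$script" ] || continue                      # glob matched nothing
    local name
    name=$(basename "$script" .py)
    python3 "$script" "$url" > "$out_dir/$name.json" 2>/dev/null || true
  done
}
```

Usage: run_preflight ~/.claude/skills/Agentic-SEO-Skill ./evidence https://site.com, then point the audit prompt at the ./evidence directory.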

Security

Permissions, secrets, blast radius

Credential storage: No credentials — scripts hit public URLs only
Data egress: Only to the sites you audit

Troubleshooting

Common errors and fixes

Utility scripts fail to run

Check Python version and required packages; install dependencies from the skill's requirements file.

Verify: python --version
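
That fix can be wrapped in a quick sanity check. The 3.8 version floor and the requirements.txt location are assumptions; the skill's README is authoritative:

```shell
# Confirm a usable Python before running the collector scripts.
check_python() {
  python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 8) else 1)' \
    || { echo "Python 3.8+ required" >&2; return 1; }
}
# Then install dependencies (path assumed):
# python3 -m pip install -r ~/.claude/skills/Agentic-SEO-Skill/requirements.txt
```
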

Confidence labels all 'low'

Scripts couldn't collect enough raw evidence — check network access and JS rendering.

GitHub-analyst can't access repo

Set a PAT for private repos; public repos should work unauthenticated.
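
For private repos, a common convention is to supply the token via the GITHUB_TOKEN environment variable — whether this skill reads that exact variable is an assumption, so verify against its README:

```shell
# Placeholder token; use a fine-grained PAT with read-only repo scope.
export GITHUB_TOKEN="ghp_placeholder"
# Optional sanity check (requires network): an authenticated GitHub API
# request with a valid token should return HTTP 200.
# curl -s -o /dev/null -w '%{http_code}\n' \
#   -H "Authorization: Bearer $GITHUB_TOKEN" https://api.github.com/user
```
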

Alternatives

Agentic-SEO-Skill vs others

Alternative · When to use it instead · Tradeoff
claude-seo-skill · You want slash-command UX and enterprise reporting · Claude-only; no multi-agent evidence framework
seo-geo-claude-skill · You want a phase-based, lighter library · Less evidence-first methodology

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues

🔍 Browse all 400+ MCP servers and Skills