AI Tools · 14 min read · May 5, 2026

geo-seo-claude: Optimize Your Website for AI Search with Claude Code

A complete guide to geo-seo-claude by Zubair Trabzada — the GEO-first SEO skill for Claude Code. Covers AI citability scoring, crawler access, llms.txt generation, brand authority, schema markup, and PDF reporting.

Tags: GEO SEO · Claude Code · AI Search · SEO · Citability · llms.txt · Schema Markup · E-E-A-T · Open Source

Neel Shah · Tech Lead · Senior Data Engineer · Ottawa

Traditional SEO is about ranking in Google. Generative Engine Optimization — GEO — is about getting cited by ChatGPT, Claude, Perplexity, and Gemini. These are different problems, and most websites are not built for both.

geo-seo-claude by Zubair Trabzada is an open-source skill plugin for Claude Code that runs a full GEO + SEO audit on any website using 13 specialized sub-skills and 5 parallel analysis agents. You invoke it from the command line, it fetches and analyzes your site, and it outputs actionable scores with a professional PDF report you can hand directly to a client.

This guide covers what the tool does, how to install it, and what each sub-skill actually checks.


Why GEO is not optional anymore

The numbers that justify this entire category of tooling:

Metric                                                 | Value
-------------------------------------------------------|----------------
AI-referred traffic growth (YoY)                       | +527%
Conversion rate vs organic search                      | 4.4× higher
Projected drop in traditional search traffic by 2028   | 50% (Gartner)

AI search engines do not crawl and rank pages the way Google does. They draw on training data and live web retrieval, then synthesize answers. To appear in those answers, your content needs to be citable: structured, factual, self-contained, and accessible to AI crawlers. Most websites fail on at least three of those criteria.

geo-seo-claude diagnoses exactly where you fail and tells you how to fix it.


What the tool is

geo-seo-claude is a Claude Code skill plugin — a collection of markdown files that extend Claude Code’s slash commands. Once installed, you run /geo audit and the tool orchestrates 13 sub-skills and 5 parallel agents to analyze a target website end-to-end.

The output is a composite GEO score, section-by-section breakdowns, and either a markdown report or a full PDF with charts, score gauges, and a prioritized action plan.

Repository at a glance

geo-seo-claude/
├── skills/          # 13 specialized sub-skill markdown files
├── agents/          # 5 parallel analysis agents
├── examples/        # Sample report outputs
├── install.sh       # macOS/Linux one-command installer
└── install-win.sh   # Windows (Git Bash) installer

License: MIT
Requires: Python 3.8+, Claude Code CLI, Git


Installation

macOS / Linux

curl -fsSL https://raw.githubusercontent.com/zubair-trabzada/geo-seo-claude/main/install.sh | bash

Windows (Git Bash)

curl -fsSL https://raw.githubusercontent.com/zubair-trabzada/geo-seo-claude/main/install-win.sh | bash

Manual install

git clone https://github.com/zubair-trabzada/geo-seo-claude.git
cd geo-seo-claude
bash install.sh

The installer copies the skill and agent files into ~/.claude/skills/geo/ and sets up a dedicated Python virtual environment at ~/.claude/skills/geo/.venv/. Your system Python is never touched — the tool references the venv directly.

After installation, restart Claude Code and the /geo commands are available in any project.


The 12 primary commands

Command           | What it does
------------------|------------------------------------------------------------
/geo audit        | Full GEO + SEO audit with all parallel agents
/geo citability   | Scores content blocks for AI citation readiness
/geo crawlers     | Checks robots.txt for AI crawler access
/geo llmstxt      | Analyzes and generates an llms.txt file
/geo brands       | Scans brand presence across AI-cited platforms
/geo platform     | Platform-specific optimization (ChatGPT, Perplexity, etc.)
/geo schema       | Structured data / JSON-LD analysis
/geo technical    | Core Web Vitals and technical SEO foundations
/geo content      | Content quality and E-E-A-T scoring
/geo report       | Generates a client-ready markdown report
/geo report-pdf   | Generates a professional PDF with charts
/geo compare      | Compares two URLs or two audit snapshots

Each command can run standalone or as part of the full /geo audit orchestration. Running audit fires all 13 sub-skills in parallel using Claude’s subagent system, so the full analysis completes in roughly the same time as a single sub-skill.


Deep dive: the 13 sub-skills

1. geo-audit — the orchestrator

The master sub-skill. It spawns 5 parallel analysis agents — covering AI visibility, platform optimization, technical SEO, content quality, and schema — then aggregates their scores into a composite GEO score out of 100. This is what /geo audit calls.

2. geo-citability — can AI models quote you?

This is the most important sub-skill for pure GEO visibility. It analyzes your content blocks against the specific patterns that AI models tend to cite.

Research from the tool’s documentation shows that optimal AI-cited passages share these characteristics:

  • 134–167 words — long enough to be substantive, short enough to be reproduced
  • Self-contained — the passage makes sense without surrounding context
  • Fact-rich — includes statistics, named entities, dates, or verifiable claims
  • Question-answering — directly responds to a likely user query

geo-citability scores each major content block on these dimensions and flags which sections need restructuring.
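The word-count and fact-density heuristics above are simple enough to prototype yourself. The sketch below is an illustrative scorer, not geo-citability's actual implementation; the function name and the crude fact/entity regexes are my own assumptions:

```python
import re

def citability_signals(block: str) -> dict:
    """Score one content block against simple GEO heuristics.

    Illustrative only -- not geo-citability's real algorithm.
    """
    word_count = len(block.split())
    # Fact signals: integers, decimals, and percentages (years, stats, dates)
    fact_signals = len(re.findall(r"\d+(?:\.\d+)?%?", block))
    # Very rough named-entity proxy: capitalized word runs
    named_entities = len(re.findall(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b", block))
    return {
        "word_count": word_count,
        "in_optimal_range": 134 <= word_count <= 167,  # cited-passage sweet spot
        "fact_signals": fact_signals,
        "named_entities": named_entities,
    }

block = ("In 2026, AI-referred traffic grew 527% year over year, "
         "and Gartner projects a 50% drop in traditional search traffic by 2028.")
print(citability_signals(block))
```

A real scorer would also test self-containment (e.g. unresolved pronouns at block start) and whether the block opens by answering a likely query, which regexes alone cannot capture.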

3. geo-crawlers — are AI bots allowed in?

Many websites block AI crawlers without realizing it, either through overly broad robots.txt rules or through blanket rules written before today's AI crawlers existed.

The sub-skill checks robots.txt for 14+ AI crawler user-agents including:

GPTBot          (OpenAI)
ClaudeBot       (Anthropic)
PerplexityBot   (Perplexity)
Google-Extended (Google AI Overview)
Applebot        (Apple)
CCBot           (Common Crawl)

It outputs per-crawler allow/deny status and generates the corrected robots.txt rules you need to add.
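You can reproduce the core allow/deny check with Python's standard library. This is a minimal sketch using the six user-agents listed above (the real sub-skill checks 14+ and also generates corrected rules):

```python
from urllib.robotparser import RobotFileParser

# AI crawler user-agents named in this article; the tool checks 14+.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot",
               "Google-Extended", "Applebot", "CCBot"]

def crawler_access(robots_txt: str, url: str = "/") -> dict:
    """Return per-crawler allow/deny status for a robots.txt body."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, url) for bot in AI_CRAWLERS}

sample = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(crawler_access(sample))  # GPTBot denied, the rest allowed
```

In production you would fetch https://yourdomain.com/robots.txt and run every important URL path through the same check, since rules can differ per path.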

4. geo-llmstxt — the emerging AI site map standard

llms.txt is a new voluntary standard (analogous to robots.txt but for guiding AI systems rather than blocking them). An llms.txt file at your domain root tells AI crawlers what your site is about, which pages are authoritative, and how content is organized.

geo-llmstxt checks whether your site has this file, evaluates its quality if it exists, and generates a correct version for your domain structure.
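To make the format concrete, here is a minimal illustrative llms.txt following the proposal's shape (an H1 title, a blockquote summary, then H2 sections of annotated links); the company name, domain, and pages are placeholders:

```
# Example Company

> Example Company builds data tooling for mid-size engineering teams.

## Docs

- [Getting started](https://example.com/docs/start): installation and first steps
- [API reference](https://example.com/docs/api): endpoints and authentication

## Blog

- [GEO basics](https://example.com/blog/geo): how AI search differs from Google
```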

5. geo-brand-mentions — authority where AI looks

AI language models develop knowledge of brands primarily through the platforms they are trained on. A brand with high mentions on YouTube, Reddit, Wikipedia, LinkedIn, and Quora is far more likely to be recommended by an AI than a brand that exists only on its own domain.

geo-brand-mentions reports that brand presence on AI-cited platforms correlates 3× more strongly with AI visibility than traditional backlinks. The sub-skill scans:

  • YouTube (video content and channel authority)
  • Reddit (community discussions and recommendations)
  • Wikipedia (encyclopedia presence)
  • LinkedIn (professional authority)
  • Quora (Q&A citations)
  • 7+ additional platforms

Output: per-platform presence score and a prioritized list of platforms to target.

6. geo-platform-optimizer — different AI engines, different requirements

ChatGPT, Perplexity, Google AI Overviews, and Claude each retrieve and surface information differently. What works well for one may not translate to another.

geo-platform-optimizer generates platform-specific recommendations for each major AI search engine, covering content format preferences, authority signals each platform weights most heavily, and technical requirements unique to that engine.

7. geo-schema — structured data for AI discovery

JSON-LD schema markup is one of the clearest signals a site can send to both traditional search engines and AI systems. The sub-skill audits existing schema implementation against current standards, checks for required fields that are missing, and generates corrected markup for:

  • Organization
  • Article / BlogPosting
  • BreadcrumbList
  • FAQPage
  • HowTo
  • Product
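
As a concrete illustration, a minimal Article JSON-LD block of the kind the sub-skill generates might look like the following; every name, URL, and date is a placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Optimize Your Website for AI Search",
  "author": {
    "@type": "Person",
    "name": "Jane Doe"
  },
  "datePublished": "2026-05-05",
  "dateModified": "2026-05-05",
  "publisher": {
    "@type": "Organization",
    "name": "Example Co",
    "logo": { "@type": "ImageObject", "url": "https://example.com/logo.png" }
  },
  "mainEntityOfPage": "https://example.com/blog/geo"
}
```

This is embedded in the page inside a `<script type="application/ld+json">` tag.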

8. geo-technical — the SEO foundations

GEO does not replace technical SEO — it builds on top of it. geo-technical checks Core Web Vitals, crawlability, mobile usability, canonical tags, HTTPS, and page speed. A site that fails on fundamentals will score poorly on AI visibility regardless of content quality.

9. geo-content — E-E-A-T and content quality

Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) is the framework Google uses to evaluate content quality, and AI models have internalized similar criteria through their training. geo-content evaluates your content against these signals: author bylines and credentials, primary source citations, factual accuracy signals, content freshness, and depth versus thin content.

10. geo-report — client-ready markdown

Aggregates all sub-skill outputs into a structured markdown report organized by score, priority, and implementation effort. Useful for internal documentation or developer handoff.

11. geo-report-pdf — professional deliverables

Generates a PDF version of the report with visual components:

  • Score gauge (0–100) with color bands
  • Bar charts per sub-skill category
  • Platform readiness visualization
  • Color-coded priority tables
  • Prioritized action plan with effort estimates

Designed to be handed directly to a client or uploaded to a project management system.

12. geo-prospect — business development

Analyzes a prospective client’s website and generates a gap report formatted as a sales document. Useful for agencies pitching GEO services.

13. geo-compare — before/after tracking

Compares two audit snapshots or two URLs side by side. Useful for tracking improvement over time or benchmarking against a competitor.


What an audit output looks like

A typical /geo audit run against a mid-size blog produces output structured like this:

GEO Score: 61/100

AI Visibility:       58/100  ⚠
Citability:          55/100  ⚠  — 3 of 12 content blocks are citation-ready
Crawlers:            90/100  ⚠  — ClaudeBot and PerplexityBot are blocked
llms.txt:             0/100  ✗  — No llms.txt found
Brand Mentions:      45/100  ⚠  — No YouTube or Reddit presence detected
Platform-Specific:   62/100  ⚠

Technical SEO:       78/100  ✓
Schema Markup:       40/100  ⚠  — Missing Article schema on 8 posts
Content (E-E-A-T):  70/100  ✓

Top 3 priorities:
1. Unblock ClaudeBot and PerplexityBot in robots.txt  [15 min]
2. Create llms.txt  [30 min]
3. Restructure 9 content blocks for citability  [2–4 hours]
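
Priority 1 is typically a two-line addition per crawler. An illustrative robots.txt fix (merge with your existing rules rather than replacing them):

```
User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```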

The PDF version renders this as a visual dashboard with implementation effort estimates for each recommendation.


Who this is for

Agencies: The tool is explicitly positioned for GEO consulting engagements at $2K–$12K/month. The PDF report sub-skill produces client deliverables without additional formatting work.

Marketing teams: Run audits on your own domain quarterly to track AI visibility as AI search share grows.

Content creators and bloggers: The citability sub-skill alone is worth the installation time — most content is structured for human readers, not AI citation, and the gap is wider than most people expect.

Developers building SEO tooling: The modular sub-skill architecture is clean and extensible under MIT license. Each sub-skill is a standalone markdown file with a well-defined interface.


The broader context: GEO vs SEO

It is worth being clear about what this tool is and what it is not.

Traditional SEO optimizes for ranking algorithms — PageRank derivatives, keyword matching, backlink graphs. The signals are well-understood and the tooling is mature.

GEO optimizes for a different output: citation in generative answers. The signals are less settled, but the early research (including the work reflected in this tool) points toward a consistent set of factors: citability structure, AI crawler access, platform authority, and schema-assisted discovery.

The two disciplines overlap significantly. A technically sound website with high E-E-A-T content and correct schema markup will perform reasonably well on both. But the specifics differ enough that dedicated GEO tooling — like geo-seo-claude — is genuinely useful rather than redundant.


Getting started

The repository is at github.com/zubair-trabzada/geo-seo-claude. Install takes under five minutes on any system with Python 3.8+ and the Claude Code CLI.

# Install (in your shell)
curl -fsSL https://raw.githubusercontent.com/zubair-trabzada/geo-seo-claude/main/install.sh | bash

# Run your first audit (inside a Claude Code session)
/geo audit https://yourwebsite.com

The full audit on a typical website completes in 2–4 minutes. The PDF report is saved to your current working directory.

Frequently asked questions

What is geo-seo-claude?

geo-seo-claude is an open-source Claude Code skill plugin that audits websites for generative engine optimization, technical SEO, AI crawler access, structured data, citability, and platform readiness.

How is GEO different from traditional SEO?

Traditional SEO focuses on search ranking, while GEO focuses on making content accessible, structured, factual, and citable enough to appear in AI-generated answers from systems such as ChatGPT, Claude, Perplexity, and Gemini.

Which technical signals matter most for AI search visibility?

Important technical signals include AI crawler access, llms.txt, canonical URLs, structured JSON-LD schema, sitemap coverage, crawlable server-rendered pages, and clearly extractable long-form content.