The three-layer audit is the diagnostic framework Kodiac built the platform around. Most AI visibility tools scan a single layer, usually AI output, and tell you to write more content. The three-layer audit scans output, input, and ecosystem, so you can see the actual root cause of your AI visibility gaps.
Each layer answers a different question. Each layer has its own metrics, sources, and interventions. None of them work in isolation. The three together form the only complete diagnostic for AI-mediated brand discovery.
Output (Layer 01). How ChatGPT, Perplexity, Gemini, Claude, and Copilot describe you right now. Visibility scored 0–100 per system. Inaccuracies flagged. Competitive share of voice tracked weekly.
Input (Layer 02). Per-page AI-readiness scored across 10 dimensions. Crawlability, structured data, semantic clarity, factual density, brand entity clarity. Fix list ranked by predicted score uplift.
Ecosystem (Layer 03). 200+ third-party sources monitored. Per-source AI weight 0–10. Reddit, Wikipedia, G2, news. The 80–90% of brand representation that owned content can never reach.
The output layer is the surface most teams already worry about: what a buyer actually sees when they ask an AI assistant about your category. Kodiac monitors all five major AI systems continuously, scores each on a 0–100 visibility scale, flags accuracy issues, and benchmarks share of voice against named competitors.
ChatGPT (OpenAI), Perplexity, Gemini and Google AI Overviews, Anthropic Claude, Microsoft Copilot. Single-engine monitoring is no longer sufficient because AI search has fragmented permanently.
Composite score per system, plus an aggregate across all five. Trended weekly. Industry benchmarked. Designed for board-ready reporting that holds up to scrutiny.
Verifies AI claims against your Source of Truth registry. Flags critical, moderate, and minor mismatches. Pricing inaccuracies and product claims surfaced for immediate intervention.
Drill into the actual AI response for any tracked prompt. See which sources AI cited, what claims it made, and what changed since the last audit cycle.
The input layer measures whether AI crawlers can read, understand, and cite your owned content. Kodiac scores every page across 10 dimensions of AI-readiness, identifies structured data gaps, and produces a prioritised fix list ranked by predicted score uplift, not generic recommendations.
Crawlability for AI, structured data, semantic clarity, authority signals, content freshness, factual density, AI-readability, brand entity clarity, page speed, competitor gap. Each dimension scored, weighted, and ranked for impact.
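Conceptually, "scored, weighted, and ranked" means each page gets a weighted composite of its dimension scores. The sketch below is a hypothetical illustration of that idea, not Kodiac's actual model; the dimension names follow the list above, but the weights are invented for the example.

```python
# Hypothetical weighted composite of per-dimension AI-readiness scores (0-100).
# Weights are illustrative placeholders, not Kodiac's real weighting.
weights = {
    "crawlability": 0.15,
    "structured_data": 0.15,
    "semantic_clarity": 0.10,
    "authority_signals": 0.10,
    "content_freshness": 0.05,
    "factual_density": 0.10,
    "ai_readability": 0.10,
    "brand_entity_clarity": 0.10,
    "page_speed": 0.05,
    "competitor_gap": 0.10,
}

def composite(scores: dict[str, float]) -> float:
    """Weighted average of 0-100 dimension scores; missing dimensions count as 0."""
    total_w = sum(weights.values())
    return sum(weights[d] * scores.get(d, 0.0) for d in weights) / total_w
```

Ranking pages for the fix list then reduces to sorting them by predicted composite uplift after each candidate fix.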
Maps every schema.org type present, missing, or invalid. Auto-generates FAQPage, HowTo, Product, and Organization markup. Validates JSON-LD before export. Ready-to-deploy structured data.
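For readers unfamiliar with the format: a minimal FAQPage block of the kind such a generator emits looks like the following. The question and answer text here are placeholders for illustration, not product output.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What does the three-layer audit cover?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "It scans AI output, owned-content input, and the third-party source ecosystem."
      }
    }
  ]
}
```

The block is embedded in a page inside a `<script type="application/ld+json">` tag, which is what JSON-LD validation checks before export.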
Verifies your robots.txt and meta robots tags are not accidentally blocking GPTBot, ClaudeBot, PerplexityBot, Google-Extended, and other AI retrieval bots. The single most common cause of zero AI visibility.
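As an illustration, a robots.txt that explicitly allows the AI retrieval bots named above could look like this. These user-agent tokens are the ones each vendor publishes; confirm them against each crawler's current documentation, since tokens and their scope change over time.

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /
```

A blanket `Disallow: /` rule above these groups, or a `<meta name="robots" content="noindex">` tag, is the kind of accidental block this check catches.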
Every low-scoring page maps directly to a fix in Kodiac Content. Bulk-push 20 pages to the fix list with a single action. Edit inline, write back to source CMS, watch the AI score recover.
The ecosystem layer is what the rest of the market calls Source Intelligence. University of Toronto research found that AI search engines cite earned third-party sources up to 92% of the time when describing brands, yet most AI visibility tools either ignore this layer entirely or treat it as a footnote. For Kodiac it is the primary differentiator.
Reddit, Wikipedia, Wikidata, G2, Hacker News, ProductHunt, industry news, trade press, partner directories, forum discussions, review sites. Continuous tracking with sentiment alerts.
Every source scored 0–10 for how heavily each AI system relies on it when describing your brand. Wikipedia stub article? Weight 9.4 in ChatGPT. Reddit pricing thread? Weight 7.8 in Perplexity.
Every recommended intervention comes with a predicted points uplift. “Address Reddit pricing thread: +11 points.” Real numbers, ranked by leverage, not generic recommendations.
Source-specific intervention guides for Wikipedia, Reddit, G2, Hacker News, ProductHunt and trade press. Specific tactics that respect each platform's culture and policies.
If you scan only the output layer, every recommendation collapses to “write more content.” If you scan only the input layer, you optimise pages that AI never reaches. If you scan only the ecosystem layer, you miss accuracy issues on your own site. The three layers are diagnostic siblings; none of them stands alone.
| If you only scan… | You see | You miss | The recommendation collapses to |
|---|---|---|---|
| Output (Layer 01) | What AI says today | Why AI says it | “Publish more content.” |
| Input (Layer 02) | Owned content gaps | That owned content is only 27% of the equation | “Add more schema markup.” |
| Ecosystem (Layer 03) | Third-party signals | That AI is misreading your own site | “Get more press coverage.” |
| All three | The full picture | Nothing | A ranked, prioritised fix list per layer |
All five AI systems. Website AI-readiness summary. Top 10 sources. Top 3 fixes. No credit card required.
Run free audit →