Before You Fix It: What This Check Means
Access parity checks whether different AI bots see materially different outcomes on the same URL. In plain terms, this tells you whether AI crawlers and answer systems can understand and reuse your content consistently.
Why this matters in practice: unclear machine-facing signals can reduce retrieval quality and citation consistency.
How to use this result: treat this as directional evidence, not final truth. Bot access outcomes can vary with edge controls, geo policies, and temporary WAF behavior. First, confirm the issue in live output: verify bot-facing output and policy files on the final URL. Then ship one controlled change at a time, exporting current bot outcomes (status, final URL, robots policy, canonical) beforehand so you can compare. Finally, re-scan the same URL to confirm the result improves.
Background sources
TL;DR: Your robots.txt blocks AI bots while allowing Google, preventing your content from appearing in AI-powered products.
ClaudeBot saw the highest growth in block rates, increasing 32.67% year-over-year (EngageCoders, 2024). If you block AI crawlers while allowing Googlebot, you are letting Google use your content in its AI products (Gemini, AI Overviews) while excluding others. Consider whether this asymmetry aligns with your content strategy, or whether parity across all bots better serves your interests.
What Scavo checks (plain English)
Scavo probes the same URL with six user agents:
- Googlebot
- OAI-SearchBot
- ChatGPT-User
- GPTBot
- Claude-SearchBot
- PerplexityBot
For each probe, Scavo compares key fields against a baseline bot (Googlebot when available):
- HTTP status bucket (2xx, 3xx, 4xx, 5xx, error)
- Canonical URL
- Effective robots directive tokens (header + meta)
- Extracted text signature
- robots.txt root policy state (allowed, blocked, mixed)
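The per-bot comparison above can be sketched as follows. The field names and dictionary shape are assumptions for illustration, not Scavo's actual schema:

```python
def status_bucket(status):
    """Map an HTTP status code (or None for a network error) to a bucket."""
    if status is None:
        return "error"
    return f"{status // 100}xx"

def compare_to_baseline(baseline, probe):
    """Return the set of fields where one bot's probe drifts from the baseline.

    Both arguments are dicts with the (assumed) keys: status, canonical,
    robots_tokens, text_signature, robots_txt_policy.
    """
    drifts = set()
    if status_bucket(probe["status"]) != status_bucket(baseline["status"]):
        drifts.add("status_bucket")
    if probe["canonical"] != baseline["canonical"]:
        drifts.add("canonical")
    if set(probe["robots_tokens"]) != set(baseline["robots_tokens"]):
        drifts.add("robots_tokens")
    if probe["text_signature"] != baseline["text_signature"]:
        drifts.add("text_signature")
    if probe["robots_txt_policy"] != baseline["robots_txt_policy"]:
        drifts.add("robots_txt_policy")
    return drifts
```

For example, comparing a GPTBot probe that returned 403 and a blocked robots policy against a Googlebot baseline would yield drift on `status_bucket` and `robots_txt_policy`.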
How Scavo scores this check
Result behavior:
- Warning: only one usable bot response (insufficient parity sample)
- Warning: parity differences found
- Fail: 2+ critical drifts (for example baseline accessible while others return 4xx/5xx/error, or allowed vs blocked robots policy)
- Info: no usable probes, or all probes failed
- Pass: signals aligned across tested bots
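The scoring rules above can be sketched as a small decision function. The exact check order and parameter names are assumptions, not Scavo's implementation:

```python
def score_parity(usable_probes, drift_count, critical_drift_count):
    """Map probe and drift counts to a result, following the documented rules."""
    if usable_probes == 0:
        return "info"      # no usable probes, or all probes failed
    if usable_probes == 1:
        return "warning"   # insufficient parity sample
    if critical_drift_count >= 2:
        return "fail"      # e.g. baseline accessible, others 4xx/5xx/error
    if drift_count > 0:
        return "warning"   # parity differences found
    return "pass"          # signals aligned across tested bots
```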
In your scan report, this appears under What failed / What needs attention / What is working for ai_bot_access_parity, followed by Recommended next steps and Technical evidence (for developers) when needed.
- Scan key: ai_bot_access_parity
- Category: AI_VISIBILITY
Why fixing this matters
Parity drift creates inconsistent downstream outcomes: one ecosystem can summarize/cite while another cannot. For go-to-market teams this feels like "random visibility" when the root cause is often edge configuration, bot management, or redirect logic.
It is also a reliability signal. If bot treatment is not intentional and documented, production changes can silently break discovery for specific assistants.
Common reasons this check flags
- WAF/CDN bot rules challenge or block selected user agents.
- Geo rules or anti-bot middleware vary by user agent.
- Robots policy is explicit for some bots, inherited/blocked for others.
- One bot is served a different canonical or content variant.
If you are not technical
- Ask engineering for a per-bot status matrix (who got 2xx, who got blocked/error).
- Confirm which bots you intentionally allow or disallow.
- Require documented policy instead of ad-hoc exceptions.
- Re-scan after rule changes and compare drift count.
Technical handoff message
Copy and share this with your developer.
Scavo flagged AI Bot Access Parity (ai_bot_access_parity). Please compare bot-specific status/canonical/robots/content outputs, remove unintentional access drift in WAF/CDN/robots policy, and share before/after parity evidence for all tested bots.
If you are technical
- Export current bot outcomes (status, final URL, robots policy, canonical).
- Standardize allow/challenge behavior for intended bots at edge and origin.
- Align redirect/canonical logic so all intended bots resolve to the same canonical target.
- Keep robots directives consistent across header/meta by bot response.
- Add bot-parity smoke tests for critical landing pages.
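A bot-parity smoke test along the lines of the last step might look like this. The fetch function is injected so the check can run offline in CI; in production it could wrap `urllib.request` with a per-bot `User-Agent` header. Names and structure are illustrative assumptions:

```python
# The six user agents the doc describes Scavo probing with.
BOT_USER_AGENTS = [
    "Googlebot", "OAI-SearchBot", "ChatGPT-User",
    "GPTBot", "Claude-SearchBot", "PerplexityBot",
]

def parity_matrix(url, fetch, user_agents=BOT_USER_AGENTS):
    """Build {user_agent: (status, final_url)} for one URL.

    fetch(url, user_agent) must return a (status_code, final_url) tuple.
    """
    return {ua: fetch(url, ua) for ua in user_agents}

def status_drift(matrix):
    """Return user agents whose status bucket differs from the majority."""
    buckets = {ua: status // 100 for ua, (status, _) in matrix.items()}
    values = list(buckets.values())
    majority = max(set(values), key=values.count)
    return sorted(ua for ua, bucket in buckets.items() if bucket != majority)
```

A reasonable policy is to fail the build whenever `status_drift` is non-empty for a critical landing page, unless the drifting bot is on a documented, intentional block list.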
How to verify
- Run UA-specific probes for all six bots and capture status + final URL.
- Compare canonical, robots tokens, and response text signatures.
- Confirm intended policy differences are documented, not accidental.
- Re-run Scavo and confirm drift/critical drift counts decrease.
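For the robots-policy part of verification, the standard library alone can show how the same robots.txt resolves for each user agent. The rules string below is illustrative, not a recommendation:

```python
from urllib.robotparser import RobotFileParser

def robots_policy(robots_txt, user_agents, url="https://example.com/"):
    """Return {user_agent: 'allowed' | 'blocked'} for one URL under the given rules."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {ua: ("allowed" if parser.can_fetch(ua, url) else "blocked")
            for ua in user_agents}

# Example rules that block one AI bot while allowing everyone else.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow:
"""

policy = robots_policy(rules, ["Googlebot", "GPTBot"])
# GPTBot is blocked at the root while Googlebot is allowed: a parity drift
# that is fine if intentional and documented, and a flag otherwise.
```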
What this scan cannot confirm
- It does not prove downstream ranking/citation outcomes for any vendor.
- It cannot detect private vendor crawl scheduling decisions.
- It compares observable signals, not model internals.
Owner checklist
- [ ] Assign owner for bot policy at CDN/WAF + origin.
- [ ] Keep explicit allow/block decisions version controlled.
- [ ] Add monthly parity review for key AI/search bots.
- [ ] Include parity checks in incident runbooks for sudden visibility drops.
FAQ
Why is Googlebot used as baseline when available?
It is typically the most stable public crawl baseline, so drift against it is easier to diagnose.
Is blocking some bots always wrong?
No. Intentional policy is valid. The risk is undocumented or accidental drift.
Why can this result be a warning even with mismatches?
Scavo reserves fail for higher-severity drift patterns (multiple critical mismatches). Smaller mismatches still deserve cleanup.
What is the fastest fix path?
Start at edge controls (WAF/CDN) first, then robots policy, then template-level canonical/metadata alignment.
Sources
- OpenAI: Search crawler and GPTBot documentation
- Anthropic: Web crawler guidance
- Perplexity: Crawler and robots policy docs
- RFC 9309: Robots Exclusion Protocol
Need a bot-parity runbook template for your WAF/CDN stack? Send support your current bot rules and edge provider.