Before You Fix It: What This Check Means
JS dependency ratio estimates whether a page's core meaning is accessible before heavy client-side execution. In plain terms, it tells you whether AI crawlers and answer systems can understand and reuse your content correctly.
Why this matters in practice: unclear machine-facing signals can reduce retrieval quality and citation consistency.
How to use this result: treat this as directional evidence, not final truth. Answer-engine retrieval behavior can shift over time even when your technical setup is stable. First, confirm the issue in live output: verify bot-facing output and policy files on the final URL. Then ship one controlled change: ensure above-the-fold semantic content exists in the initial HTML. Finally, re-scan the same URL to confirm the result improves.
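The "confirm the issue in live output" step above can be sketched as a plain HTTP fetch with no JavaScript execution, which approximates what a non-rendering crawler sees. This is an illustrative sketch, not Scavo's implementation; the function names and the `ScavoCheck/1.0` user agent are hypothetical.

```python
import urllib.request


def phrase_in_html(html: str, phrase: str) -> bool:
    # Case-insensitive containment check against the raw markup.
    # If a key phrase is missing here, it only exists after JS runs.
    return phrase.lower() in html.lower()


def fetch_initial_html(url: str, user_agent: str = "ScavoCheck/1.0") -> str:
    # Plain GET: no JavaScript executes, mirroring non-rendering bots.
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")
```

Usage: `phrase_in_html(fetch_initial_html("https://example.com/pricing"), "pricing")` returning `False` for copy that is visible in the browser is a strong sign the copy is client-rendered.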
Background sources
TL;DR: Your page content is rendered via JavaScript, making it invisible to AI crawlers that can't execute JS.
None of the major AI crawlers (GPTBot, ClaudeBot) render JavaScript — analysis of over half a billion GPTBot requests found zero evidence of JS execution (Vercel). ChatGPT fetches HTML content 57.7% of the time while Claude focuses on images at 35.2% (SearchViu). If your important content only exists after JavaScript runs, AI models can't see it, cite it, or recommend it.
What Scavo checks (plain English)
Scavo combines multiple page signals:
- Script tag count in HTML
- Inline script byte size
- First-party external script candidates
- Known external JS sizes (via HEAD `Content-Length` probes, up to 8 scripts)
- HTML byte size
- Extractable text chars/tokens from primary content scope
It then computes an estimated JS-to-HTML ratio and contextual risk.
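The ratio computation described above can be sketched as total observable JS bytes divided by HTML document bytes. This is a simplified sketch of the idea, not Scavo's exact formula; the function name and parameters are illustrative.

```python
def estimated_js_ratio(inline_script_bytes: int,
                       external_script_bytes: int,
                       html_bytes: int) -> float:
    # Total observable JS payload relative to the HTML document size.
    # A guard avoids division by zero on an empty response.
    total_js = inline_script_bytes + external_script_bytes
    return total_js / html_bytes if html_bytes else 0.0
```

For example, 50 KB of inline scripts plus 250 KB of external scripts against a 100 KB HTML document gives a ratio of 3.0, which would land in the Fail band below.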
Important thresholds in this check:
- Fail: ratio `>= 3.0`
- Fail: ratio `>= 1.6` and extractable tokens `< 220`
- Warning: ratio `>= 1.2`
- Warning: ratio `>= 0.8` with tokens `< 300`
- Warning: ratio `>= 0.6` with extractable chars `< 600`
Special behavior:
- Pass: no script tags detected
- Info: low-confidence estimate when script sizes cannot be measured
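The thresholds and special behaviors above can be combined into one classification sketch. The ordering of the checks (special behaviors first, then Fail, then Warning) is an assumption for illustration; Scavo's internal evaluation order may differ.

```python
def classify(ratio: float, tokens: int, chars: int,
             script_count: int, sizes_measured: bool) -> str:
    # Special behavior: no script tags at all is an automatic pass.
    if script_count == 0:
        return "pass"
    # Low-confidence result when external script sizes could not be probed.
    if not sizes_measured:
        return "info"
    if ratio >= 3.0 or (ratio >= 1.6 and tokens < 220):
        return "fail"
    if ratio >= 1.2 or (ratio >= 0.8 and tokens < 300) \
            or (ratio >= 0.6 and chars < 600):
        return "warning"
    return "pass"
```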
How Scavo scores this check
Scavo assigns one result state for this check on the tested page:
- Pass: baseline signals for this check were found.
- Warning: partial coverage or risk signals were found and should be reviewed.
- Fail: required signals were missing or risky behavior was confirmed.
- Info: Scavo could not gather enough reliable evidence on this run to score pass/fail confidently.
In your scan report, this appears under What failed / What needs attention / What is working for ai_js_dependency_ratio, followed by Recommended next steps and Technical evidence (for developers) when needed.
- Scan key: `ai_js_dependency_ratio`
- Category: `AI_VISIBILITY`
Why fixing this matters
If essential meaning only appears after heavy client rendering, some crawlers and downstream systems can miss context, extract thin text, or index stale content snapshots.
Reducing dependency does not mean "no JavaScript." It means the key content and intent of a page should be understandable from initial HTML, with JS enhancing rather than hiding meaning.
Common reasons this check flags
- Large framework bundles hydrate most visible copy.
- Marketing pages ship heavy app shell JS before meaningful content.
- Script size headers are unavailable, reducing confidence in observed ratio.
- Thin HTML plus lots of scripts creates a high dependency signal.
If you are not technical
- Ask your team one plain question: "Can a crawler understand this page if JS partially fails?"
- Prioritize fixes on revenue pages first (home, pricing, landing pages).
- Request before/after evidence: HTML text extraction and bundle weight.
- Re-scan and track ratio trend over time, not one-off snapshots.
Technical handoff message
Copy and share this with your developer.
Scavo flagged AI JS Dependency Ratio (`ai_js_dependency_ratio`). Please reduce the JS-to-HTML dependency for core page meaning, expose key content server-side/SSR where needed, and provide before/after ratio and extractable-text evidence.

If you are technical
- Ensure above-the-fold semantic content exists in initial HTML.
- SSR or pre-render key marketing/help content where practical.
- Defer non-critical scripts and split large bundles.
- Avoid gating primary copy behind late client rendering.
- Serve script assets with a correct `Content-Length` header to improve observability.
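Whether your script assets expose a usable `Content-Length` can be checked with a cheap HEAD request. This sketch separates the network call from the header parsing so the logic is easy to test; the function names are illustrative, not part of any Scavo API.

```python
import urllib.request


def parse_content_length(headers: dict) -> "int | None":
    # Returns the declared byte size, or None when the header is
    # absent or malformed (e.g. chunked responses omit it).
    value = headers.get("Content-Length")
    return int(value) if value and value.isdigit() else None


def script_size_via_head(url: str) -> "int | None":
    # HEAD request: only headers come back, so probing is cheap.
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return parse_content_length(dict(resp.headers))
```

If `script_size_via_head` returns `None` for your bundles, Scavo (and similar tools) cannot measure their payload, which is one cause of low-confidence Info results.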
How to verify
- Compare raw HTML text extraction before/after changes.
- Measure script payload and JS-to-HTML ratio again.
- Confirm key content appears without waiting for hydration.
- Re-run Scavo and ensure ratio/score improves with higher confidence.
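The "raw HTML text extraction" comparison above can be sketched with Python's standard-library `html.parser`, counting visible characters while skipping script, style, and noscript contents. This is a minimal approximation of extractable text, not Scavo's extraction scope.

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    # Collects visible text, ignoring script/style/noscript contents.
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())


def extractable_chars(html: str) -> int:
    parser = TextExtractor()
    parser.feed(html)
    return len(" ".join(parser.chunks))
```

Run it on the HTML served before and after your change: a near-zero count on the "before" snapshot confirms the page was an empty app shell.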
What this scan cannot confirm
- It does not execute a full browser rendering benchmark for every bot.
- It estimates JS payload from observable headers and inline bytes.
- It does not measure user-interaction performance directly (that is separate from dependency ratio).
Owner checklist
- [ ] Assign owner for rendering strategy on key templates.
- [ ] Keep SSR/prerender decisions documented by page type.
- [ ] Add release checks for "critical copy present in initial HTML".
- [ ] Review ratio after major frontend framework or bundle changes.
FAQ
Is this saying single-page apps are bad?
No. SPAs can be fine. The risk is when critical meaning is unavailable in initial content.
Why does Scavo return low-confidence info sometimes?
Because external script sizes may be missing from response headers, making payload estimates incomplete.
Should we optimize ratio on every page equally?
Start with high-impact pages users discover first, then expand.
What is a practical target?
A lower ratio with richer extractable HTML is the directionally correct goal; track Scavo trendlines and prioritize key business pages rather than chasing a single number.
Sources
- Google Search Central: JavaScript SEO basics
- Google Search Central: Dynamic rendering as a workaround (context)
- web.dev: Reduce JavaScript payloads with code splitting
- OpenAI: Search crawler and GPTBot documentation
Need a page-by-page SSR/HTML-first prioritization matrix? Send support your top templates and current rendering mode.