Before You Fix It: What This Check Means
Meta robots and X-Robots-Tag directives control indexing/preview behavior on a per-resource basis. In plain terms, this checks whether page-level and header-level robots directives are quietly telling search engines to keep this URL out of results.
Why this matters in practice: an incorrect signal here can keep a page you want to rank out of the index entirely, or leave a page indexed that you meant to exclude.
How to use this result: treat this as directional evidence, not final truth. Search indexing outcomes depend on crawler recrawl cadence and ranking systems outside your direct control. First, confirm the issue in live output: inspect both the live HTML head and the live response headers for robots directives on the same URL. Then ship one controlled change: remove the unintended directive from the layer that is setting it. Finally, re-scan the same URL to confirm the result improves.
TL;DR: This check looks at the two places a page can tell search engines what to do: the HTML robots meta tag and the HTTP X-Robots-Tag header. If either one effectively says noindex, the page can be kept out of search results even if the rest of your SEO looks fine.
This matters because teams often look only at page source and miss a header being added by the server, CDN, or edge platform. Google supports both surfaces, and when rules conflict it applies the more restrictive one. That means a single hidden noindex in a response header can quietly override what the template appears to say.
What Scavo checks (plain English)
Scavo reads both of these surfaces on the same URL:
- <meta name="robots" content="..."> in the page head
- any X-Robots-Tag headers returned with the response
Scavo then combines the observed directives and checks whether the page is effectively being told to stay out of the index.
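As a rough sketch of that combination step (a simplified reimplementation for illustration, not Scavo's actual code), the two surfaces can be merged into one effective directive set and then checked for noindex. Note that an X-Robots-Tag value may carry a user-agent prefix such as "googlebot: noindex":

```python
def effective_directives(meta_content, x_robots_headers):
    """Merge robots directives from the meta tag and any X-Robots-Tag
    headers into one set; the union is the most restrictive view."""
    directives = set()
    sources = ([meta_content] if meta_content else []) + list(x_robots_headers)
    for value in sources:
        for token in value.split(","):
            token = token.strip().lower()
            # Strip a user-agent prefix like "googlebot: noindex", but
            # leave directives that legitimately contain a colon intact.
            if ":" in token and not token.startswith(("max-", "unavailable_after")):
                token = token.split(":", 1)[1].strip()
            if token:
                directives.add(token)
    return directives


def effectively_noindex(directives):
    # "none" is documented shorthand for "noindex, nofollow".
    return "noindex" in directives or "none" in directives
```

With this sketch, a clean meta tag combined with a restrictive header still comes out as noindex, mirroring the "most restrictive rule wins" behavior described above: effectively_noindex(effective_directives("index, follow", ["googlebot: noindex"])) is True.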
Scoring for this specific check:
- Pass: no robots restrictions were found, or robots directives exist but do not include noindex (directives that do not block indexing still pass)
- Warning: the effective directives include noindex
Important nuance:
- Scavo records both noindex and nofollow in the details
- this check currently warns on effective noindex
- it does not fail or warn on nofollow alone
How Scavo scores this check
Scavo assigns one result state for this check on the tested page:
- Pass: baseline signals for this check were found.
- Warning: partial coverage or risk signals were found and should be reviewed.
- Fail: required signals were missing or risky behavior was confirmed.
- Info: Scavo could not gather enough reliable evidence on this run to score pass/fail confidently.
In your scan report, this appears under What failed / What needs attention / What is working for meta_robots, followed by Recommended next steps and Technical evidence (for developers) when needed.
- Scan key: meta_robots
- Category: SEO
Why fixing this matters
A page can have perfect titles, content, links, and structured data, but if it is effectively marked noindex, it still will not be eligible to appear in normal search results. That makes this one of the highest-leverage SEO checks in the whole system.
The practical danger is hidden drift. A developer may remove noindex from the HTML template and assume the problem is solved, while a CDN rule or framework middleware is still adding X-Robots-Tag: noindex in the response.
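To make that drift concrete, here is a deliberately minimal, entirely hypothetical WSGI middleware of the kind that can inject the header with no trace in the rendered HTML; the names and app are invented for illustration:

```python
def add_noindex(app):
    """Hypothetical middleware: appends X-Robots-Tag: noindex to every
    response, invisibly to anyone who only reads the page source."""
    def wrapped(environ, start_response):
        def start(status, headers, exc_info=None):
            return start_response(status, headers + [("X-Robots-Tag", "noindex")], exc_info)
        return app(environ, start)
    return wrapped


def page(environ, start_response):
    # The template itself contains no robots meta tag at all.
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<html><head><title>Clean template</title></head></html>"]


app = add_noindex(page)

# Drive the app directly to expose the header that "view source" never shows.
captured = {}
def fake_start(status, headers, exc_info=None):
    captured["headers"] = headers

body = app({}, fake_start)
print(captured["headers"])
# → [('Content-Type', 'text/html'), ('X-Robots-Tag', 'noindex')]
```

This is why checking only the template gives a false all-clear: the restrictive directive lives in a different layer of the stack.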
Common reasons this check warns
- A staging or preview noindex rule leaked into production.
- A plugin, middleware, or CDN rule is adding X-Robots-Tag: noindex.
- A page-level CMS override was set for testing and never removed.
- Launch checklists verified the HTML but not the response headers.
If you are not technical
- Ask whether this page is supposed to be indexable.
- Ask your developer or SEO owner for proof from both the HTML source and the live response headers.
- If the page should rank, ask them to remove the accidental noindex from whichever layer is setting it.
- Re-run Scavo on the same URL and confirm the warning clears.
Technical handoff message
Copy and share this with your developer.
Scavo flagged Meta Robots (meta_robots) because the effective robots directives for this URL include noindex. Please inspect both the HTML robots meta tag and any X-Robots-Tag response headers, remove accidental exclusions where appropriate, and share before/after evidence from the same URL.

If you are technical
- Inspect the live HTML source for <meta name="robots">.
- Inspect the live response headers for X-Robots-Tag.
- Decide the intended outcome for this URL: indexable or intentionally excluded.
- Remove accidental noindex from the layer that truly owns it: template, middleware, CDN, or server config.
- If the page is intentionally excluded, document that decision so it is not “fixed” later by mistake.
- Re-check representative templates, not just the single URL, if the rule is shared.
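The first inspection step above can be scripted. A minimal standard-library sketch for pulling robots meta tags out of a page (the live fetch, e.g. via urllib.request.urlopen, is left as a comment so the parsing part stays self-contained):

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collect the content value of every <meta name="robots"> tag."""

    def __init__(self):
        super().__init__()
        self.contents = []

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names for us.
        if tag == "meta":
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() == "robots" and attrs.get("content"):
                self.contents.append(attrs["content"])


# In a real check, fetch the live HTML for the affected URL and feed it here.
parser = RobotsMetaParser()
parser.feed('<head><meta name="robots" content="noindex, nofollow"></head>')
print(parser.contents)  # → ['noindex, nofollow']
```

Pair this with a header check (curl -I, or reading response.headers from the same fetch) so both surfaces are verified on one request.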
How to verify
- View live source and inspect the robots meta tag.
- Run curl -I https://your-url and inspect any X-Robots-Tag headers.
- Confirm the effective directives no longer include noindex on pages meant to rank.
- Use Search Console URL Inspection for extra confirmation after deployment.
- Re-run Scavo and confirm meta_robots returns to pass where expected.
What this scan cannot confirm
- It does not guarantee ranking or indexing speed; it checks directive state, not Google’s recrawl timing.
- It does not decide whether noindex is strategically correct for your business.
- It records nofollow, but it does not currently fail a page on nofollow alone.
Owner checklist
- [ ] Assign one owner for robots directives across templates and edge layers.
- [ ] Document which page groups are intentionally excluded from search.
- [ ] Add release checks for both HTML source and response headers.
- [ ] Revalidate after SEO plugin, CDN, or middleware changes.
FAQ
Why can this warn even if the page source looks fine?
Because X-Robots-Tag can be added in the response headers, and that is not always obvious from the visible HTML alone.
Does Google really support both meta robots and X-Robots-Tag?
Yes. Google documents both, and when there is a conflict it applies the more restrictive rule.
Is missing a robots tag a problem?
Not by itself. If no robots restrictions are present, default indexing behavior is generally allowed.
What about nofollow?
Scavo records it in the details, but this check is mainly focused on whether the URL is effectively blocked from indexing via noindex.
Sources
- Google Search Central: Robots meta tag and X-Robots-Tag
- Google Search Central: Consolidate duplicate URLs
- MDN: <meta> element reference
Need help tracing where noindex is really being set? Send support one affected URL and whether you suspect template, middleware, or CDN ownership.