Technical help for sitewide files, protocol behavior, DNS, and platform hygiene.

Surfacing hidden infrastructure and markup problems before they turn into visible outages or crawl issues.

mobile_viewport_width

Content Overflows Viewport on Mobile — Horizontal Scroll

Horizontal scrolling on mobile is one of the strongest negative UX signals. Users expect vertical-only scrolling, and content that overflows the viewport looks broken. Common causes: fixed-width elements, images without responsive sizing, tables without scroll wrappers, or absolutely positioned elements that extend beyond the screen edge.
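As a rough illustration of how some of these causes can be caught before rendering, the sketch below does a static scan for declared widths that exceed a narrow viewport. It is a heuristic under stated assumptions, not a real layout check: the 390px default and the two regexes are illustrative, and a genuine audit would render the page.

```python
import re

# Static heuristic, not a rendering check: flag inline patterns that
# commonly cause horizontal overflow on a narrow viewport.
FIXED_WIDTH = re.compile(r'width\s*:\s*(\d+)px')       # inline CSS widths
IMG_WIDTH = re.compile(r'<img[^>]*\swidth="(\d+)"')    # width attributes

def find_overflow_risks(html: str, viewport_px: int = 390) -> list[str]:
    """Return warnings for declared widths wider than the viewport."""
    warnings = []
    for match in FIXED_WIDTH.finditer(html):
        if int(match.group(1)) > viewport_px:
            warnings.append(f"fixed width {match.group(1)}px exceeds viewport")
    for match in IMG_WIDTH.finditer(html):
        if int(match.group(1)) > viewport_px:
            warnings.append(f"image width {match.group(1)} exceeds viewport")
    return warnings
```

A check like this only catches declared widths; overflow from absolute positioning or long unbroken strings still needs a rendered-viewport test.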

nameserver_change

DNS Nameserver Changed — Possible Unauthorized Modification

Nameserver changes control where your domain's traffic is routed. An unauthorized change means someone can intercept all your traffic, email, and subdomains. If you didn't initiate this change, investigate immediately: check your domain registrar account for signs of unauthorized access. Legitimate causes include hosting migrations or CDN changes.
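A change like this can be caught early by comparing live NS records against a recorded baseline. The sketch below assumes the `dig` binary (from BIND utilities) is installed and uses placeholder domain and nameserver names; it is one possible monitoring approach, not a prescribed one.

```python
import subprocess

def current_nameservers(domain: str) -> set[str]:
    """Query NS records via the system `dig` binary (assumed installed)."""
    out = subprocess.run(
        ["dig", "+short", "NS", domain],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_ns(out)

def parse_ns(dig_output: str) -> set[str]:
    """Normalize dig's one-record-per-line output for comparison:
    strip trailing dots and lowercase, since DNS names are case-insensitive."""
    return {line.strip().rstrip(".").lower()
            for line in dig_output.splitlines() if line.strip()}

def ns_drifted(observed: set[str], expected: set[str]) -> bool:
    """True if the live NS set differs from the recorded baseline."""
    return observed != expected
```

Run on a schedule, `ns_drifted(current_nameservers("example.com"), EXPECTED)` turns a silent registrar-level change into an alert, whether the cause is an attacker or an undocumented migration.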

robots_txt

robots.txt Missing, Invalid, or Misconfigured

A missing robots.txt means search engines crawl everything — including admin pages, staging content, and duplicate URL parameters. A misconfigured one can accidentally block your entire site from being indexed. Nearly 38% of indexed websites now include AI-specific restrictions in robots.txt (EngageCoders, 2024), making it also your primary control point for AI crawler access.
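Misconfigurations of this kind can be tested before deployment: Python's standard-library parser will evaluate a candidate robots.txt locally, so you can assert the crawl decisions you intend. The rules, paths, and crawler name below are placeholder examples.

```python
from urllib.robotparser import RobotFileParser

# A candidate robots.txt, parsed locally before it ever ships.
rules = """\
User-agent: *
Disallow: /admin/

User-agent: GPTBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A general crawler may fetch public pages but not the admin area.
print(parser.can_fetch("*", "https://example.com/pricing"))       # True
print(parser.can_fetch("*", "https://example.com/admin/users"))   # False
# The AI-crawler block applies site-wide.
print(parser.can_fetch("GPTBot", "https://example.com/pricing"))  # False
```

Note that the stdlib parser implements the original exclusion protocol and does not expand `*` wildcards inside `Disallow` paths, so patterns like `/*?sessionid=` need a crawler-specific parser to verify.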


About technical

16 guides · 16 active checks · 4 sources

Technical hygiene is where silent failures live. Small protocol and infrastructure mistakes can confuse browsers, bots, and operators long before users know why.

This category covers the site-level signals that often sit between product, infrastructure, and marketing ownership. They are not always glamorous, but when they drift the symptoms are confusing: crawlers go quiet, analytics becomes less trustworthy, missing pages start returning 200 and looking indexable, or email deliverability drops without a clear application bug to blame.

The value of technical checks is that they expose foundational behavior early. A clean robots file, correct status codes, stable DNS, declared language and charset, trustworthy mail records, and predictable machine-readable resources reduce ambiguity for the systems that interpret your site every day.

Why it matters

These issues often hide in the gaps between teams. No single feature owner may notice them until they have already affected search, deliverability, or operations.

Low-level correctness makes every other category easier to reason about because bots, browsers, and tooling receive fewer contradictory signals.

Common pitfalls

Treating infrastructure files as one-time setup instead of versioned product assets that need review after structural changes.

Assuming the application is healthy because the UI looks fine while status codes, bot guidance, or mail DNS are drifting underneath.

What's covered

Sitewide crawler files such as robots.txt and llms.txt, plus the response behavior around not-found pages and machine-readable routes.

Document-level hygiene such as doctype, charset, language declaration, and favicon discovery.
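The document-level checks above can be sketched as simple assertions against a page's markup. This is a minimal illustration under simplifying assumptions: it uses string and regex matching where a real audit would use an HTML parser, and it only accepts the most common forms (for example `rel="icon"`, not every valid favicon variant).

```python
import re

def head_hygiene(html: str) -> dict[str, bool]:
    """Crude presence checks for head-level hygiene signals.

    Returns one boolean per signal; a real implementation would parse the
    document rather than pattern-match its text.
    """
    lowered = html.lstrip().lower()
    return {
        # HTML5 doctype must be the first thing in the document.
        "doctype": lowered.startswith("<!doctype html>"),
        # Explicit charset declaration in the head.
        "charset": '<meta charset="utf-8"' in lowered,
        # A lang attribute on the root element.
        "lang": bool(re.search(r'<html[^>]*\slang="[a-z-]+"', lowered)),
        # A discoverable favicon link (common form only).
        "favicon": 'rel="icon"' in lowered,
    }
```

Running this across templates makes drift visible: a refactor that drops the charset tag or the `lang` attribute fails the check even though the page still renders normally.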

Where to start

Start with sitewide files and response semantics because those signals affect every page and every crawler immediately.

Then review domain, DNS, and email-trust ownership so infrastructure changes do not happen without a clear audit trail.