Uncrawlable pages get zero traffic
No crawl, no index, no rankings. The most common cause of mystery traffic drops is a robots.txt edit that blocked more than intended.
Before a page can rank, a search engine has to find it, fetch it, and understand that it exists. Crawlability is everything that happens before ranking.
Crawlability is the set of technical signals that tell search engines and AI crawlers which pages on your site they're allowed to access. If a page isn't crawlable, it doesn't matter how good the content is — Google, ChatGPT, and Perplexity will never see it, and your customers will never find it through search.
The core pieces are your robots.txt file (which crawlers are allowed where), your sitemap.xml (a map of every URL you want indexed), HTTPS and a valid SSL certificate (crawlers distrust broken TLS), canonical tags (which version of a duplicate page is authoritative), and redirect chains (how cleanly old URLs forward to new ones).
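To make the robots.txt piece concrete, here is a small sketch using Python's standard-library `urllib.robotparser` to show how a crawler reads those rules. The file contents, paths, and sitemap URL below are illustrative examples, not any real site's configuration.

```python
# Sketch: how a crawler interprets robots.txt rules (example data only).
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /admin/

User-agent: GPTBot
Allow: /

Sitemap: https://example.com/sitemap.xml
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The * group blocks everyone from /admin/ ...
print(rp.can_fetch("Googlebot", "/admin/settings"))  # False
print(rp.can_fetch("Googlebot", "/blog/post"))       # True
# ... but GPTBot has its own group, so the * rules don't apply to it.
print(rp.can_fetch("GPTBot", "/admin/settings"))     # True
```

Note the gotcha this illustrates: a bot with its own `User-agent` group ignores the `*` group entirely, which is how a well-meaning edit for one crawler can silently change the rules for another.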
A surprising number of sites quietly block their most important pages — a misplaced Disallow line in robots.txt, a WordPress plugin flipping "discourage search engines" on, or an AI block added during a privacy review that accidentally hides the whole site from GPTBot and ClaudeBot.
ChatGPT, Claude, and Perplexity all respect robots.txt. If you block GPTBot or ClaudeBot, you disappear from AI answers — even if Google still indexes you.
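A quick way to audit this is to run your robots.txt contents through the same parser and ask which AI user agents are shut out. The bot tokens below are the real published user-agent names; the helper function and sample file are illustrative, not SEOGrade's actual code.

```python
# Sketch: given robots.txt contents, list which AI crawlers are
# blocked from the homepage. Sample file is a made-up example.
from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def blocked_ai_bots(robots_txt: str, bots=AI_BOTS) -> list[str]:
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [bot for bot in bots if not rp.can_fetch(bot, "/")]

# The kind of block often added during a privacy review:
privacy_review_block = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
"""
print(blocked_ai_bots(privacy_review_block))  # ['GPTBot', 'ClaudeBot']
```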
Every extra hop in a redirect chain bleeds ranking signals and wastes crawl budget. A clean single 301 preserves almost everything; a chain of four redirects keeps very little.
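The hop-counting idea can be sketched with a simple URL-to-destination map standing in for a site's redirect rules. The URLs and the `max_hops` cutoff are illustrative assumptions.

```python
# Sketch: follow a redirect map and count hops (example URLs only).
def redirect_chain(url: str, redirects: dict[str, str], max_hops: int = 10) -> list[str]:
    chain = [url]
    while chain[-1] in redirects:
        if len(chain) > max_hops:
            raise RuntimeError("redirect loop or excessive chain")
        chain.append(redirects[chain[-1]])
    return chain

redirects = {
    "/old-pricing": "/pricing-2023",
    "/pricing-2023": "/pricing-2024",
    "/pricing-2024": "/pricing",
}
chain = redirect_chain("/old-pricing", redirects)
print(chain)           # ['/old-pricing', '/pricing-2023', '/pricing-2024', '/pricing']
print(len(chain) - 1)  # 3 hops where 1 would do
```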
An expired certificate flips your site to a browser warning and a Google search warning in the same hour. Crawlers back off, users bounce.
Our free audit runs these checks on your crawlability signals in about 60 seconds.
Most crawlability problems are one-line fixes once you know where to look: edit robots.txt to allow the right bots, submit a sitemap in Search Console, renew the SSL cert, consolidate redirect chains to a single hop. SEOGrade's paid reports give you the exact lines to change and the CMS-specific instructions (WordPress, Webflow, Shopify, Next.js).
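For sites without a CMS plugin that emits one, a valid sitemap is only a few lines of XML. This is a minimal stdlib sketch with placeholder URLs; most platforms (WordPress, Shopify, Next.js) generate this file for you.

```python
# Sketch: build a minimal sitemap.xml (placeholder URLs).
import xml.etree.ElementTree as ET

def build_sitemap(urls: list[str]) -> str:
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        loc = ET.SubElement(ET.SubElement(urlset, "url"), "loc")
        loc.text = url
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap(["https://example.com/", "https://example.com/pricing"])
print(sitemap)
```

Submit the resulting file's URL in Search Console and reference it from robots.txt with a `Sitemap:` line.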
Explore all 9 categories: Technical SEO · On-Page SEO · Content & E-E-A-T · Authority · AI Citability · GEO · pSEO · Local SEO. Ready to grade yours? Run the free SEOGrade audit.
Crawlability is whether search engine bots can access and read the pages on your site. It's governed by robots.txt, sitemaps, HTTPS, canonical tags, and redirects. If a page isn't crawlable, it can't be indexed, and it can't rank.
The fastest way is a free audit tool like SEOGrade — it fetches your robots.txt, sitemap, and a sample of URLs in about 60 seconds and flags anything that blocks crawlers or drops signals in redirects. You can also use Google Search Console's URL inspection tool for individual pages.
It doesn't hurt your Google rankings directly, but it removes you from ChatGPT's search results and citations. In 2026, AI search is a growing share of traffic — blocking AI crawlers is the new equivalent of blocking Googlebot in 2010.
A redirect chain is when URL A redirects to B, which redirects to C, which finally redirects to D. Each hop loses a small amount of ranking signal and wastes crawl budget. Best practice: every old URL should redirect to its final destination in exactly one hop.
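The fix described above — every old URL pointing straight at its final destination — can be sketched as a function that flattens a redirect map to single hops. The A/B/C/D URLs are the same illustrative chain.

```python
# Sketch: collapse redirect chains so every source URL points
# directly at its final destination (one hop).
def flatten_redirects(redirects: dict[str, str]) -> dict[str, str]:
    def final(url: str) -> str:
        seen = set()
        while url in redirects and url not in seen:
            seen.add(url)  # guard against redirect loops
            url = redirects[url]
        return url
    return {src: final(dst) for src, dst in redirects.items()}

chains = {"/a": "/b", "/b": "/c", "/c": "/d"}
print(flatten_redirects(chains))  # {'/a': '/d', '/b': '/d', '/c': '/d'}
```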