Tags: google search console, index coverage, coverage report, not indexed, crawl errors, seo, indexing

Google Search Console Index Coverage Errors: How to Fix Them (2026 Guide)

Learn what every Google Search Console Coverage Report error means and exactly how to fix it. From 'Crawled - currently not indexed' to 404s — complete troubleshooting guide.

Search Console Tools Team · 11 min read


The Coverage report is where site owners panic. You log into Google Search Console, see a wall of red "Error" or yellow "Excluded" pages, and wonder why Google isn't indexing your content.

Good news: most Coverage errors are fixable. This guide walks through every error type you'll encounter in the GSC Coverage report, what each one means, and the specific steps to resolve it.


Where to Find the Coverage Report

  1. Log into Google Search Console
  2. Select your property
  3. In the left sidebar, click Pages (formerly called "Coverage")

You'll see four status buckets:

| Status | What It Means |
|--------|---------------|
| Error | Pages Google tried to index but couldn't |
| Valid with warnings | Indexed but with issues |
| Valid | Successfully indexed — you want pages here |
| Excluded | Not indexed, usually intentionally |

The goal: Move your important pages into the "Valid" bucket and understand why excluded pages aren't there.


Understanding Each Coverage Error

🔴 ERRORS (Pages that failed to index)


Server Error (5xx)

What it means: When Google tried to crawl this URL, your server returned a 5xx error (500, 503, etc.). Google couldn't access the page at all.

How to fix it:

  1. Visit the URL yourself — is it loading? If not, it's a live server issue
  2. Check your server error logs for the time period GSC flagged
  3. Common causes: crashed app server, memory limits, bad deploys, database timeouts
  4. If it's a transient error (temporary server overload), request reindexing once the server is stable
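Step 2 above (checking your server logs) can be sketched in a few lines. The helper below is a minimal, hypothetical example: given `(timestamp, status)` entries pulled from your access log, it reports whether 5xx responses were confined to one brief window (likely a transient outage you can revalidate away) or spread out over time (a persistent problem to fix first). The 30-minute window is an arbitrary assumption, not a Google threshold.

```python
from datetime import datetime, timedelta

def classify_5xx(entries, window_minutes=30):
    """Given (timestamp, status) log entries, report whether 5xx errors
    look transient (one short window) or persistent (spread over time)."""
    errors = sorted(t for t, status in entries if 500 <= status < 600)
    if not errors:
        return "healthy"
    span = errors[-1] - errors[0]
    if span <= timedelta(minutes=window_minutes):
        return "transient"   # brief outage: request validation in GSC
    return "persistent"      # ongoing problem: fix the server first

# Hypothetical log excerpt around the time GSC flagged the URL
log = [
    (datetime(2026, 1, 5, 3, 12), 200),
    (datetime(2026, 1, 5, 3, 14), 503),  # brief overload
    (datetime(2026, 1, 5, 3, 15), 503),
    (datetime(2026, 1, 5, 3, 20), 200),
]
print(classify_5xx(log))  # transient
```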

If it's intermittent: Google may have hit your server during a brief outage. Request validation in GSC and monitor — if your server is stable now, it should resolve.


Redirect Error

What it means: The URL has a redirect chain that's broken, creates a loop, or has too many hops.

How to fix it:

  1. Use a redirect checker tool to trace the full redirect chain
  2. Look for: chains longer than 3-4 redirects, circular redirects (A → B → A), redirects pointing to non-existent pages
  3. Fix: update to direct 301 redirects from old URL → final destination URL, skipping all intermediate hops
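The chain-tracing logic in step 2 can be expressed as a small offline sketch. The function below walks a hypothetical redirect map (URL → target, `None` for a final page) and flags the three failure modes to look for: loops, too many hops, and chains longer than the 3-4 redirect rule of thumb. A real checker would follow live `Location` headers instead of a dict.

```python
def trace_redirects(start, redirect_map, max_hops=10):
    """Follow a URL through a redirect map, flagging loops and long chains."""
    chain = [start]
    seen = {start}
    url = start
    while redirect_map.get(url):
        url = redirect_map[url]
        if url in seen:
            return chain + [url], "loop"        # circular: A -> B -> A
        chain.append(url)
        seen.add(url)
        if len(chain) > max_hops:
            return chain, "too many hops"
    # 4 URLs in the chain = 3 hops, the edge of acceptable
    return chain, "ok" if len(chain) <= 4 else "chain too long"

# Hypothetical site: two hops that should be collapsed into one 301
redirects = {
    "http://example.com/old": "https://example.com/old",
    "https://example.com/old": "https://example.com/new",
    "https://example.com/new": None,
}
chain, verdict = trace_redirects("http://example.com/old", redirects)
print(verdict, " -> ".join(chain))
```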

Best practice: Never chain redirects. Old URL → Final URL directly. Every extra hop costs crawl budget and link equity.


URL Blocked by robots.txt

What it means: Your robots.txt file has a Disallow rule that's blocking Googlebot from crawling this URL.

How to fix it:

  1. Check your robots.txt directly at yourdomain.com/robots.txt, or open the robots.txt report in GSC (Settings → robots.txt)
  2. Find the Disallow rule blocking the URL
  3. Remove the rule if the page should be indexed
  4. If you intentionally blocked it, add <meta name="robots" content="noindex"> instead (robots.txt = can't crawl; noindex = can crawl but don't index)
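You can test steps 1-2 locally with Python's standard-library robots.txt parser: feed it your rules and ask whether Googlebot may fetch a given URL. The rules and URLs below are hypothetical examples.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt where /blog/ was disallowed by mistake
robots_txt = """\
User-agent: *
Disallow: /admin/
Disallow: /blog/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

for path in ("/blog/my-post", "/products/widget"):
    allowed = parser.can_fetch("Googlebot", "https://example.com" + path)
    print(path, "crawlable" if allowed else "blocked by robots.txt")
```

This is how the offending `Disallow` line announces itself: `/blog/my-post` comes back blocked while `/products/widget` is crawlable.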

Common mistake: adding Disallow: / to robots.txt during development blocks your entire site, and it's easy to forget to remove it before launch.


URL Marked 'noindex'

What it means: The page has a <meta name="robots" content="noindex"> tag or an X-Robots-Tag: noindex HTTP header. Google respects this and won't index it.

How to fix it:

  1. View the page source and search for noindex
  2. Also check your HTTP headers with a tool like httpstatus.io
  3. Remove the noindex tag on any page you want indexed
  4. Common culprits: staging/dev noindex tags left in production, or WordPress SEO plugins misconfigured to noindex specific post types
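Steps 1-2 (checking both the meta tag and the HTTP header) can be automated with the standard-library HTML parser. The sketch below checks a page's source and a dict of response headers for either form of noindex; in practice you would fetch both with an HTTP client first.

```python
from html.parser import HTMLParser

class NoindexFinder(HTMLParser):
    """Scans HTML for <meta name="robots" content="...noindex...">."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "meta" and a.get("name", "").lower() == "robots"
                and "noindex" in a.get("content", "").lower()):
            self.noindex = True

def has_noindex(html, headers):
    """True if the page carries noindex in a meta tag or X-Robots-Tag header."""
    finder = NoindexFinder()
    finder.feed(html)
    return finder.noindex or "noindex" in headers.get("X-Robots-Tag", "").lower()

# Hypothetical page with a leftover staging noindex tag
page = '<html><head><meta name="robots" content="noindex,follow"></head></html>'
print(has_noindex(page, {}))                                      # True
print(has_noindex("<html></html>", {"X-Robots-Tag": "noindex"}))  # True
```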

Not Found (404)

What it means: The URL returns a 404 — the page doesn't exist.

Two scenarios:

  • Intentional 404 (page was deleted or never existed): If this URL had backlinks or traffic, redirect it to the most relevant live page. If not, ignore it.
  • Accidental 404 (page should exist): Something broke the URL. Check for slug changes, file deletions, server misconfigurations.

The redirect test: Before deleting any page that ever had traffic or backlinks, check GSC for the URL. If it shows impressions or was linked to externally, redirect it first.


Soft 404

What it means: The URL returns a 200 status code (appears to work) but the page content signals "not found" — empty results pages, "no posts found" pages, very thin content pages that Google classifies as effectively empty.

How to fix it:

  1. Return a proper 404 or 301 redirect for truly empty pages
  2. For archive pages (categories, tags, search results) with little unique content: either add content or add noindex
  3. For thin pages: add more content OR redirect to a richer parent page
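To find soft-404 candidates at scale, a rough heuristic along these lines can help triage: flag any 200 page whose body text is nearly empty or contains classic "empty result" phrasing. The phrase list and 50-word threshold are arbitrary assumptions to tune for your site, not anything Google publishes.

```python
def looks_like_soft_404(body_text, min_words=50):
    """Rough heuristic: a 200 page with almost no content, or classic
    'empty result' phrasing, is a soft-404 candidate worth reviewing."""
    text = body_text.lower()
    empty_phrases = ("no posts found", "no results", "nothing found",
                     "page not found", "0 items")
    if any(phrase in text for phrase in empty_phrases):
        return True
    return len(text.split()) < min_words

# A typical empty category-archive page
print(looks_like_soft_404("Sorry, no posts found in this category."))  # True
```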

Why Google hates soft 404s: They waste crawl budget on pages with no value and dilute your site's quality signals.


Unauthorized Request (401)

What it means: The page requires login/authentication. Googlebot can't access it.

How to fix it:

  • If the page should be public: remove the authentication requirement for that URL
  • If it's truly behind a login wall (private content): add it to your robots.txt Disallow list so Google stops wasting crawl budget on it

🟡 EXCLUDED (Not indexed, usually by design)

These aren't always problems — but some are.


Crawled — Currently Not Indexed ⚠️

What it means: Google successfully crawled the page and chose NOT to index it. This is different from a technical error — Google made a judgment call that the page isn't worth indexing.

Common causes:

  • Thin content: Page is too short or low-value
  • Duplicate content: Content is very similar to another indexed page
  • Low-quality signals: Thin affiliate pages, boilerplate content, auto-generated pages
  • New pages: Sometimes Google just needs more time (especially for new domains)

How to fix it:

  1. Improve content depth — add unique information, expert insights, original data
  2. Add internal links pointing to the page (signals importance to Google)
  3. Check for duplicate content using a site: search: site:yourdomain.com "unique phrase from page" — if multiple pages return, you have duplication
  4. Get external links pointing to the page (backlinks signal authority)
  5. Wait — for new domains, this can take weeks even with good content
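Alongside the `site:` search in step 3, you can compare two pages' text directly. This sketch uses `difflib` for a word-level similarity score; the 0.8 threshold is an illustrative assumption, and the page texts are made up.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Word-level similarity in [0, 1]; scores near 1.0 suggest that
    Google may treat the two pages as duplicates."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

# Two hypothetical product descriptions differing by one word
page_a = "Our blue widget is durable lightweight and affordable"
page_b = "Our blue widget is durable lightweight and cheap"
score = similarity(page_a, page_b)
print(round(score, 2))  # well above a 0.8 duplicate threshold
```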

This is one of the most common and frustrating GSC errors. The fix is almost always "make the content better."


Discovered — Currently Not Indexed ⚠️

What it means: Google found the URL (from your sitemap or internal links) but hasn't crawled it yet. It's in the queue.

Causes:

  • Low crawl budget (common on large sites or low-authority domains)
  • Page is new and Google hasn't gotten to it yet
  • Googlebot found the URL but deprioritized it based on site authority

How to fix it:

  1. Request indexing via the URL Inspection tool in GSC (works for individual pages)
  2. Build internal links to the page from already-indexed pages
  3. Submit/update your sitemap in GSC
  4. Get external backlinks — they signal to Google the page is worth crawling
  5. Improve overall site authority (backlinks to your homepage/domain)

For new sites: This is normal for the first 1-3 months. Keep publishing quality content and building links.


Duplicate, Google Chose Different Canonical

What it means: Google found multiple versions of a page (e.g., http vs https, www vs non-www, URL parameters) and chose one as the canonical — but it's not the one you specified in your <link rel="canonical"> tag.

How to fix it:

  1. Ensure consistent internal linking — all your internal links should point to the canonical version of each URL
  2. Set up proper 301 redirects from non-canonical variants (e.g., HTTP → HTTPS, non-www → www)
  3. Verify your canonical tags are correct and consistent
  4. If Google keeps overriding your canonical: it suspects you're wrong. Look at whether the "other" version really does have different content or stronger signals.
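A quick way to reason about step 4 is to normalize both URLs the way steps 1-3 suggest and see whether they still differ. The sketch below collapses the trivial variants (scheme, www prefix, trailing slash) that 301 redirects should already be handling; anything that still differs after normalization, such as a query parameter, points at a genuinely different URL worth investigating. The URLs are hypothetical.

```python
from urllib.parse import urlsplit

def normalize(url):
    """Collapse the variants Google commonly folds together:
    scheme, www prefix, and trailing slash."""
    parts = urlsplit(url.lower())
    host = parts.netloc.removeprefix("www.")
    path = parts.path.rstrip("/") or "/"
    return host + path + ("?" + parts.query if parts.query else "")

def canonical_conflict(declared, google_chosen):
    """True if the two URLs differ even after normalization."""
    return normalize(declared) != normalize(google_chosen)

# Same page, trivially different URL: redirects should resolve this
print(canonical_conflict("https://www.example.com/page/",
                         "https://example.com/page"))       # False
# A parameterized variant: a real conflict to investigate
print(canonical_conflict("https://example.com/page",
                         "https://example.com/page?ref=nav"))  # True
```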

Alternate Page with Proper Canonical Tag

What it means: This page has a canonical pointing to a different URL, and Google is respecting it. The canonical target URL should be indexed instead.

Is this a problem? Usually not — it means your canonical tags are working. Only investigate if:

  • The target canonical URL isn't indexed either
  • You didn't intentionally set up canonicalization this way

Excluded by 'noindex' Tag

This page intentionally has a noindex tag. Not a problem unless you added it by mistake.


Page with Redirect

URL redirects to another URL. The destination URL should be indexed, not this one. Expected behavior.


How to Prioritize Which Errors to Fix First

With limited time, tackle errors in this order:

  1. Server errors (5xx) — Most urgent. These mean pages are completely broken.
  2. Soft 404s on important pages — Money pages and key landing pages returning soft 404 is critical
  3. Crawled — Currently Not Indexed on key pages — High-priority content not being indexed is a direct traffic loss
  4. Accidentally blocked by robots.txt or noindex — Easy fix, high impact
  5. Discovered — Currently Not Indexed on new pages — Often resolves with time + internal links
  6. Duplicate content issues — Longer fix, but worth addressing for site-wide quality
  7. 404s on pages with no backlinks or traffic — Lowest priority, can ignore

The URL Inspection Tool: Your Diagnostic Partner

For any specific URL showing errors, use the URL Inspection tool (magnifying glass icon at the top of GSC):

  1. Enter the URL
  2. Click "Test Live URL" to see what Google sees right now
  3. Check: Is it crawlable? Is the canonical correct? What HTTP status is returned? Is there a noindex tag?

The URL Inspection tool shows you exactly what Googlebot sees — not what your browser sees. This distinction matters for JavaScript-rendered sites where content may not be visible to crawlers.


Frequently Asked Questions

How long does it take for Coverage errors to be resolved after I fix them?

After fixing the underlying issue and requesting validation (or reindexing via URL Inspection), most simple errors resolve within 1-2 weeks as Googlebot recrawls affected pages. Server errors and noindex removals tend to clear faster; duplicate content issues can take longer.

Should I be worried about pages in the "Excluded" bucket?

Not necessarily. Many excluded pages are intentionally excluded (pagination pages, filtered views, thank-you pages). Focus on excluded pages that should be indexed — primarily your main content pages, product pages, and blog posts.

Why does my page say "Crawled - Currently Not Indexed" even though it has good content?

This often happens on new domains that haven't built authority yet. Google is conservative about indexing from low-authority sites. Accelerate the process by building internal links to the page, getting even one quality external backlink, and making sure the content is genuinely useful and comprehensive.

What's the difference between robots.txt blocking and noindex?

robots.txt blocks Googlebot from crawling the URL — it never sees the content. noindex allows crawling but instructs Google not to add the page to its index. Use noindex for pages you want crawled but not indexed (e.g., login pages). Use robots.txt for pages you don't want crawled at all (e.g., admin panels).

My sitemap has 500 pages but only 200 are indexed. Is that normal?

Depends on your site's age and authority. For a new site, indexing 40% is reasonable in the first 3-6 months. As you build authority and quality, Google will index more. Focus on ensuring your most important 50-100 pages are indexed first — these should have the most internal links and the best content.

How do I request Google to re-index a page after fixing a Coverage error?

In GSC: open the URL Inspection tool, enter the URL, click "Request Indexing." This pushes the URL to the front of Google's crawl queue. Note: GSC limits you to ~10 manual requests per day, so prioritize your most important pages.


Coverage errors fixed? Next step: make sure your structured data is valid so indexed pages can earn rich snippets. → Google Search Console Rich Results: How to Fix Structured Data Errors

Put These Tips Into Action

Connect your Google Search Console and let our AI find your biggest opportunities.

Get Started Free