Tags: Google Search Console, URL Inspection, Indexing, Crawling, SEO

Google Search Console URL Inspection Tool: How to Submit URLs & Fix Indexing Issues

Learn how to use Google Search Console's URL Inspection Tool to check indexing status, submit URLs to Google, diagnose crawl issues, and force Google to crawl your website.

Search Console Tools Team · 17 min read

When a new page isn't showing up in Google, or a recently updated post isn't reflecting your changes, the URL Inspection Tool in Google Search Console is your first stop. It tells you exactly how Google sees any URL on your site — whether it's indexed, when it was last crawled, what canonical URL Google chose, and why a page might be excluded from search results.

This guide covers everything: how to use the tool, how to submit URLs to Google for indexing, how to diagnose and fix common indexing problems, and how to use it as a real-time Google index checker.


What Is the URL Inspection Tool?

The URL Inspection Tool (formerly called "Fetch as Google" or "Fetch and Render") shows you Google's indexed version of any URL in your property. When you inspect a URL, GSC returns:

  • Index status — Is the page indexed? Is it the canonical version Google chose?
  • Last crawl date — When did Googlebot last visit this page?
  • Crawled page screenshot — What did Google actually see when it rendered the page?
  • Discovered canonicals — What canonical URL did you declare vs. what did Google select?
  • Structured data — Is your schema valid? Any errors?
  • Mobile usability — Is the page mobile-friendly per Google's assessment?
  • HTTP response — What status code did Google receive?

This is different from simply checking if a URL appears in search results — it shows you Google's authoritative internal record for that URL.


How to Access the URL Inspection Tool

  1. Open Google Search Console
  2. Select the property that contains the URL you want to inspect
  3. Click URL Inspection in the left sidebar
  4. Type or paste any URL from your site into the inspection bar at the top
  5. Press Enter or click the search icon

You can also inspect any URL from within GSC reports by clicking on a URL row — an Inspect URL icon appears in the top-right corner of most detail panels.

Tip: The URL must belong to your verified property. If you inspect a URL from a different domain or subdomain, GSC will prompt you to switch properties.


How to Submit a URL to Google for Indexing

Submitting a URL to Google tells Googlebot to crawl and re-index that specific page. This is useful for:

  • New pages — Freshly published content that isn't indexed yet
  • Updated pages — Significant edits (new sections, title changes, date updates)
  • Fixed pages — After resolving a crawl error, noindex tag, or redirect loop
  • High-priority pages — Pages you need indexed quickly (product launches, time-sensitive content)

Step-by-Step: Request Indexing

  1. Open the URL Inspection Tool and inspect your URL
  2. Review the status — if it says "URL is not on Google" or shows a stale crawl date, proceed
  3. Click the "Request Indexing" button (blue button at the top of the results)
  4. Wait for the confirmation dialog — it will say the request was added to the priority crawl queue
  5. Close the dialog

What happens next: Google adds your URL to a priority crawl queue. Indexing typically happens within a few hours to a few days for new pages, and faster for established sites. You can re-check the URL after 24–72 hours to confirm the index status updated.

Important: The Request Indexing button is rate-limited. As of 2024, Google limits each property to approximately 10–12 indexing requests per day via the tool. For bulk submissions, use a sitemap instead.

Alternative: Submit a Sitemap

For new sites or bulk indexing needs, submitting your sitemap is more efficient than requesting individual URLs:

  1. Go to Sitemaps in the left sidebar
  2. Enter your sitemap URL (typically yourdomain.com/sitemap.xml)
  3. Click Submit

Google will crawl your sitemap and discover all listed URLs. This doesn't guarantee immediate indexing, but it tells Google exactly which pages you want crawled.
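If you're creating the sitemap by hand, a minimal valid file looks like this (yourdomain.com, the paths, and the dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://yourdomain.com/</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
  <url>
    <loc>https://yourdomain.com/blog/new-post/</loc>
    <lastmod>2024-01-20</lastmod>
  </url>
</urlset>
```

Only `<loc>` is required per URL; `<lastmod>` helps Google prioritize re-crawls of recently changed pages.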


How to Use URL Inspection as a Google Index Checker

Want to know if a specific page is indexed? The URL Inspection Tool is the most accurate way to check — more reliable than site: searches, which don't show every indexed URL.

Index Status Messages

| Status | Meaning | Action Needed |
|--------|---------|---------------|
| URL is on Google | Page is indexed and eligible for search | None — page is live |
| URL is not on Google | Not indexed | Check the reason below |
| URL is on Google, but has issues | Indexed but has warnings | Review warnings (e.g., schema errors) |
| Excluded | Intentionally or accidentally excluded | Review specific exclusion reason |

Common "Excluded" Reasons and Fixes

Crawled — currently not indexed
Google crawled the page but chose not to index it. Usually a quality or duplication signal.

  • Fix: Improve content depth, ensure the page has unique value, remove thin/duplicate content

Discovered — currently not indexed
Google found the URL (via sitemap or links) but hasn't crawled it yet.

  • Fix: Request indexing, improve internal linking to the page, verify crawl budget isn't exhausted

Blocked by robots.txt
Your robots.txt file is preventing Googlebot from accessing this URL.

  • Fix: Check /robots.txt for a Disallow rule covering this path; remove or adjust it
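You can test a Disallow rule locally before waiting on a re-crawl — a minimal sketch using Python's standard library (the robots.txt content and paths here are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content -- in practice, fetch yours
# from https://yourdomain.com/robots.txt
robots_txt = """
User-agent: *
Disallow: /private/
Disallow: /search
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Googlebot falls back to the "*" group here, since this file
# has no Googlebot-specific group
print(rp.can_fetch("Googlebot", "/blog/my-post/"))  # True (allowed)
print(rp.can_fetch("Googlebot", "/private/page"))   # False (blocked)
```

If a path you expect Google to crawl comes back `False`, you've found the Disallow rule to remove or adjust.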

noindex tag detected
A <meta name="robots" content="noindex"> tag or an X-Robots-Tag: noindex HTTP header is present on the page.

  • Fix: Remove the noindex directive and request re-indexing
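To verify the directive is really gone after a deploy, you can scan the HTML and response headers yourself — a stdlib sketch with made-up sample HTML (`has_noindex` is an illustrative helper, not a GSC API):

```python
from html.parser import HTMLParser

class NoindexFinder(HTMLParser):
    """Collects robots meta directives from a page's HTML."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives.append(a.get("content", "").lower())

def has_noindex(html, headers=None):
    # The X-Robots-Tag HTTP header counts too, not just the meta tag
    header_val = (headers or {}).get("X-Robots-Tag", "").lower()
    if "noindex" in header_val:
        return True
    finder = NoindexFinder()
    finder.feed(html)
    return any("noindex" in d for d in finder.directives)

sample = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
print(has_noindex(sample))                          # True
print(has_noindex("<html><head></head></html>"))    # False
```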

Canonical points to a different page
Google selected a different URL as the canonical — often the result of duplicate content.

  • Fix: Review your canonical tags; ensure the rel=canonical link points to the correct URL

Soft 404
The page returns a 200 status code but appears to have no content (an empty results page, a "no results found" message, etc.).

  • Fix: Add meaningful content or return a proper 404/410 status
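For your own audits, a rough heuristic can flag soft-404 candidates before Google does — a sketch where the byte threshold and marker strings are made-up assumptions, not Google's actual criteria:

```python
def looks_like_soft_404(status, body, min_bytes=1500):
    """Heuristic only: a 200 response with a near-empty body or a
    'no results' message is a soft-404 candidate worth inspecting manually."""
    markers = ("no results found", "page not found", "0 results")
    text = body.lower()
    return status == 200 and (
        len(body) < min_bytes or any(m in text for m in markers)
    )

print(looks_like_soft_404(200, "<html><body>No results found</body></html>"))  # True
print(looks_like_soft_404(404, ""))                                            # False
```

A real 404 never counts as a soft 404 — the problem is specifically a 200 status on an empty page.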

Redirect
The URL redirects to another URL.

  • Review: Is this redirect correct? If yes, the destination URL should be indexed instead

How to Get Google to Crawl Your Website

If Google isn't crawling your site regularly, these steps help:

1. Check Your Crawl Stats

Go to Settings → Crawl stats in GSC. This shows:

  • Total crawl requests per day
  • Total download size
  • Average response time
  • Crawl errors

A healthy site sees regular daily crawling. If crawl frequency dropped, check for server errors or dramatic response time increases.

2. Verify Your Sitemap Is Submitted and Indexed

In Sitemaps, your submitted sitemaps should show a "Success" status. If Google reports 0 URLs discovered from your sitemap, check:

  • Is the sitemap URL accessible without authentication?
  • Does it contain the correct URLs (not staging/dev URLs)?
  • Is it properly formatted XML?
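You can sanity-check those three points by parsing the sitemap yourself — a stdlib sketch (`audit_sitemap` and the sample sitemap are illustrative, not a Google tool):

```python
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def audit_sitemap(xml_text, expected_host):
    """Return (url_count, wrong_host_urls) for a sitemap document.
    Raises ParseError if the XML is malformed."""
    root = ET.fromstring(xml_text)
    locs = [el.text.strip() for el in root.findall("sm:url/sm:loc", NS)]
    wrong = [u for u in locs if expected_host not in u]
    return len(locs), wrong

# Made-up sitemap containing a stray staging URL
sitemap = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://staging.example.dev/page</loc></url>
</urlset>"""

count, wrong = audit_sitemap(sitemap, "example.com")
print(count)   # 2
print(wrong)   # ['https://staging.example.dev/page']
```

Zero URLs discovered usually means either malformed XML (the parse raises) or every `<loc>` pointing at the wrong host.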

3. Strengthen Internal Linking

Pages with no internal links get crawled less frequently. Make sure every important page on your site has at least one internal link from another crawled page.
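One way to audit this at scale is to count same-host links in each page's HTML — a stdlib sketch using hypothetical example.com markup:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse, urljoin

class LinkCounter(HTMLParser):
    """Counts distinct links on a page that point back into the same site."""
    def __init__(self, base_url):
        super().__init__()
        self.host = urlparse(base_url).netloc
        self.base = base_url
        self.internal = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href:
            return
        absolute = urljoin(self.base, href)  # resolve relative hrefs
        if urlparse(absolute).netloc == self.host:
            self.internal.add(absolute)

# Hypothetical page HTML -- in practice, fetch each page you want to audit
html = '''<a href="/about/">About</a>
<a href="https://example.com/blog/">Blog</a>
<a href="https://other.com/">External</a>'''

counter = LinkCounter("https://example.com/")
counter.feed(html)
print(len(counter.internal))  # 2
```

Run this over every crawled page and invert the result: pages that appear as a link target zero times are your orphans.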

4. Fix Server Errors

If Googlebot encounters repeated 5xx errors, it will crawl your site less. In the Coverage report, filter by "Server error (5xx)" to see affected URLs.

5. Remove Crawl Budget Waste

Large sites may have crawl budget issues. Common budget wasters:

  • Faceted navigation (e.g., /products?color=red&size=M) generating thousands of near-duplicate URLs
  • Session IDs in URLs
  • Duplicate content across www vs. non-www versions
  • Internal search result pages indexed by Google

Block these with robots.txt Disallow rules or noindex meta tags.
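As a sketch, robots.txt rules for the patterns above might look like this (the paths and parameter names are placeholders — adjust them to your own URL structure):

```
User-agent: *
# Block faceted-navigation parameter combinations
Disallow: /*?color=
Disallow: /*?size=
# Block internal search result pages
Disallow: /search
```

Note that Googlebot supports the `*` wildcard in Disallow paths; test rules before deploying, since an over-broad pattern can block real pages.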


Interpreting the URL Inspection Results

The "Coverage" Section

The Coverage section is the most important part of the inspection result. It shows:

  • User-declared canonical: The rel=canonical tag in your page's <head>
  • Google-selected canonical: The URL Google actually uses in its index

If these don't match, Google has overridden your canonical declaration. This usually means Google found a "better" canonical (often due to duplicate content on multiple URLs). To reclaim control:

  • Make your preferred URL the most internally linked version
  • Ensure your preferred URL loads fastest
  • Verify the alternate URLs have proper canonicals pointing back to your preferred URL
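To check what each alternate URL actually declares, you can extract the canonical tag from its HTML — a stdlib sketch (`CanonicalFinder` is an illustrative helper, with made-up markup):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Extracts the user-declared canonical from <link rel="canonical">."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel", "").lower() == "canonical":
            self.canonical = a.get("href")

finder = CanonicalFinder()
finder.feed('<head><link rel="canonical" href="https://example.com/page/"></head>')
print(finder.canonical)  # https://example.com/page/
```

Running this across the duplicate URLs quickly reveals an alternate whose canonical points at itself instead of your preferred URL.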

The "Enhancements" Section

Shows structured data validity, mobile usability, and Core Web Vitals status for the inspected URL. Common issues:

  • Structured data errors: Missing required fields → fix the schema on that specific page
  • Mobile usability: Viewport not set, tap targets too small, content wider than screen
  • Core Web Vitals: LCP too slow, CLS too high → performance work needed

The Crawled Page Screenshot

The rendered screenshot shows you what Google actually sees when it renders your page. Key things to check:

  • Does the main content appear in the screenshot? (If not, it might be loaded via JavaScript that Googlebot can't execute)
  • Do images appear? (Blocked images can be a signal issue)
  • Is the page layout intact?

Bulk URL Submission: Using the Indexing API

For sites that need to index many URLs quickly (news sites, job boards, ecommerce), Google offers the Indexing API — a programmatic way to notify Google when URLs are added or removed.

The Indexing API is officially documented for job posting and livestream pages, but many SEOs use it for broader indexing requests. You'll need:

  1. A Google Cloud project with the Indexing API enabled
  2. A service account with "Owner" permissions on your GSC property
  3. API calls to https://indexing.googleapis.com/v3/urlNotifications:publish

For most content sites, the URL Inspection Tool + sitemap submissions are sufficient. The Indexing API is only worth the setup complexity for high-velocity sites publishing 50+ new URLs per day.
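A sketch of the publish call, assuming the google-auth package and an existing service-account key file (service-account.json and the notified URL are placeholders):

```python
SCOPES = ["https://www.googleapis.com/auth/indexing"]
ENDPOINT = "https://indexing.googleapis.com/v3/urlNotifications:publish"

def build_notification(url, deleted=False):
    # URL_UPDATED for new/changed pages, URL_DELETED for removed ones
    return {"url": url, "type": "URL_DELETED" if deleted else "URL_UPDATED"}

def publish(url, key_file="service-account.json"):
    # Imports live here so the module loads even without google-auth installed
    from google.oauth2 import service_account
    from google.auth.transport.requests import AuthorizedSession

    creds = service_account.Credentials.from_service_account_file(
        key_file, scopes=SCOPES)
    session = AuthorizedSession(creds)
    resp = session.post(ENDPOINT, json=build_notification(url))
    resp.raise_for_status()  # a 403 usually means the account isn't a property owner
    return resp.json()

print(build_notification("https://yourdomain.com/jobs/new-listing/"))
# {'url': 'https://yourdomain.com/jobs/new-listing/', 'type': 'URL_UPDATED'}
```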


URL Inspection Tool Limitations

| Limitation | Workaround |
|------------|------------|
| Rate limit: ~10 requests/day | Use sitemaps for bulk submissions |
| Not real-time | Crawling happens within hours to days after request |
| Can't inspect external URLs | Only works for verified properties |
| Screenshot may differ from live site | Check live test (green "Test Live URL" button) |
| No bulk inspection | Use Coverage report for site-wide view |

Test Live URL vs. Cached Version

At the top of inspection results, you can toggle between Google's cached version (what's actually indexed) and Live Test (what Google would see if it crawled right now). Use Live Test after making page changes to confirm your updates are visible before requesting re-indexing.


Quick Reference: URL Inspection Workflow

Use this workflow when a page isn't appearing in search:

1. Open URL Inspection → inspect the specific URL
2. Status = "URL is on Google"?
   → YES: Page is indexed. Issue is ranking, not indexing.
   → NO: Continue below ↓

3. Check "Why URL is not on Google":
   - robots.txt blocked? → Fix robots.txt, re-test
   - noindex tag? → Remove noindex, request indexing
   - Soft 404? → Add content or return proper 404
   - Crawled not indexed? → Improve content quality
   - Discovered not indexed? → Request indexing + add internal links

4. Check canonical:
   - User-declared ≠ Google-selected? → Fix canonical tags

5. Request Indexing → wait 24-72h → re-inspect

URL Inspection Tool vs. Coverage Report: When to Use Each

The URL Inspection Tool and the Coverage (Index) report serve different purposes — a common confusion point.

URL Inspection Tool: Use it when you need to diagnose a specific URL. You enter one URL and get Google's exact record for that page: index status, last crawl, canonical, structured data, rendered screenshot. It's page-level forensics.

Coverage Report: Use it when you need to understand site-wide patterns. It groups all your URLs into status buckets (Valid, Excluded, Error, Warning) and lets you filter by error type across thousands of pages. You can't render screenshots here, but you can spot patterns — like "312 pages have noindex tags" or "89 pages are soft 404s."

The typical workflow: Coverage report to find which pages have issues → URL Inspection Tool to understand why a specific page has that issue and test the fix.


JavaScript Rendering and the URL Inspection Tool

If your site uses JavaScript-heavy frameworks (React, Next.js, Vue, Angular), the URL Inspection Tool is especially important — because Googlebot renders JavaScript, but not always in the way you'd expect.

When you inspect a JS-rendered URL, pay close attention to the crawled page screenshot. If your main content doesn't appear in the screenshot — even though it looks fine in your browser — it means Googlebot couldn't execute the JavaScript that renders that content.

Common causes:

  • JavaScript blocked by robots.txt (e.g., Disallow: /_next/static/)
  • External scripts that load content (Google may not execute third-party fetches)
  • Content that requires user interaction to render (click, scroll, hover events)
  • Errors in the rendering process (check browser console for clues)

Testing the fix: After updating your JS rendering setup, use "Test Live URL" in the URL Inspection results to see what Google would see right now — before submitting for re-indexing. This saves a crawl request if the fix didn't work.

Note for Next.js / SSR sites: Server-side rendered content generally crawls without issues since the HTML is pre-rendered. Client-side-only rendering is where problems occur. If you're seeing rendering issues, switching to SSR or SSG for key pages is a more permanent fix than working around Googlebot's limitations.


Diagnosing "Crawled — Currently Not Indexed" at Scale

"Crawled — currently not indexed" is the most frustrating URL Inspection status because it means Google found your page and crawled it — but decided not to include it in the index. The URL Inspection Tool won't tell you the exact reason, because Google doesn't expose it. But the patterns are predictable.

The 4 most common causes:

1. Thin or duplicate content
If the page has under 300 words of unique text, covers the same topic as another page on your site, or is largely templated (e.g., auto-generated pages), Google will often crawl it but decline to index it.

Fix: Increase the unique content depth on the page. If it's a near-duplicate of another page, consolidate them with a canonical or 301 redirect.

2. No internal links
Pages with zero or one internal link signal low importance. Google may crawl them (it followed a link from somewhere) but leave them out of the index.

Fix: Add 2-3 internal links from high-authority pages on your site to the affected page.

3. Recent publication with low crawl history
Brand new pages on new sites sometimes sit in "crawled — currently not indexed" limbo for weeks. Google is evaluating whether the content is worth indexing.

Fix: Request indexing, build internal links, wait. New sites need to demonstrate consistent quality before Google indexes aggressively.

4. Page quality signals flagged during rendering
If the rendered version of the page looks different from what a user sees (JS issues, slow load), Google may deprioritize indexing.

Fix: Use "Test Live URL" to compare Google's view to the actual user experience. Fix any rendering gaps.


Frequently Asked Questions

How long does it take for Google to index a URL after I request indexing?

After clicking "Request Indexing," most URLs are crawled within a few hours to 3 days. For new sites with low crawl frequency, it can take up to a week. Sites that publish content regularly tend to get faster re-crawls. Check back after 24 hours using the URL Inspection Tool to see if the crawl date updated.

How many URLs can I submit via the URL Inspection Tool?

Google rate-limits Request Indexing to approximately 10–12 requests per property per day via the URL Inspection Tool interface. For more URLs, submit or resubmit your sitemap (no per-day limit), or use the Indexing API for programmatic bulk submissions.

Is the URL Inspection Tool the same as "Fetch as Google"?

Yes. "Fetch as Google" was the old name in Google Webmaster Tools. It was renamed and upgraded to "URL Inspection Tool" in 2018. The Request Indexing functionality replaces the old "Fetch and Render" + "Submit to Index" workflow.

Why does Google select a different canonical than the one I declared?

Google may override your rel=canonical if it finds stronger signals pointing to a different URL. Common reasons: another version (e.g., http vs. https, www vs. non-www) has more backlinks or internal links; duplicate content exists across multiple URLs; your canonical tag is on a redirected or unavailable URL. Fix by making your preferred URL the strongest signal in all dimensions.

Can I use the URL Inspection Tool to check if my competitor's pages are indexed?

No. The URL Inspection Tool only works for URLs within your verified GSC property. For competitive analysis, use a site:competitor.com/specific-url search as a rough check, or use paid tools like Ahrefs or Semrush that crawl external domains.

My page shows "URL is on Google" but I can't find it in search results. Why?

Being indexed doesn't mean ranking. "URL is on Google" confirms the page is in Google's index and eligible to appear — but Google may rank it on page 10 or beyond. Use the Performance report to check if the page has any impressions and what queries it's appearing for. Low impressions usually mean a ranking issue, not an indexing issue.

What's the difference between "Test Live URL" and the regular inspection result?

The regular inspection result shows Google's cached indexed version — what Google currently has in its index, based on the last crawl. "Test Live URL" fetches your page in real-time, as if Googlebot is crawling it right now. Use "Test Live URL" after making changes to verify Google can see your updates before requesting re-indexing. If Test Live URL shows the fix but the cached version still shows the old state, requesting indexing will update the cache.

Why does the URL Inspection Tool show a different crawl date than expected?

Google doesn't crawl every page at the same frequency. High-authority pages, frequently updated pages, and pages with many incoming internal links get crawled more often. A page that hasn't been crawled in weeks or months may need fresh internal links or a manual re-index request to prompt Googlebot to revisit it.

Can I use URL Inspection for AMP pages?

Yes, but AMP and non-AMP versions are treated as separate URLs. If you have AMP enabled, inspect both versions. GSC may show different statuses for the canonical URL vs. the AMP URL. Make sure your AMP pages have proper rel=amphtml / rel=canonical relationships set up correctly — the URL Inspection results will flag mismatches.


Using URL Inspection as Part of Your Regular SEO Workflow

URL Inspection isn't just for troubleshooting emergencies. Build it into your normal publishing workflow:

  1. After publishing any new page: Inspect the URL → Confirm "URL is not on Google" (expected for new pages) → Request indexing → Check back in 24-48 hours
  2. After significant content updates: Inspect → Use "Test Live URL" to confirm changes visible → Request re-indexing
  3. After fixing technical issues (noindex removal, canonical fix, robots.txt change): Inspect → Verify the fix is reflected in the live test → Validate fix via Coverage report
  4. Monthly audit: Pull your top 20 pages by organic traffic → Inspect each → Confirm crawl dates are recent and no new issues appeared

This 5-minute routine catches issues before they compound into traffic drops.


For the full picture of your site's health in GSC, see: Core Web Vitals in Google Search Console — how to fix LCP, INP, and CLS and Index Coverage Errors — how to fix every error type.

Put These Tips Into Action

Connect your Google Search Console and let our AI find your biggest opportunities.

Get Started Free