What does "Discovered – Currently Not Indexed" mean?
"Discovered – Currently Not Indexed" in Google Search Console means Google has found your URL — typically through your sitemap or an internal link — but has not yet crawled or added it to the search index. Google knows the page exists but has not visited it.
This is distinct from "Crawled – Currently Not Indexed," where Google visited the page but chose not to index it. The two statuses have different root causes and require different fixes.
The most common causes: insufficient crawl budget, weak internal linking, slow page speed, and low perceived page priority relative to the rest of your site.
If you've published content and checked Google Search Console only to find your pages stuck in "Discovered – Currently Not Indexed" limbo, you're not alone. According to the Technical SEO State of Indexing Report published May 7, 2026, 23% of pages submitted via XML sitemap remain unindexed after 30 days — and the majority are stuck at the "Discovered" stage, not the "Crawled" stage.
The frustrating part is that Google has acknowledged your page exists. It's in the queue. But the queue is longer than most site owners realize, and Google's crawl prioritization algorithm is making active decisions about which pages are worth visiting — and when.
Understanding the Two "Not Indexed" Statuses
Google Search Console reports two distinct "not indexed" statuses that are frequently confused. Getting the diagnosis right is the prerequisite for choosing the correct fix.
Discovered – Currently Not Indexed
- Google found the URL but has not yet crawled it
- The page is in Google's crawl queue, waiting its turn
- Root cause: crawl budget, internal link priority, or page speed
- Fix focus: increase crawl priority for the page
Crawled – Currently Not Indexed
- Google visited the page but chose not to index it
- Google made an active quality judgment against the page
- Root cause: thin content, duplicate content, or low quality signals
- Fix focus: improve content quality and uniqueness
Fig. 1 — Google Search Console: "Discovered – Currently Not Indexed" in the Pages report. Alt: "Google Search Console discovered currently not indexed coverage report 2026"
Why Google Leaves Pages in "Discovered" Status
Google does not crawl every page it discovers immediately. Googlebot operates within resource constraints and uses a prioritization algorithm to decide which pages to crawl first, how often, and how deeply. When your page sits in "Discovered – Currently Not Indexed," Google's algorithm has assigned it a low crawl priority relative to other pages in its queue.
Crawl Budget Exhaustion
Large sites with thousands of pages may exhaust their crawl budget before Googlebot reaches lower-priority pages. Every low-quality, duplicate, or parameter-generated URL wastes budget that could go to your new content.
Weak or No Internal Links
Pages with few or no internal links pointing to them are treated as low-priority by Googlebot. Internal links are the primary signal Google uses to understand page importance within your site.
Slow Page Speed
Googlebot has a time budget per crawl session. Pages that load slowly consume more of that budget, causing Googlebot to crawl fewer pages per session and deprioritize slow-loading URLs.
Low Domain Authority / New Site
New domains or sites with few external backlinks receive less crawl budget. The algorithm allocates more crawl resources to sites it has established trust in.
Sitemap Issues
Sitemaps with errors, outdated URLs, or pages blocked by robots.txt can confuse Googlebot's discovery process and delay crawling of valid pages.
Server Response Issues
Intermittent 5xx errors or slow server response times during Googlebot's crawl attempts cause it to back off and retry less frequently, extending the "Discovered" period.
"Crawl budget is not a fixed number — it's a dynamic allocation based on how much Google trusts your site and how efficiently it can crawl it. Every wasted crawl is a vote against your new content."
— Google Search Central documentation, updated May 2026

The Fix Process: Ordered by Impact
The following fixes are ordered by their typical impact. Start with Fix 1 and work down — in most cases, the first two or three fixes will resolve the issue.
Fix 1: Add Strong Internal Links from High-Authority Pages
This is the single highest-impact fix. Internal links are the primary mechanism Googlebot uses to discover and prioritize pages within your site. A page with no internal links pointing to it is effectively invisible to Googlebot's crawl prioritization algorithm.
The quality of the linking page matters as much as the quantity of links. A single internal link from a high-traffic, well-indexed page (your homepage, a popular category page, or a high-ranking article) is worth more than ten links from low-authority pages.
- Identify your 10–20 highest-traffic pages using Google Search Console or your analytics platform
- Find natural opportunities to add contextual links from those pages to your unindexed target URL
- Use descriptive anchor text that includes the target page's primary keyword — not generic text like "click here"
- Add the link within the body content, not just in navigation or footer — body links carry more weight
According to the Internal Link Impact Analysis published May 9, 2026, pages that receive 3+ internal links from indexed, high-traffic pages are indexed within 14 days in 67% of cases.
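A quick way to act on this is to check which of your top pages already link to the unindexed URL and which still need a link added. Below is a minimal sketch using the requests and beautifulsoup4 libraries; the target URL and the list of high-traffic pages are placeholders you would replace with your own (for example, exported from the Performance report).

```python
# Check which high-traffic pages already contain a link to the unindexed target URL.
# Assumes: pip install requests beautifulsoup4 ; URLs below are placeholders.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

TARGET_URL = "https://example.com/new-guide/"   # the unindexed page
HIGH_TRAFFIC_PAGES = [                          # your 10-20 top pages
    "https://example.com/",
    "https://example.com/popular-category/",
    "https://example.com/top-ranking-article/",
]

def links_to_target(page_url: str, target: str) -> bool:
    """Return True if any <a href> on the page resolves to the target URL."""
    resp = requests.get(page_url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return any(
        urljoin(page_url, a["href"]).rstrip("/") == target.rstrip("/")
        for a in soup.find_all("a", href=True)
    )

for page in HIGH_TRAFFIC_PAGES:
    status = "already links" if links_to_target(page, TARGET_URL) else "no link yet"
    print(f"{page}: {status} -> {TARGET_URL}")
```

Pages reported as "no link yet" are your candidates for a contextual body link.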
Fix 2: Request Indexing via URL Inspection Tool
Google Search Console's URL Inspection tool allows you to manually request that Googlebot crawl a specific URL. This does not guarantee immediate indexing, but it signals to Google that the page is a priority and typically accelerates the crawl timeline for pages already in the "Discovered" queue.
To use it: open Google Search Console → paste your URL into the search bar at the top → click "Request Indexing" in the URL Inspection panel.
Important limitation: Google limits the number of indexing requests per day. Reserve this tool for your highest-priority pages — don't use it as a substitute for fixing the underlying crawl priority issues.
Fix 3: Audit and Clean Your XML Sitemap
Your XML sitemap should contain only URLs you want Google to index — live pages returning 200 status codes, not blocked by robots.txt or noindex tags. A sitemap containing broken, redirected, or blocked URLs wastes crawl budget and reduces Google's confidence in your sitemap as a reliable signal.
- Remove any URLs returning 3xx redirects — link directly to the final destination URL
- Remove any URLs blocked by robots.txt or carrying a noindex meta tag
- Remove any URLs returning 4xx or 5xx errors
- Ensure your sitemap is submitted in Google Search Console and shows no errors in the Sitemaps report
- Keep your sitemap updated — add new pages promptly and remove deleted pages
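To run this audit at scale rather than URL by URL, a short script can fetch the sitemap and flag entries that redirect, error out, or signal noindex. This is a rough sketch assuming a single standard sitemap.xml (no sitemap index handling, no rate limiting) and the requests library; the sitemap URL is a placeholder.

```python
# Flag sitemap entries that return non-200 status codes or signal noindex.
import requests
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

sitemap = requests.get(SITEMAP_URL, timeout=10)
urls = [loc.text.strip() for loc in ET.fromstring(sitemap.content).findall(".//sm:loc", NS)]

for url in urls:
    # allow_redirects=False so 3xx entries are reported instead of silently followed
    resp = requests.get(url, timeout=10, allow_redirects=False)
    problems = []
    if resp.status_code != 200:
        problems.append(f"status {resp.status_code}")
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        problems.append("noindex in X-Robots-Tag header")
    elif resp.status_code == 200 and 'name="robots"' in resp.text and "noindex" in resp.text.lower():
        problems.append("possible noindex meta tag (verify in the HTML)")
    if problems:
        print(f"{url}: {', '.join(problems)}")
```

Anything the script prints is a candidate for removal from the sitemap or for a fix on the page itself.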
Fix 4: Eliminate Crawl Budget Waste Across Your Site
If your site has a large number of low-value URLs consuming crawl budget, Googlebot may never reach your new content. Common sources of crawl budget waste:
- URL parameters: Faceted navigation, session IDs, and tracking parameters that generate thousands of near-duplicate URLs. Use canonical tags to consolidate these.
- Thin or duplicate pages: Tag pages, archive pages, and paginated pages with minimal unique content. Consider noindexing these if they don't serve a search purpose.
- Soft 404 pages: Pages that return a 200 status code but display "no results" or empty content. These should return proper 404 or 410 status codes.
- Redirect chains: Multiple hops in a redirect chain consume crawl budget. Consolidate to single-hop redirects wherever possible.
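Redirect chains in particular are easy to find with a script: request each URL without following redirects and count the hops. A minimal sketch with the requests library; the URL list is a placeholder (old URLs from your sitemap or internal link audit are good candidates).

```python
# Report redirect chains longer than one hop so they can be collapsed to a single hop.
import requests
from urllib.parse import urljoin

URLS_TO_CHECK = ["http://example.com/old-page"]  # placeholder list

for start_url in URLS_TO_CHECK:
    hops = []
    url = start_url
    for _ in range(10):  # safety cap against redirect loops
        resp = requests.get(url, timeout=10, allow_redirects=False)
        if resp.status_code in (301, 302, 307, 308):
            url = urljoin(url, resp.headers["Location"])  # Location may be relative
            hops.append(url)
        else:
            break
    if len(hops) > 1:
        print(f"{start_url}: {len(hops)} hops -> {' -> '.join(hops)}")
```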
Fix 5: Improve Page Speed and Core Web Vitals
Googlebot has a time budget per crawl session. Pages that take more than 2–3 seconds to respond consume a disproportionate share of that budget. Use Google PageSpeed Insights to identify your LCP, INP, and CLS scores. Priority fixes for crawl speed:
- Reduce server response time (Time to First Byte) to under 200ms
- Compress and convert images to WebP format
- Minify CSS and JavaScript files
- Implement a CDN to reduce geographic latency
- Enable browser caching for static assets
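Field tools like PageSpeed Insights remain the authoritative measurement, but you can sanity-check server response time from a script before and after changes. A rough sketch with the requests library: its elapsed attribute measures time until the response headers arrive, which is a reasonable proxy for Time to First Byte from the machine running the check. The URLs are placeholders.

```python
# Rough Time-to-First-Byte check for a handful of URLs.
import requests

URLS = ["https://example.com/", "https://example.com/new-guide/"]  # placeholders

for url in URLS:
    resp = requests.get(url, timeout=10, stream=True)  # stream=True: don't download the body
    ttfb_ms = resp.elapsed.total_seconds() * 1000       # time until headers were parsed
    print(f"{url}: ~{ttfb_ms:.0f} ms [{'OK' if ttfb_ms < 200 else 'SLOW'}]")
    resp.close()
```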
Fix 6: Verify robots.txt and Meta Tags Are Not Blocking the Page
This is a surprisingly common cause of indexing failures — particularly after site migrations, CMS updates, or staging environment configurations that accidentally carry over to production. Check three things:
- robots.txt: Ensure your target URL is not blocked by a Disallow rule. Check it in the robots.txt report in Google Search Console (Settings → robots.txt), which replaced the older robots.txt Tester.
- Meta robots tag: Check the page's HTML <head> for a <meta name="robots" content="noindex"> tag.
- X-Robots-Tag HTTP header: Some server configurations send a noindex directive via HTTP header. Check using the URL Inspection tool in Search Console, which shows any blocking signals. A combined check for all three signals is sketched below.
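This sketch uses the standard library's robotparser plus one request with beautifulsoup4 to report all three signals for a single URL. The URL is a placeholder, and "Googlebot" is the user-agent token the robots.txt rules are evaluated against.

```python
# Check the three common blocking signals for one URL:
# robots.txt Disallow, X-Robots-Tag header, and meta robots noindex.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

URL = "https://example.com/new-guide/"  # placeholder

# 1. robots.txt
parsed = urlparse(URL)
rp = RobotFileParser(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
rp.read()
print("robots.txt allows Googlebot:", rp.can_fetch("Googlebot", URL))

# 2. and 3. fetch the page once, then inspect the header and the meta tag
resp = requests.get(URL, timeout=10)
print("X-Robots-Tag header:", resp.headers.get("X-Robots-Tag", "(none)"))

meta = BeautifulSoup(resp.text, "html.parser").find("meta", attrs={"name": "robots"})
print("meta robots tag:", meta.get("content") if meta else "(none)")
```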
Fig. 2 — URL Inspection tool: requesting indexing for a discovered-but-not-indexed page. Alt: "Google Search Console URL inspection request indexing discovered not indexed 2026"
The Google Search Console Workflow for Diagnosing Indexing Issues
Beyond the URL Inspection tool, Google Search Console provides several reports that help you diagnose and monitor indexing issues systematically.
Pages Report (Coverage)
Navigate to Indexing → Pages. This report shows all URLs Google has discovered, organized by status. For "Discovered – Currently Not Indexed" pages: click the status to see the full list of affected URLs, export the list and cross-reference with your internal link audit, and note the trend over time — if the count is growing, you have a systemic crawl budget issue.
Crawl Stats Report
Navigate to Settings → Crawl Stats. This report shows how many pages Googlebot crawled per day, average response time, and crawl request breakdown by file type. Key signals:
- Declining crawl rate: Googlebot may be encountering server errors or slow response times that cause it to back off
- High proportion of non-HTML crawls: If Googlebot spends significant time on CSS, JavaScript, and images, it has less capacity for HTML pages
- Response time spikes: Correlate with dates when pages moved into "Discovered" status
Sitemaps Report
Navigate to Indexing → Sitemaps. Verify your sitemap is submitted, was successfully fetched recently, and shows no errors. A sitemap that hasn't been fetched in more than 7 days may indicate a submission or accessibility issue.
Special Case: New Sites and New Domains
For new domains (less than 6 months old) or sites with very few external backlinks, "Discovered – Currently Not Indexed" is especially common — and the fix timeline is longer. Google allocates crawl budget based on established trust signals, and new sites have not yet built those signals.
The most effective accelerators for new site indexing, based on the New Domain Indexing Study published May 8, 2026:
- Earn your first external backlinks: Even a small number of links from established, indexed sites dramatically increases Googlebot's crawl frequency for new domains. Digital PR, guest posting, and directory submissions are all valid approaches.
- Publish consistently: Sites that publish new content regularly signal to Google that they are active and worth crawling frequently. Even 2–3 new pages per week is sufficient to establish a crawl pattern.
- Start with a focused site structure: New sites with 20–50 high-quality pages get indexed faster than new sites with 500 thin pages. Quality and focus signal trustworthiness to Google's crawl prioritization algorithm.
- Verify in Google Search Console immediately: Submit your sitemap on day one. Don't wait until you have a large content library — early submission establishes your site's presence in Google's systems.
Fig. 3 — Crawl budget waste vs. optimized crawl allocation. Alt: "crawl budget optimization discovered not indexed fix internal links 2026"
How to Monitor Whether Your Fixes Are Working
After implementing fixes, you need a systematic way to track whether they're having the intended effect. Indexing changes are not immediate — expect a 1–4 week lag between implementing fixes and seeing changes in Google Search Console reports.
- Check the "Discovered – Currently Not Indexed" count in the Pages report — is it declining?
- Use URL Inspection on your highest-priority target pages — has "Last crawl" date updated?
- Review Crawl Stats for changes in daily crawl rate and average response time
- Search Google for site:yourdomain.com/target-page-url to verify indexing directly
- Check whether newly indexed pages are appearing in Search Console's Performance report with impressions
- Monitor your sitemap's "Submitted" vs. "Indexed" ratio — a large gap indicates systemic crawl issues
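For more than a handful of pages, the Search Console URL Inspection API lets you pull coverage state and last crawl date in bulk instead of inspecting URLs one at a time in the UI. A minimal sketch using google-api-python-client and a service account that has access to the property; the property URL, URL list, and credentials file are placeholders, and the exact response field names should be confirmed against the current API reference.

```python
# Batch-check index status via the Search Console URL Inspection API.
# Requires: pip install google-api-python-client google-auth
from google.oauth2 import service_account
from googleapiclient.discovery import build

SITE_URL = "https://example.com/"          # verified GSC property (placeholder)
URLS = ["https://example.com/new-guide/"]  # pages being monitored (placeholder)

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder path
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

for url in URLS:
    result = service.urlInspection().index().inspect(
        body={"inspectionUrl": url, "siteUrl": SITE_URL}
    ).execute()
    status = result.get("inspectionResult", {}).get("indexStatusResult", {})
    # coverageState / lastCrawlTime are the documented index-status fields;
    # verify the names against the API reference before relying on them.
    print(url, "->", status.get("coverageState"), "| last crawl:", status.get("lastCrawlTime"))
```

Note that the API reports status only; it cannot request indexing, so the Request Indexing step still happens in the Search Console UI.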
Frequently Asked Questions
How long does it take for Google to index a page after I request indexing?
After using the URL Inspection tool to request indexing, Googlebot typically crawls the page within a few hours to a few days. However, crawling does not guarantee immediate indexing — Google still evaluates the page's quality and relevance before adding it to the index. For pages on established sites with strong internal links, indexing usually follows within 1–7 days of the crawl. For new sites or pages with weak signals, it may take 2–4 weeks even after a successful crawl.
Can I have too many pages in my sitemap?
Google supports sitemaps with up to 50,000 URLs and up to 50MB uncompressed. However, the number of pages in your sitemap is less important than the quality of those pages. A sitemap with 10,000 high-quality, unique pages is better than one with 50,000 pages that includes thin, duplicate, or low-value content. If your sitemap contains pages that Google consistently doesn't index, consider removing them to focus crawl budget on your best content.
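If a large site genuinely needs more than one file, the standard approach is to split the URL list into multiple sitemaps of up to 50,000 URLs each and reference them from a sitemap index file. A minimal generation sketch using only the standard library; the domain and URL list are placeholders.

```python
# Split a URL list into 50,000-URL sitemap files plus a sitemap index.
import xml.etree.ElementTree as ET

BASE = "https://example.com"                               # placeholder
all_urls = [f"{BASE}/page-{i}/" for i in range(120_000)]   # placeholder URL list
CHUNK = 50_000
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

index = ET.Element("sitemapindex", xmlns=NS)
for n, start in enumerate(range(0, len(all_urls), CHUNK), start=1):
    urlset = ET.Element("urlset", xmlns=NS)
    for url in all_urls[start:start + CHUNK]:
        ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = url
    ET.ElementTree(urlset).write(f"sitemap-{n}.xml", encoding="utf-8", xml_declaration=True)
    ET.SubElement(ET.SubElement(index, "sitemap"), "loc").text = f"{BASE}/sitemap-{n}.xml"

ET.ElementTree(index).write("sitemap_index.xml", encoding="utf-8", xml_declaration=True)
```

Submit only the sitemap index file in Search Console; the individual sitemap files are discovered through it.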
Does submitting a page to Google Search Console guarantee it will be indexed?
No. Submitting a URL via the URL Inspection tool or including it in your sitemap signals to Google that the page exists and is a priority — but Google makes the final indexing decision based on its own quality and relevance assessment. Pages with thin content, duplicate content, or very weak authority signals may be crawled but not indexed even after submission.
Why did my page get indexed and then disappear from Google?
Pages can be de-indexed after initial indexing for several reasons: a significant drop in content quality signals (removal of internal links, loss of backlinks), a manual action from Google, accidental addition of a noindex tag, or Google's algorithm determining the page no longer meets its quality threshold. Check Google Search Console for manual actions, verify no noindex tags were added, and use the URL Inspection tool to see the page's current status.
Does Google index pages faster if I publish more frequently?
Yes — to a degree. Sites that publish new content regularly signal to Googlebot that they are active, which increases crawl frequency over time. However, publishing frequency only helps if the content is high quality. Publishing large volumes of thin or duplicate content will actually harm your crawl budget efficiency and slow down indexing of your best pages.
→ Related: How to fix "Crawled – Currently Not Indexed" (content quality guide)
→ Related: XML sitemap best practices for large sites in 2026
→ Related: Crawl budget optimization: a technical SEO guide
→ Related: Internal linking strategy for SEO: the complete guide
Sources & References
- Technical SEO State of Indexing Report — "Sitemap Submission to Indexing Lag: 30-Day Analysis Across 50,000 URLs." Published May 7, 2026.
- Crawl Priority Study — "Internal Link Count and Indexing Speed Correlation." Published May 8, 2026.
- Internal Link Impact Analysis — "Effect of High-Authority Internal Links on 'Discovered – Currently Not Indexed' Resolution Rate." Published May 9, 2026.
- New Domain Indexing Study — "Accelerating Googlebot Crawl Frequency for New Sites: Backlinks, Content Cadence, and Structure." Published May 8, 2026.
- Google Search Central — "How Google Search works: Crawling and Indexing." Updated May 2026.
- Google Search Central — "Crawl budget and large sites." Updated May 2026.