Discovered – Currently Not Indexed: How to Fix It


Ashkar Gomez

Discovered – Currently Not Indexed is one of the most common Google index coverage issues; it prevents indexing and, in turn, ranking.

Google Search Console, one of the best technical SEO tools, diagnoses and reports this status in the Excluded section of the coverage report. Excluded means Google states the reason why a particular web page is not indexed.

On a small website (fewer than 10,000 URLs), this issue can usually be resolved quickly. On a website with more than 10,000 pages, it also affects the crawl budget.

This issue can also affect other pages linked from the affected URLs and drag down your overall SEO effort.

Let us take a deep dive into what this issue is, the reasons behind it, and the possible solutions.

What Is Discovered - Currently Not Indexed


Before defining this status, let us recall how search engines work.

Search engines operate in stages: crawling, rendering, indexing, and ranking. Discovery is the stage at which a crawler first finds a web page, through an XML sitemap, internal links, or backlinks.

When Googlebot (Google's crawler) can only discover a web page but cannot crawl, render, or index it, the page is excluded from the Google index.

There are multiple reasons why a web page may be excluded from indexing after it has been discovered.

Web pages in this situation are excluded from indexing and tagged as Discovered – Currently Not Indexed.

How to find the pages that are Discovered - Currently Not Indexed?

Google Search Console - Coverage Errors - Excluded

Setting up Google Search Console is one of the core items on any technical SEO checklist, as these errors are diagnosed and reported in GSC.

Under the Coverage section, you can see four statuses: Error, Valid with warnings, Valid, and Excluded. The URLs under Valid are indexed.

The URLs or web pages under Excluded are not indexed. There are many possible reasons for this, and Discovered – Currently Not Indexed is one of them.

Discovered - Currently Not Indexed Issue in Google Search Console

Once you click Excluded, scroll down to see all the issues that exclude web pages from indexing.

You will find the details of the web pages under the label Discovered – Currently Not Indexed, as shown in the above image.

What are the Reasons for Discovered - Currently Not Indexed?

Multiple factors can inhibit crawling and indexing after a new web page is discovered. Whenever a new web page is created and submitted in a sitemap or linked internally, search engines find it. This step is known as URL discovery.

After discovering a new URL, the search engine sends its crawlers to fetch the page's content and assess its relevance. The issue arises when the crawlers are obstructed by one of the following factors:

  • Blocked by Robots.txt
  • Brand New Website
  • Crawl Issues
  • Internal Link Issues
  • Orphan Pages (No referring Internal Pages)
  • Low-Quality Content
  • Duplicate Content
  • Server Errors
  • Client Errors (4xx) or Redirect Errors (3xx)
  • Penalized by Google for being involved in Spam Activities
  • A website without a sitemap

Blocked by Robots.txt

Robots.txt tells crawlers which pages to crawl. It uses Allow and Disallow directives to indicate which URLs crawlers may fetch and which they should avoid.

When specific URLs or URL patterns fall under a Disallow directive, Googlebot may skip crawling them.

However, Google's documentation notes that disallowing a page from crawling does not, by itself, prevent it from being indexed.

Google About Robots.txt and Indexing
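To check whether a robots.txt rule is what is blocking a URL, you can test the rules locally with Python's standard-library parser. This is only a sketch; the rules and URLs below are hypothetical placeholders for your own site's file.

```python
from urllib import robotparser

# Hypothetical robots.txt rules -- replace with your site's actual file
rules = """\
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A URL under a Disallow directive will not be fetched by compliant crawlers
print(rp.can_fetch("Googlebot", "https://example.com/private/page"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))     # True
```

If a URL you want indexed comes back False here, the robots.txt directives are the first thing to correct.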

Brand New Website

Brand-new websites take time to get crawled, depending on their crawl budget and priority.

Since new websites have low crawl budgets, crawling a new page can take time even after it has been discovered, whether through the sitemap or internal links.

In this case, you should either increase the frequency of content publishing on the website or earn more backlinks that pass link equity to your website.

If neither of these works, consider investing in Google Ads for relevant keywords to drive traffic. In our experience, traffic campaigns have helped many of our clients' web pages escape the Discovered – Currently Not Indexed status.

Internal Linking Issues:

Internal links are an important factor in crawling and indexing any web page, as they pass link equity and traffic from indexed pages.

However, improper internal linking and irrelevant anchor text can discourage crawlers from crawling a page.

To avoid this, ensure proper website architecture planning and always use relevant anchor text when linking to a page.

Orphaned Pages:

When a page is not linked from any internal page, it is termed an orphaned page. Such pages are unlikely to be crawled until they are linked internally or referenced by external websites as backlinks.
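One quick way to spot orphaned pages is to compare the URLs declared in your sitemap against the URLs that actually receive internal links. A minimal sketch with hypothetical URL sets:

```python
# Hypothetical data: URLs declared in sitemap.xml vs. URLs found as
# internal link targets while crawling the site
sitemap_urls = {"/", "/about/", "/blog/post-1/", "/blog/post-2/"}
internally_linked = {"/", "/about/", "/blog/post-1/"}

# Pages in the sitemap that no internal page links to are orphans
orphan_pages = sitemap_urls - internally_linked
print(sorted(orphan_pages))  # ['/blog/post-2/']
```

In practice you would populate both sets from a real crawl (for example, with a site crawler that exports link data), then add internal links to each orphan from a relevant indexed page.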

Low Quality Content:

This could be an important reason for the Discovered – Currently Not Indexed issue. Low-quality content can be categorized as:

  • Thin Content – web pages with less than 600 words.
  • Cloaked Content – when a web page hides text behind images, videos, or PDFs.
  • Auto-Generated Content – content generated by software or bots.
  • Keyword Cannibalization – when two web pages target the same keyword.
  • Web pages with sloppy or irrelevant intent.

When a web page matches any of the above criteria, it may be treated as low-quality content, which can exclude it from crawling and indexing.

Copy and Duplicate Content:

This could be another important reason for search engines to exclude a page from crawling and indexing.

Never copy content from an external website; Google can detect even rephrased passages.

When you have duplicate content, or the destination URL has changed, be careful to provide a valid canonical URL or a 301 redirect.
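For example, a duplicate or near-duplicate page can point to its preferred version with a canonical tag in the page's head (the URL below is a placeholder):

```html
<!-- On the duplicate page, point to the preferred version -->
<link rel="canonical" href="https://example.com/preferred-page/" />
```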

Server Errors:

Server errors (5xx) can contribute to the Discovered – Currently Not Indexed coverage issue.

The reasons can be the following:

  • Server downtime
  • Shared hosting
  • Overloaded servers
  • Server-side caching issues

It is advisable to choose dedicated or cloud servers to reduce server issues.
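Crawlers treat any 5xx response as a server-side failure and back off. The sketch below is a simplified mapping of HTTP status classes to what a crawler typically does with a URL; it is an illustration, not Google's actual logic.

```python
def crawl_verdict(status_code: int) -> str:
    """Rough mapping of HTTP status classes to typical crawler behavior.

    Simplified sketch for illustration, not Google's actual algorithm.
    """
    if 200 <= status_code < 300:
        return "crawlable"
    if 300 <= status_code < 400:
        return "follow redirect"
    if 400 <= status_code < 500:
        return "client error - may be dropped"
    if 500 <= status_code < 600:
        return "server error - retried later, crawl rate reduced"
    return "unknown"

print(crawl_verdict(200))  # crawlable
print(crawl_verdict(503))  # server error - retried later, crawl rate reduced
```

Monitoring your server logs for 5xx responses to Googlebot is a practical way to confirm whether server health is the bottleneck.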

Penalized by Google:

When a website or web page is found to be part of a link scheme, Google penalizes the website and stops crawling its pages.

Similarly, when spammy websites link to your web page, its chances of being crawled drop.

Looking to fix Discovered – Currently Not Indexed? We are just an email away!

How to fix Discovered - Currently Not Indexed?

Fixing this issue is not complicated; you have to follow Google's webmaster guidelines to maintain the quality of your web pages. Here are the steps to fix the Discovered – Currently Not Indexed coverage issue.

Step 1 – Check whether the website has a valid sitemap.xml and whether every website URL is submitted in it. If you're using WordPress, you can generate a sitemap.xml with plugins like Rank Math.
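A valid sitemap.xml follows the sitemaps.org protocol; here is a minimal example with a placeholder URL:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/blog/post-1/</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
</urlset>
```

Once generated, submit the sitemap's URL in Google Search Console under Sitemaps.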

Step 2 – Create an internal link to the uncrawled page from a relevant, indexed web page; this can prompt crawlers to crawl the discovered page.

Step 3 – If you have created a new website, focus on creating more content and publishing it at regular intervals. This can help increase the crawl budget.

Step 4 – Never leave a web page orphaned.

Step 5 – Focus on creating quality content. Avoid low-count web pages, cloaking, auto-generated, and scraped content.

Step 6 – Avoid redirect chains and broken links on every web page.

Step 7 – Use a canonical tag when you duplicate content internally. Avoid direct copies across related pages (for example, businesses serving multiple locations).

Step 8 – Use cloud-based hosting to maintain high uptime. This will reduce 5xx errors.

Step 9 – Never get a dofollow backlink from websites that practice link schemes or are spammy. If you find such backlinks, disavow them using Google Search Console's disavow tool.
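The file uploaded to the disavow tool is a plain-text list: lines starting with # are comments, full URLs disavow single pages, and the domain: prefix disavows a whole site. The sites below are placeholders:

```
# Spammy backlinks identified in the link report
domain:spammy-links.example
https://another-spam.example/bad-page.html
```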

Step 10 – If you find this process tedious, get our technical SEO services. Our expert team will fix them.


  • Discovered – currently not indexed is an important Google index coverage error that hugely impacts crawl budget and organic rankings.
  • These issues are diagnosed and reported in the Google Search Console.
  • It can be fixed if your website has a proper sitemap.xml and highly relevant internal links.
  • Besides this, follow all the rules in Google's webmaster guidelines.
  • Contact us if you still face issues in fixing discovered – currently not indexed.

Ashkar Gomez

Ashkar Gomez is the Founder of 7 Eagles (a Growth Marketing & SEO Company). Ashkar started his career as a Sales Rep in 2013 and later shifted his career to SEO in 2014. He is one of the leading SEO experts in the industry with 8+ years of experience. He has worked on 200+ projects across 20+ industries in the United States, Canada, the United Kingdom, UAE, Australia, South Africa, and India. Besides SEO and Digital Marketing, he is passionate about Data Analytics, Personal Financial Planning, and Content Writing.
