Discovered - Currently Not Indexed: How to Fix It


Ashkar Gomez
11 min read

Discovered – Currently Not Indexed is one of the most important Google index coverage issues: it blocks indexing, which in turn affects ranking.

Google Search Console, one of the best technical SEO tools, diagnoses and reports it under the Excluded section of the coverage report. Excluded means Google states the reason why a particular web page is not indexed.

On a small website (fewer than 10,000 URLs), this issue can usually be resolved quickly. On websites with more than 10,000 pages, it also eats into the crawl budget.

The issue can also affect other pages linked from the affected ones and drag down the overall SEO effort.

Let us dive deep into what the issue is, the reasons behind it, and the solutions.

What Is Discovered - Currently Not Indexed?

Discovered - Currently Not Indexed

As you are aware, crawling, rendering, indexing, and ranking are the core jobs of search engines. Discovery is the stage when the crawler first finds a web page through the XML sitemap, an internal link, or a backlink.

When Googlebot (the crawler) has only discovered a web page but has not yet crawled, rendered, or indexed it, the page stays excluded from the Google index.

There are multiple reasons why a web page may be excluded from indexing after it has been discovered.

Web pages in this situation are excluded from indexing and tagged as Discovered – currently not indexed.

How to find the pages that are Discovered - Currently Not Indexed?

Google Search Console - Coverage Errors - Excluded

Setting up Google Search Console is one of the core items on any technical SEO checklist, as these errors are diagnosed and reported in GSC.

Under the Coverage section, you can see four options on the right side: Error, Valid with warnings, Valid, and Excluded. The URLs that come under Valid are indexed.

The URLs or web pages that come under Excluded are left out for many reasons, and Discovered – currently not indexed is one of them.

Discovered - Currently Not Indexed Issue in Google Search Console

Once you click Excluded, scroll down to see all the issues that exclude web pages from indexing.

You will find the details of the web pages that come under the label Discovered – currently not indexed, as shown in the image above.
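
If you prefer to check coverage programmatically rather than page by page in the dashboard, Google's Search Console URL Inspection API reports the same coverage state for individual URLs. Below is a minimal sketch in Python, assuming the google-api-python-client and google-auth packages are installed and a service-account key file with access to the verified property; the key file path and URLs are placeholders.

    # Minimal sketch: query the Search Console URL Inspection API for a page's
    # coverage state. Key file path, property URL, and page URL are placeholders.
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
    creds = service_account.Credentials.from_service_account_file(
        "service-account.json", scopes=SCOPES  # hypothetical key file
    )
    service = build("searchconsole", "v1", credentials=creds)

    body = {
        "inspectionUrl": "https://www.example.com/blog/new-post/",  # page to inspect
        "siteUrl": "https://www.example.com/",                      # verified property
    }
    result = service.urlInspection().index().inspect(body=body).execute()

    # coverageState holds strings such as "Discovered - currently not indexed".
    print(result["inspectionResult"]["indexStatusResult"]["coverageState"])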

What are the Reasons for Discovered - Currently Not Indexed?

Multiple reasons can inhibit crawling and indexing after a new web page has been discovered. Search engines find a page whenever a new web page is created and submitted in the sitemap or linked internally. This step is known as URL discovery.

After discovering a new URL, the search engine sends its crawlers to crawl the page's content and understand its relevance. The issue arises when the crawlers are obstructed by various factors, such as:

  • Blocked by robots.txt
  • Brand new website
  • Crawl issues
  • Internal link issues
  • Orphan pages (no referring internal pages)
  • Low-quality content
  • Duplicate content
  • Server errors (5xx)
  • Client errors (4xx) or redirect errors (3xx)
  • Penalized by Google for involvement in spam activities
  • A website without a sitemap

Blocked by Robots.txt

Robots.txt passes instructions to crawlers (robots) about which pages to crawl. It uses Allow and Disallow directives that tell crawlers which URLs to crawl and which to avoid.

When a URL pattern or a specific URL falls under a Disallow directive, Googlebot may avoid crawling it.

However, Google's webmaster guidelines note that disallowing a page from being crawled does not necessarily stop it from being indexed.

Google About Robots.txt and Indexing
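
Before assuming another cause, it is worth confirming whether the affected URL is actually blocked. Here is a minimal sketch using Python's standard-library robots.txt parser; the domain and page URL are placeholders.

    # Minimal check: is Googlebot allowed to crawl this URL per robots.txt?
    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser("https://www.example.com/robots.txt")  # placeholder domain
    parser.read()

    url = "https://www.example.com/blog/some-post/"  # page reported as not indexed
    if parser.can_fetch("Googlebot", url):
        print("Crawling allowed - robots.txt is not the blocker.")
    else:
        print("Blocked by robots.txt - review your Disallow rules.")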

Brand New Website

Brand-new websites take time to get crawled, depending on their crawl budget and priority.

Since new websites have low crawl budgets, crawling a new page takes time, even after it has been discovered through the sitemap or internal links.

In this case, you should either increase the frequency of content publishing on the website or increase the number of backlinks that pass link equity to your website.

If neither of the above works, invest in Google website traffic ads for relevant keywords to drive traffic. Website traffic ads have, in practice, helped many of our clients' web pages avoid being Discovered – currently not indexed.

Internal Linking Issues:

Internal links are an important factor in crawling and indexing any web page, as they pass link equity and traffic from an indexed page.

However, improper internal linking and irrelevant anchor text can discourage crawlers from crawling the page, as per the quality rater guidelines.

To avoid this, plan your website architecture properly and always use relevant anchor text when linking to a page.

Orphaned Pages:

A page that is not linked from any internal web page is termed an orphaned page. Such pages are unlikely to be crawled until they are referenced by external websites as backlinks.
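
One practical way to spot orphaned pages is to compare the URLs listed in your sitemap against the URLs that other pages actually link to. A rough sketch, assuming the requests and beautifulsoup4 packages; the domain and sitemap URL are placeholders.

    # Rough sketch: flag sitemap URLs that no other site page links to internally.
    # Assumes `requests` and `beautifulsoup4`; domain and sitemap are placeholders.
    import xml.etree.ElementTree as ET
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    SITEMAP = "https://www.example.com/sitemap.xml"
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    def normalize(url):
        # Drop fragments, query strings, and trailing slashes for comparison.
        return url.strip().split("#")[0].split("?")[0].rstrip("/")

    root = ET.fromstring(requests.get(SITEMAP, timeout=10).content)
    sitemap_urls = {normalize(loc.text) for loc in root.findall(".//sm:loc", NS)}

    linked_urls = set()
    for page in sitemap_urls:
        soup = BeautifulSoup(requests.get(page, timeout=10).text, "html.parser")
        for a in soup.find_all("a", href=True):
            linked_urls.add(normalize(urljoin(page, a["href"])))

    for orphan in sorted(sitemap_urls - linked_urls):
        print("Orphaned page (no internal links point to it):", orphan)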

Low Quality Content:

This could be an important reason for the Discovered – currently not indexed issue. Low-quality content can be categorized as:

  • Thin content – web pages with fewer than 600 words.
  • Cloaked content – when a web page has hidden text under images, videos, or PDFs.
  • Auto-generated content – content produced by software or programmatic bots.
  • Keyword cannibalization – when two web pages target the same keyword.
  • Web pages with sloppy or irrelevant intent.

When a web page matches any of the above criteria, it is considered low-quality content, which can exclude it from crawling and indexing.

Copied and Duplicate Content:

This could be another important reason for search engines to exclude a page from crawling and indexing.

Never copy content from an external website; Google can detect even rephrased passages.

Be careful to provide a valid canonical URL or a 301 redirect if you have duplicate content or have changed a URL (destination).
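
If you keep intentional duplicates (for example, location variants of a service page), each duplicate should declare the preferred version. Here is a quick sketch showing how to read what canonical URL a page currently declares, assuming the requests and beautifulsoup4 packages; the page URL is a placeholder.

    # Quick sketch: read the canonical URL a duplicate page declares, so it can
    # be checked against the preferred version. Page URL is a placeholder.
    import requests
    from bs4 import BeautifulSoup

    page = "https://www.example.com/services/chicago/"  # hypothetical location variant
    soup = BeautifulSoup(requests.get(page, timeout=10).text, "html.parser")

    canonical = soup.find("link", rel="canonical")
    if canonical and canonical.get("href"):
        print("Declared canonical:", canonical["href"])
    else:
        print("No canonical tag found - add one pointing to the preferred URL.")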

Server Errors:

Server errors (5xx) can contribute to the Discovered – currently not indexed coverage issue.

The reasons can be the following:

  • Server downtime
  • Shared hosting
  • Overloaded servers
  • Server-side caching issues

It is advised to choose dedicated servers or cloud servers to reduce server issues.
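
A simple status-code sweep over the URLs in your sitemap can show whether crawlers are likely to hit 5xx responses. Below is a sketch assuming the requests package; the sitemap URL is a placeholder.

    # Sketch: report sitemap URLs that respond with 5xx server errors.
    # Assumes `requests`; the sitemap URL is a placeholder.
    import xml.etree.ElementTree as ET
    import requests

    SITEMAP = "https://www.example.com/sitemap.xml"
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    root = ET.fromstring(requests.get(SITEMAP, timeout=10).content)
    for loc in root.findall(".//sm:loc", NS):
        url = loc.text.strip()
        status = requests.get(url, timeout=10).status_code
        if status >= 500:
            print(f"{status}  {url}  <- server error, check hosting and load")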

Penalized by Google:

When a website or web page is involved in a link scheme, Google penalizes the website and may stop crawling its pages.

Similarly, when spammy websites link to your web page, the chances of it being crawled decrease.

Looking to fix Discovered – Currently not indexed? We are just an email away!

How to fix Discovered - Currently Not Indexed?

The process of fixing these issues is not complicated; you have to follow Google Webmaster Guidelines to maintain the quality of your web pages. Here are the steps you should follow to fix the Discovered – currently not indexed coverage issue.

Step 1 – Check whether the website has a valid sitemap.xml and whether every URL of the website is included in it. If you're using the WordPress CMS, you can generate a sitemap.xml with plugins like Rank Math. The sketch below shows one quick way to verify this.
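
To confirm that a particular page really is listed, you can fetch the live sitemap and search it. A short sketch assuming the requests package; it also follows a sitemap index into its child files if your CMS splits the sitemap (URLs are placeholders).

    # Sketch: confirm a page is listed in the XML sitemap, following a sitemap
    # index into child sitemaps if needed. Assumes `requests`; URLs are placeholders.
    import xml.etree.ElementTree as ET
    import requests

    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    def urls_in(sitemap_url):
        root = ET.fromstring(requests.get(sitemap_url, timeout=10).content)
        if root.tag.endswith("sitemapindex"):  # index file: recurse into children
            children = [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]
            return {u for child in children for u in urls_in(child)}
        return {loc.text.strip() for loc in root.findall(".//sm:loc", NS)}

    all_urls = urls_in("https://www.example.com/sitemap.xml")
    page = "https://www.example.com/blog/new-post/"
    print("Listed in sitemap" if page in all_urls else "Missing - add it to the sitemap")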

Step 2 – Add an internal link to the uncrawled page from a relevant, already-indexed page; this can prompt crawlers to crawl the discovered page.

Step 3 – If you have created a new website, focus on creating more content and publish it at regular intervals. This can help increase the crawl budget.

Step 4 – Never leave a web page orphaned.

Step 5 – Focus on creating quality content. Avoid thin (low word-count) pages, cloaking, auto-generated content, and scraped content.

Step 6 – Avoid long redirect chains and broken links on every web page. A simple checker is sketched below.
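
To catch broken links and redirects before crawlers do, you can request each link on a page without following redirects and inspect the status code. A sketch assuming the requests and beautifulsoup4 packages; the page URL is a placeholder.

    # Sketch: list links on a page that are broken (4xx/5xx) or redirected (3xx).
    # Assumes `requests` and `beautifulsoup4`; the page URL is a placeholder.
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    page = "https://www.example.com/blog/some-post/"
    soup = BeautifulSoup(requests.get(page, timeout=10).text, "html.parser")

    for a in soup.find_all("a", href=True):
        link = urljoin(page, a["href"])
        if not link.startswith("http"):
            continue  # skip mailto:, tel:, and similar links
        resp = requests.head(link, allow_redirects=False, timeout=10)
        if resp.status_code >= 400:
            print(f"Broken link ({resp.status_code}): {link}")
        elif 300 <= resp.status_code < 400:
            print(f"Redirect ({resp.status_code}): {link} -> {resp.headers.get('Location')}")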

Step 7 – Use a canonical tag when you duplicate content internally. Avoid direct copies across related pages (for example, businesses serving multiple locations).

Step 8 – Use cloud-based hosting to maintain high uptime. This will reduce 5xx errors.

Step 9 – Never get a dofollow backlink from websites that practice link schemes or are spammy. If you find any such backlinks, disavow them using the Google Search Console disavow tool.

Step 10 – If you find this process tedious, get our technical SEO services. Our expert team will fix these issues for you.

Conclusion:

  • Discovered – currently not indexed is an important Google index coverage error that hugely impacts crawl budget and organic rankings.
  • These issues are diagnosed and reported in the Google Search Console.
  • It can be fixed if your website has a proper sitemap.xml and internal links with high relevancy.
  • Besides this, just follow all the rules as per Google webmaster guidelines.
  • Contact us if you still face issues in fixing discovered – currently not indexed.

Ashkar Gomez

Ashkar Gomez is the Founder of 7 Eagles (a Growth Marketing & SEO Company). Ashkar started his career as a Sales Rep in 2013 and later shifted his career to SEO in 2014. He is one of the leading SEO experts in the industry with 8+ years of experience. He has worked on 200+ projects across 20+ industries in the United States, Canada, the United Kingdom, UAE, Australia, South Africa, and India. Besides SEO and Digital Marketing, he is passionate about Data Analytics, Personal Financial Planning, and Content Writing.
