Control What Your Website Shares With Google
How to control what Google can crawl and show in searches
Everyone wants the URLs on their website to get indexed faster. However, certain types of content or files should never appear in search results. Some pages may find their way into Google's results even though they do not belong there: not every page on a website is worth indexing, and confidential files in particular deserve to stay hidden from search results. There are several reasons you may need to keep certain pages out of searches.
Thin or low value content:
If your website contains thin or low-value pages, you may not want them to appear in searches. There may also be content that you believe is of no use to your audience and should remain private. You might likewise want to hide user-generated content on your website from search engines: user-generated content can be of lower quality, or it can even be spam, and such content should be kept out of search results.
Hide data from Google:
Suppose you host some data on your website that you do not want to be publicly visible; such data might be restricted to users who come to the site directly. Or you might want to hide the PDF files available on your website for download. These types of data or files can be kept private by blocking Google from crawling them.
Make Google focus on more important content:
Sometimes you do not want Google to waste its crawl budget on less important pages. You want it to focus on the more important URLs, so you block Google from crawling the less important ones. Suppose you have a very large website with thousands of pages, including a lot of duplicate content. You do not want all those duplicate pages crawled or indexed, so you block Google from crawling the duplicate or low-quality pages.
In this way, there can be several reasons that you might want to restrict Google’s access to the content on your website.
There are a few simple ways to keep content on your website out of searches:
Remove the content from your website: If your website contains a lot of duplicate, outdated, low-quality, or spam content, remove it. That automatically deindexes the content and prevents Google from crawling it in the future. Clean your website of such content from time to time. If you want to keep some of the duplicate content, you can redirect it to a more relevant page, which also helps the more important page get indexed and appear in search results. Removing content from the website remains the surest way to ensure that search engines like Google or Bing cannot crawl it and that it does not appear in search results. This applies to all file types, including HTML and PDF.
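As a sketch of the redirect approach described above (assuming an Apache server with mod_alias enabled; all paths are example placeholders):

```apache
# Hypothetical .htaccess fragment. The first rule permanently redirects a
# duplicate page to its canonical version with a 301, so only the canonical
# URL gets indexed. The second returns "410 Gone" for a page that was
# removed outright, signaling crawlers to drop it from the index.
Redirect 301 /old-duplicate-page.html /canonical-page.html
Redirect gone /outdated-page.html
```

Other servers (nginx, IIS) have equivalent directives; the key point is returning a permanent 301 for merged pages and a 404/410 for removed ones.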
Password protect your files: If there is private information on your website that only registered users should be able to access, password protect those pages or files to prevent them from being fetched by Google and shown in search results. If such content is already indexed, password protecting it will get it removed. You can password protect any type of file, whether it is an HTML page or a PDF file.
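A minimal sketch of password protection, assuming an Apache server; the `AuthUserFile` path is a placeholder and the credentials file would be created with the `htpasswd` utility:

```apache
# Hypothetical .htaccess fragment protecting a directory with HTTP basic
# auth. Crawlers cannot get past the 401 password prompt, so anything in
# this directory stays out of search results.
AuthType Basic
AuthName "Registered users only"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
```

Any login mechanism works for this purpose, as long as unauthenticated requests (including Googlebot's) are refused the content.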
Apply noindex to pages: Another simple method of preventing content from appearing in Google searches is to mark it as noindex. The noindex directive tells Google not to index a piece of content or show it in searches; it does not prevent Google from crawling the page. If you want to keep content out of searches, the noindex tag can be quite effective. The page will still be visible to users who visit it directly or follow links from external sources, but content marked noindex will not appear in Google searches. The noindex directive can be applied to any type of content, either as a meta tag in the page or as an X-Robots-Tag HTTP header on content you do not want indexed.
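The meta-tag form looks like this (a standard snippet, placed in the page's `<head>`):

```html
<!-- Robots meta tag: tells search engines not to index this page,
     while still allowing it to be crawled and its links followed -->
<meta name="robots" content="noindex">
```

For non-HTML files such as PDFs, which have no `<head>` to put a tag in, the same directive is sent as an HTTP response header instead: `X-Robots-Tag: noindex`, set from your server configuration.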
Use the disallow rule: The disallow rule is a robots.txt directive obeyed by Google and other search engines. It tells search crawlers not to crawl a given page or directory, and you can block the crawling of entire directories this way. Disallowing crawling is not effective in every case, since a page linked from external sources can still end up indexed (by URL, without its content), but it works well for keeping your images and videos from being crawled and indexed.
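To see how disallow rules behave, you can test them locally with Python's standard-library robots.txt parser. The paths, directories, and user agent below are hypothetical examples, not rules from any real site:

```python
# Sketch: checking robots.txt Disallow rules with urllib.robotparser.
from urllib.robotparser import RobotFileParser

# A sample robots.txt blocking a private directory and a downloads
# directory for all crawlers (the "*" user agent covers Googlebot too).
rules = """
User-agent: *
Disallow: /private/
Disallow: /downloads/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Blocked: falls under a disallowed directory.
print(parser.can_fetch("Googlebot", "https://example.com/private/report.html"))  # False
# Allowed: no rule matches this path.
print(parser.can_fetch("Googlebot", "https://example.com/blog/post.html"))       # True
```

This is the same matching logic well-behaved crawlers apply before fetching a URL, so it is a quick way to sanity-check a robots.txt file before deploying it.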
Opt out of Google properties: If your content appears in Google properties such as Google Hotels, Shopping, or Vacations, you can opt out to prevent your property or product from appearing there. You can also opt out of Google local searches; within a month of opting out, your property will stop appearing in searches. Opting out applies to individual domains, and you will need to opt each domain out separately.
Opt out of the place entity feature in page insights: Page insights give users additional information about a place or product they are searching for on Google, in the Google app on iOS and Android. If a significant portion of your webpage discusses a place entity, it may appear in page insights. However, you can opt out of having the place entity displayed in page insights for users in the European Economic Area; your site's results will stop appearing in page insights within 30 days.
So, you can see there are several ways to keep particular content from appearing in searches. Your crawl budget is limited, which is an important reason to stop Google from crawling every single page and let it focus on the more important ones. You can also keep confidential information out of searches by password protecting it or by using the noindex tag; password protection is the better option, since it makes the content visible only to registered and authorized users. The disallow directive is effective for images and videos but not in every case, while the noindex tag is quite effective at keeping pages out of search results.