How can you remove a URL from Google’s search results?
There are a number of cases where you may not want pages appearing in the SERPs, and this blog post discusses the different ways you can achieve this.
What kind of content would we not want to appear in the SERPs?
There are a number of different types of pages we would not want to be searchable on Google or other search engines.
We may also want to hide pages from Google for a number of reasons, including keeping private or sensitive pages out of public view, preventing thin or duplicate content from being indexed, and hiding staging or test versions of the site.
How does Google find content to appear in the search results?
Before we dive into the different ways we can prevent pages from appearing in the search results, it’s worth understanding the process that Google uses to find and ultimately rank pages.

1) Crawling – This is Google’s way of discovering new content. Using programs, often referred to as spiders or crawlers, Google visits different web pages and follows the links on them to find new pages. Each site has a certain “crawl budget” – the amount of crawling resources Google allocates to that site.

2) Indexing – Once Google has found the content, it keeps a copy of that content and stores it in what is called an index.

3) Ranking – The ordering of these different pages in the search results is known as ranking. Google receives a query, works out the search intent behind that query, and then looks to the index to return the best possible results.

How can we control what pages rank in the search results?

Noindex tags
Noindex tags are a directive which tells Google: “I do not want this page to be indexed, and therefore I do not want it to appear in the search results.”
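For reference, here is what the two standard implementations look like. In the HTML, the tag sits in the <head> of the page:

```html
<head>
  <!-- Tells search engines not to index this page -->
  <meta name="robots" content="noindex">
</head>
```

Implemented via HTTP header, the response includes an X-Robots-Tag:

```
HTTP/1.1 200 OK
X-Robots-Tag: noindex
```

The HTTP header approach is useful for non-HTML resources, such as PDFs, where there is no <head> to place a meta tag in.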
Noindex tags can be implemented either in the page’s HTML or via an HTTP response header. CMS platforms, such as WordPress, allow you to add noindex tags to pages, which means you wouldn’t need a developer to implement this. Importantly, Google will need to be able to crawl these pages in order to see the noindex tag and then remove the page from its index.

Blocking in robots.txt
Robots.txt is a text file used to instruct web robots how to behave when they visit your site, and it can be used to tell search engine crawlers whether they can or cannot crawl parts of a website. Using robots.txt to block certain page paths – a rule such as `Disallow: /admin/` under `User-agent: *`, for example – means that Googlebot and other search crawlers won’t even visit these pages, hence they won’t appear in the search results. This can preserve crawl budget for more important pages rather than wasting it on less important ones.

Note: blocking a page path in robots.txt stops Google from crawling and saving the page in the first place, but it doesn’t delete or change what has already been saved. Therefore, if a page is already appearing in the search results, Google has already crawled and indexed that page, and a robots.txt block alone won’t remove it.

When to block pages in robots.txt – When you have specific page paths or larger sections of your site that you do not want Google to crawl, this is your best bet.

Deleting the page
The most obvious answer, you may have thought, would be to simply delete the page, whether that’s by giving it a 404 (Not Found) or a 410 (Gone) status code.

When to delete a page – If the page serves no purpose and has little value in terms of backlinks or traffic, it may be worth deleting. If there is some value, either from a user perspective or an SEO perspective, consider keeping it with a noindex tag or 301 redirecting it to a relevant page.

Google Search Console’s Removals Tool
Google Search Console’s Removals Tool can be used to temporarily hide pages from the search results, for sites you have verified in Google Search Console.
It’s worth noting that this is not a permanent fix.

When to use Google Search Console’s Removals Tool – When you need to get rid of a page quickly. If you need to remove the page permanently, use a noindex tag or give it a 404 or 410 status code.

Canonical tags
A canonical tag is a snippet of HTML code that lives in the <head> of the page and is used to define the primary version of a set of similar or duplicate pages. Canonical tags help prevent issues caused by duplicate or near-duplicate content appearing on multiple URLs.
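For reference, a canonical tag pointing a duplicate page at its primary version looks like this (the URL here is a hypothetical example):

```html
<head>
  <!-- Tells search engines that the primary version of this
       content lives at the URL below -->
  <link rel="canonical" href="https://www.example.com/primary-page/">
</head>
```

Each duplicate or variant URL carries this tag referencing the one master URL; the master page itself typically carries a self-referencing canonical tag.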
As opposed to noindex tags, which are directives, canonical tags are hints that Google can choose to ignore. Google can still crawl these pages, see the canonical tags, and then decide whether or not the page should appear in the search results.

When to use canonical tags – Canonical tags should be used when there are several duplicate or similar pages ranking. You will want to canonicalise the non-master versions to one primary version of the page, to indicate to Google that the master version is the only one you would like in the search results. This will also consolidate the ranking signals from each of these URLs onto the one master page.

Final thoughts…
There are a number of ways to remove or control what content appears in the search results. The key is ensuring that you choose the best option for your particular situation, not attempting to do them all at once!

The post How can you remove a URL from Google’s search results? appeared first on Brainlabs.