Blogspot noindex search label

Blogspot noindex search label - Google Search Console is a free tool that lets you identify, troubleshoot, and resolve issues Google may encounter as it crawls and tries to index your website for search results. If you’re not the most technical person in the world, some of the errors you see there may leave you scratching your head. To make things a bit easier, we put together this handy set of tips about SEO, Google Search, robots.txt, and Googlebot to guide you along the way. In the discussion below, we share some tips to fix the issue with the Blogspot noindex search label.

Problem:


The default robots.txt of Blogspot is:



User-agent: Mediapartners-Google
Disallow:
User-agent: *
Disallow: /search
Allow: /
Sitemap: http://castbird-sourcing.blogspot.com/feeds/posts/default?orderby=UPDATED


But when I search for site:castbird-sourcing.blogspot.com, Google shows something like:



In order to show you the most relevant results, we have omitted some entries very similar to the 32 already displayed.
If you like, you can repeat the search with the omitted results included.


When I expand the result, I see something like:



castbird-sourcing.blogspot.com/search/label/gadget
A description for this result is not available because of this site's robots.txt – learn more.


My questions are:




  1. Does this "very similar to the 32 already displayed" problem harm
    SEO in general?

  2. Shouldn't Googlebot have already ignored everything under
    /search? Why is Google still indexing these pages?

  3. How do I completely remove the indexed /search links from the Google results?


Solution:


Does this "very similar to the 32 already displayed" problem harm
SEO in general?




Google decides which page to show among those with similar content based on the user's query and other signals. For example, if a user searches for gadgets and you have a label page for gadgets, that would be a more appropriate result than individual post pages from your blog.



See this page.




Matt Cutts has said twice that you should not stress about it; in the
worst non-spammy case, Google may just ignore the duplicate content.
Matt said in the video, “I wouldn’t stress about this unless the
content that you have duplicated is spammy or keyword stuffing.”








Shouldn't Googlebot have already ignored everything under /search? Why is
Google still indexing these pages?




See this page.




While Google won't crawl or index the content of pages blocked by
robots.txt, we may still index the URLs if we find them on other pages
on the web. As a result, the URL of the page and, potentially, other
publicly available information such as anchor text in links to the
site, or the title from the Open Directory Project (www.dmoz.org), can
appear in Google search results.
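
In other words, Disallow: /search only stops compliant crawlers from fetching those pages; it does not remove already-known URLs from the index. As a rough illustration (a minimal sketch in Python using the standard urllib.robotparser module; the URLs are the ones from the question), the default rules evaluate like this:

from urllib.robotparser import RobotFileParser

# The default Blogspot crawl rules quoted in the question (Sitemap line omitted).
robots_txt = """\
User-agent: Mediapartners-Google
Disallow:
User-agent: *
Disallow: /search
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

base = "http://castbird-sourcing.blogspot.com"

# Label/search pages are blocked for ordinary crawlers (User-agent: *),
# which is why Google shows them without a description.
print(parser.can_fetch("Googlebot", base + "/search/label/gadget"))  # False

# The homepage and normal post URLs remain crawlable.
print(parser.can_fetch("Googlebot", base + "/"))  # True

# The AdSense crawler (Mediapartners-Google) has its own section with no restrictions.
print(parser.can_fetch("Mediapartners-Google", base + "/search/label/gadget"))  # True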








How do I completely remove the indexed /search links from the Google
results?




I'm not sure if it works, but you can try the method here. However, I would suggest that you leave the pages as they are; they won't do any harm to your SEO. Many Blogger blogs perform well in the SERPs even though Google has their label search pages in its index.
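
Whichever approach the linked method takes, keep in mind that a page can only be dropped from the index via a robots noindex directive if Google is allowed to crawl it and actually see that directive. As a rough way to confirm whether a label page is currently sending a noindex signal, a small script like the following could fetch it and look for a robots meta tag or an X-Robots-Tag header (a minimal sketch; the URL is the example from the question, and the string matching is deliberately crude rather than a real HTML parse):

import urllib.request

# Example label page taken from the question above.
url = "http://castbird-sourcing.blogspot.com/search/label/gadget"

with urllib.request.urlopen(url) as response:
    x_robots = response.headers.get("X-Robots-Tag", "")
    body = response.read().decode("utf-8", errors="replace").lower()

# Crude check: looks for a robots meta tag and the word "noindex" anywhere
# in the page; it does not confirm they appear in the same tag.
has_robots_meta = "name='robots'" in body or 'name="robots"' in body
has_noindex = "noindex" in body

print("X-Robots-Tag header:", x_robots or "(none)")
print("robots meta tag present:", has_robots_meta)
print("noindex mentioned in page:", has_noindex)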


Once this SEO, Google Search, robots.txt, and Googlebot issue is resolved, there's a good chance that your content will get indexed and you'll start to show up in Google search results. That means a better chance of driving organic search traffic to your site.
