Google's John Mueller answered a question about why Google indexes pages that are disallowed from crawling by robots.txt, and why it's safe to ignore the related Search Console reports about those crawls.

Bot Traffic To Query Parameter URLs

The person asking the question documented that bots were creating links to non-existent query parameter URLs (?q=xyz) pointing to pages with noindex meta tags that are also blocked by robots.txt. What prompted the question is that Google is crawling the links to those pages, getting blocked by robots.txt (without seeing a noindex robots meta tag), then getting reported in Google Search Console as "Indexed, though blocked by robots.txt."

The person asked the following question:

"But here's the big question: why would Google index pages when they can't even see the content? What's the advantage in that?"

Google's John Mueller confirmed that if they can't crawl the page, they can't see the noindex meta tag. He also makes an interesting mention of the site: search operator, advising to ignore its results because the "average" users won't see them.

He wrote:

"Yes, you're correct: if we can't crawl the page, we can't see the noindex. That said, if we can't crawl the pages, then there's not a lot for us to index. So while you might see some of those pages with a targeted site:-query, the average user won't see them, so I wouldn't fuss over it. Noindex is also fine (without robots.txt disallow), it just means the URLs will end up being crawled (and end up in the Search Console report for crawled/not indexed; neither of these statuses cause issues to the rest of the site).
The important part is that you don't make them crawlable + indexable."

Takeaways:

1. Mueller's answer confirms the limitations of using the site: advanced search operator for diagnostic purposes. One of those reasons is that it's not connected to the regular search index; it's a separate thing altogether.

Google's John Mueller discussed the site: search operator in 2021:

"The short answer is that a site: query is not meant to be complete, nor used for diagnostic purposes.

A site query is a specific kind of search that limits the results to a specific website. It's basically just the word site, a colon, and then the website's domain.

This query limits the results to a specific website. It's not meant to be a comprehensive collection of all the pages from that website."

2. A noindex tag without a robots.txt disallow is fine for these kinds of situations where a bot is linking to non-existent pages that are getting discovered by Googlebot.

3. URLs with the noindex tag will generate a "crawled/not indexed" entry in Search Console, and those won't have a negative effect on the rest of the website.

Read the question and answer on LinkedIn:

Why would Google index pages when they can't even see the content?

Featured Image by Shutterstock/Krakenimages.com
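The interaction Mueller describes can be sketched with Python's standard library. This is only an illustration, not how Googlebot actually works: the robots.txt rules, URLs, page HTML, and decision strings below are all hypothetical, and the stdlib parser only understands prefix rules, so a `/search` prefix stands in for the wildcard pattern (`/*?q=`) that Googlebot itself would support.

```python
from html.parser import HTMLParser
from urllib import robotparser

# Hypothetical robots.txt for the site in the question. Python's stdlib
# parser only supports prefix matching, so "/search" stands in for a
# Googlebot-style wildcard rule like "Disallow: /*?q=".
RULES = """\
User-agent: *
Disallow: /search
"""


class NoindexDetector(HTMLParser):
    """Detects <meta name="robots" content="...noindex..."> in fetched HTML."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if (a.get("name", "") or "").lower() == "robots" and \
                    "noindex" in (a.get("content") or "").lower():
                self.noindex = True


def crawl_decision(url: str, page_html: str) -> str:
    rp = robotparser.RobotFileParser()
    rp.parse(RULES.splitlines())
    if not rp.can_fetch("*", url):
        # The fetch never happens, so any noindex meta tag in the page is
        # invisible; the URL can still surface as "Indexed, though blocked
        # by robots.txt" purely from links pointing at it.
        return "blocked by robots.txt (noindex never seen)"
    detector = NoindexDetector()
    detector.feed(page_html)
    if detector.noindex:
        # Crawled and noindex seen: reported as "crawled/not indexed",
        # which causes no issues for the rest of the site.
        return "crawled, kept out of the index by noindex"
    return "crawlable + indexable"


PAGE = '<html><head><meta name="robots" content="noindex"></head><body></body></html>'
print(crawl_decision("https://example.com/search?q=xyz", PAGE))
print(crawl_decision("https://example.com/page?q=xyz", PAGE))
```

The sketch shows why the two directives conflict: the robots.txt check happens before the page body is ever fetched, so a disallow rule prevents the noindex tag from being read at all, which is exactly why Mueller recommends noindex without a robots.txt disallow for pages that should stay out of the index.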