txt file is then parsed and instructs the robot as to which pages are not to be crawled. Because a search engine crawler may keep a cached copy of the file, it may occasionally crawl pages a webmaster does not want crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts
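To make the parsing step concrete, here is a minimal sketch using Python's standard-library robots.txt parser (urllib.robotparser). The robots.txt content, the "ExampleBot" user agent, and the URLs are hypothetical illustrations, not taken from the article.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt a webmaster might publish to keep
# login-specific pages such as shopping carts out of crawls.
robots_txt = """\
User-agent: *
Disallow: /cart/
Disallow: /login/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A well-behaved crawler consults the parsed rules before fetching a URL.
print(rp.can_fetch("ExampleBot", "https://example.com/cart/checkout"))  # False
print(rp.can_fetch("ExampleBot", "https://example.com/products"))       # True
```

In practice a crawler would fetch the live file with set_url() and read(); a cached copy of these rules is exactly what can become stale and lead to unwanted crawls, as the text above notes.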