Basics of Googlebot – Accessing Sites, Content, and Blocking

Googlebot
Just as a computer is incomplete without a CPU, the world of Google is incomplete without Googlebot. It is also known as a spider: through a process called crawling, it discovers new and updated pages and adds them to the existing Google index. Because a huge number of machines are used to fetch billions of pages, a complex set of algorithms determines how often a particular page is crawled and how many pages are fetched from each site.

The crawling process starts with a list of web page URLs generated during previous crawls. Newly discovered pages and pages that have been modified are checked and added to the Google index, and any new links found on them are added to the list of pages to crawl.

  • How Googlebot accesses sites: On average, Googlebot should access a site only once every few seconds, although network delays can make this rate appear higher. Google generally downloads only one copy of each page; if multiple copies appear to be downloaded, the crawler was probably stopped and restarted. To improve performance and scale with the growing web, Googlebot was designed to be distributed across many machines, and many crawlers run on machines located near the sites they are indexing to cut down bandwidth usage. Google's aim is to crawl as many pages from a site as it can on each visit without overwhelming the server's bandwidth. A request can also be made to change the crawl rate.
  • Blocking Googlebot from accessing content on a site: A web server can hardly be kept secret simply by not publishing links to it. As soon as someone follows a link from the "secret" server to another web server, the "secret" URL appears in the referrer tag and can be stored or published in the other server's referrer log. Similarly, the web contains many outdated and broken links: someone may publish an incorrect link to a site, or fail to update links after changes on the server, and Googlebot will then try to download that incorrect URL. To prevent Googlebot from crawling content on the site, there are several options, including robots.txt, which blocks access to files and directories on the server (a small sketch follows the tips below). Some additional tips:
  1. Test that robots.txt is working as expected. The robots.txt tool on the Blocked URLs page can be used to see exactly how Googlebot will interpret the contents of the robots.txt file.
  2. To see how a site appears to Googlebot, use the Fetch as Google tool in Webmaster Tools. It is very helpful for troubleshooting problems with the site's content or its discoverability in search results.
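
As a minimal sketch, a robots.txt file placed at the root of the site might look like the lines below; the directory names are only placeholders and should be replaced with the paths to be blocked.

    # Block Googlebot from two example directories (placeholder names)
    User-agent: Googlebot
    Disallow: /private/
    Disallow: /archives/old/

    # Allow every other crawler to access the whole site
    User-agent: *
    Disallow:

Keep in mind that robots.txt only asks well-behaved crawlers not to fetch the listed paths; content that must never appear in search results should also be protected by other means, such as authentication.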

We hope this article has helped you learn more about Googlebot and Webmaster Tools. Suggestions are always welcome.
