Web Crawler

A web crawler, also known as a spider or spiderbot (e.g. Googlebot, Bingbot), is a bot that systematically visits sites across the internet to work out what each page is about and to collect data about each site for search engines. The information gathered while crawling is then used to index pages in what is essentially the search engine's address book of the internet. Unless instructed otherwise via a 'nofollow' attribute, crawlers follow every hyperlink on your pages, then follow the hyperlinks on those pages, and in that way make their way around the internet.
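
For illustration, here is a minimal sketch of that crawl loop in Python, using only the standard library. The seed URL, the `max_pages` limit, and the use of example.com are placeholders; a real crawler would also honour robots.txt, throttle its requests, and distribute the work across many machines.

```python
# Minimal sketch of the crawl loop described above (assumptions: a single
# seed URL, a small page limit, and no robots.txt or rate-limit handling).
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags, skipping rel="nofollow" links."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        if "nofollow" in (attrs.get("rel") or ""):
            return  # the page asks crawlers not to follow this hyperlink
        if attrs.get("href"):
            self.links.append(attrs["href"])


def crawl(seed_url, max_pages=10):
    """Breadth-first crawl from seed_url; returns a URL -> HTML index."""
    queue = deque([seed_url])
    index = {}
    while queue and len(index) < max_pages:
        url = queue.popleft()
        if url in index:
            continue  # already crawled this page
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue  # unreachable page; skip it
        index[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        # Follow the hyperlinks found on this page.
        queue.extend(urljoin(url, link) for link in parser.links)
    return index


if __name__ == "__main__":
    pages = crawl("https://example.com")  # placeholder seed URL
    print(f"Crawled {len(pages)} page(s)")
```

The queue-based, breadth-first traversal mirrors how the paragraph describes crawling: each fetched page contributes new links, which are queued and visited in turn, while 'nofollow' links are dropped before they ever enter the queue.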