
GoogleBot takes over news crawling

Google has announced that it will no longer use a dedicated bot to crawl the sites that make up its news search offering.

Previously, the search engine used two separate crawlers to index pages into its databases.

But an official post on the Webmaster Central blog has made it clear that GoogleBot will now take over the task.

The primary user agent will still respect news-specific directives, such as robots meta tags or robots.txt rules, where they are more restrictive than a site's general settings, meaning little will change for the majority of companies.
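As an illustration, the more restrictive rule might be a news-only block. The snippet below is a minimal robots.txt sketch, assuming the news-specific user agent token Googlebot-News and an illustrative /archive/ path; publishers should check the exact names against Google's current documentation.

    # Keep one section out of the news index only, while leaving it
    # open to ordinary web search crawling.
    User-agent: Googlebot-News
    Disallow: /archive/

    User-agent: Googlebot
    Disallow:

Because the first rule is the more restrictive of the two, the blog post indicates it would still be honoured even though a single bot now performs the crawl.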

Site owners whose pages are indexed by Google News will still be able to track traffic volumes from the specialised service; however, visits from the dedicated crawler will no longer register in a weblog.
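To see what will be lost from raw server logs, consider how a simple crawl report might be generated today. The Python sketch below is illustrative only: the log file name, the combined log format and the user agent tokens are assumptions, not details from Google's announcement.

    # count_crawler_hits.py - minimal sketch, assuming an Apache/Nginx
    # "combined" log format where the user agent is the last quoted field.
    from collections import Counter

    CRAWLERS = ("Googlebot-News", "Googlebot")  # check the news-specific token first

    def crawler_hits(log_path):
        hits = Counter()
        with open(log_path, encoding="utf-8", errors="replace") as log:
            for line in log:
                # The user agent sits between the final pair of double quotes.
                fields = line.rsplit('"', 2)
                user_agent = fields[-2] if len(fields) == 3 else ""
                for crawler in CRAWLERS:
                    if crawler in user_agent:
                        hits[crawler] += 1
                        break  # count each request once, under the most specific match
        return hits

    if __name__ == "__main__":
        print(crawler_hits("access.log"))

Once every request arrives under the single GoogleBot user agent, the news-specific line in a report like this simply stops appearing.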

Sitemap information will not need to be changed for the majority of companies that provide news content, unless a page sits behind a dedicated log-in, payment or registration requirement.
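For publishers that do fall into that category, the news sitemap format provides a field for flagging restricted content. The entry below is a minimal sketch: the URL, publication details and date are placeholders, and the access value shown should be confirmed against the current news sitemap schema, which has also allowed a Subscription value.

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
      <url>
        <loc>http://www.example.com/articles/sample-story.html</loc>
        <news:news>
          <news:publication>
            <news:name>Example Times</news:name>
            <news:language>en</news:language>
          </news:publication>
          <news:access>Registration</news:access>
          <news:publication_date>2011-03-01</news:publication_date>
          <news:title>Sample story title</news:title>
        </news:news>
      </url>
    </urlset>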

Webmasters can still access error reports if they have trouble with indexing, but the channels used to provide this data have changed and now run through the webmaster tools Google provides.

Industry commentators are unsure of the reasons behind the online search company's move to a single bot, with Search Engine Land's Vanessa Fox asserting: "This change seems like a bit of a step backward to me".

Fox continued: "When Google News was first launched, GoogleBot crawled for both web search and News, and news publishers asked for a news-specific bot."

The use of two separate, dedicated crawlers meant that webmasters could obtain in-depth, granular reports on exactly which content was accessed by which bot at any given time.

The changes mean that site owners will have less insight into how often a page is crawled, and for what purpose.
