Recovery.gov Search Engine Conspiracy?

CNET reports that the new government website Recovery.gov had been using a robots.txt file to prevent its pages from being indexed by search engines. The file, which has since been removed, contained the following rules blocking Google, Yahoo, and any other well-behaved search engine from indexing any of its content:

# Deny all search bots, web spiders
User-agent: *
Disallow: /
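
To illustrate what those two directives actually do, here is a minimal sketch using Python's standard urllib.robotparser module to parse the quoted rules directly (the file is gone from the live site, so the text is fed in by hand; the recovery.gov URLs below are just example paths):

from urllib.robotparser import RobotFileParser

# Parse the robots.txt rules quoted above.
parser = RobotFileParser()
parser.parse([
    "User-agent: *",
    "Disallow: /",
])

# Any crawler that honors robots.txt is denied every path on the site.
print(parser.can_fetch("Googlebot", "http://www.recovery.gov/"))       # False
print(parser.can_fetch("Googlebot", "http://www.recovery.gov/about"))  # False

The wildcard "User-agent: *" applies the rule to every crawler, and "Disallow: /" covers the entire site, so compliant bots would skip recovery.gov altogether.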

A website hailed by Obama as a showcase for "transparent" government was blocking search engines from indexing its content? Seems kind of fishy to me.

Personally, I believe it's a simple mistake on the part of the developers: disallowing search engine crawlers via robots.txt is common practice while a site is still in development, and the file may simply have been left in place at launch. However, I think it's a topic worthy of discussion.

Original CNET article here.
