By placing a structured text file called robots.txt at the top level of your server, you can indicate that certain parts of the site are off-limits to some or all robots. It is best explained with an example:
# robots.txt file for general use on web servers.
User-agent: webcrawler
Disallow:

User-agent: googlebot
Disallow: /

User-agent: *
Disallow: /cgi-bin
Disallow: /logs
The first line, starting with '#', specifies a comment.
The first paragraph specifies that the robot called 'webcrawler' has nothing disallowed: it may go anywhere.
The second paragraph indicates that the robot called 'googlebot' has all relative URLs starting with '/' disallowed. Because all relative URLs on a server start with '/', this means the entire site is closed off to it.
The third paragraph indicates that all other robots should not visit URLs starting with '/cgi-bin' or '/logs'. Note that '*' is a special token, meaning "any other User-agent"; you cannot use wildcard patterns or regular expressions in either User-agent or Disallow lines.
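If you want to double-check how a given crawler would read such a file, Python's standard library includes a robots.txt parser. The sketch below feeds the example above into urllib.robotparser and asks a few questions; the host name example.com and the crawler name 'SomeOtherBot' are placeholders used only for illustration.

from urllib.robotparser import RobotFileParser

# The example robots.txt from above, as a string.
ROBOTS_TXT = """\
User-agent: webcrawler
Disallow:

User-agent: googlebot
Disallow: /

User-agent: *
Disallow: /cgi-bin
Disallow: /logs
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# 'webcrawler' has nothing disallowed, so it may fetch anything.
print(parser.can_fetch("webcrawler", "http://example.com/cgi-bin/stats"))    # True

# 'googlebot' is disallowed everything under '/'.
print(parser.can_fetch("googlebot", "http://example.com/index.html"))        # False

# Any other robot falls under the '*' record: /cgi-bin and /logs are off-limits.
print(parser.can_fetch("SomeOtherBot", "http://example.com/cgi-bin/stats"))  # False
print(parser.can_fetch("SomeOtherBot", "http://example.com/products.html"))  # True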
Two common errors:
Wildcards are not supported: instead of 'Disallow: /tmp/*' just say 'Disallow: /tmp'.
You shouldn't put more than one path on a Disallow line; use a separate Disallow line for each path (this may change in a future version of the spec). A short check of both points follows below.
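A quick way to convince yourself of the first point is that each Disallow value is matched as a simple path prefix, so '/tmp' already covers everything underneath it. Below is a small check using the same standard-library parser as above; the paths and bot name are only examples.

from urllib.robotparser import RobotFileParser

# Each Disallow value is a plain prefix: '/tmp' covers '/tmp/anything' too,
# and each path gets its own Disallow line.
parser = RobotFileParser()
parser.parse("""\
User-agent: *
Disallow: /tmp
Disallow: /private
""".splitlines())

print(parser.can_fetch("AnyBot", "http://example.com/tmp/scratch.html"))   # False
print(parser.can_fetch("AnyBot", "http://example.com/private/notes.txt"))  # False
print(parser.can_fetch("AnyBot", "http://example.com/index.html"))         # True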
Ultimately, without robots.txt files on your servers and domains, you are risking a variety of potential problems, including unauthorized access to your cgi directory, unauthorized viewing of your site stats, and possible spamming of the search engines through accidental crawling of doorway pages.
One distinct advantage of having a robots.txt file on your server is that you can tell when and where your site has been indexed, or is about to be indexed. Well-behaved robots will request the robots.txt file BEFORE any other page on your server, so as long as you keep an eye open for requests for this file, you can see who is knocking at your site for indexing purposes.
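One simple way to keep that eye open is to scan your access log for requests to /robots.txt. The sketch below assumes an Apache/NGINX style combined log format in which the last quoted field is the user-agent; the log path is hypothetical and will differ on your server.

import re

LOG_FILE = "/var/log/apache/access.log"  # hypothetical path; adjust for your server

# Combined log format: IP ... "GET /path HTTP/1.x" status size "referer" "user-agent"
pattern = re.compile(r'^(\S+).*"GET /robots\.txt [^"]*".*"([^"]*)"$')

with open(LOG_FILE) as log:
    for line in log:
        match = pattern.search(line.strip())
        if match:
            ip, user_agent = match.groups()
            print(f"{ip}  {user_agent}")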
Below is a robots.txt example that you can copy and paste into a text document to use on your own server:
User-agent: *
Disallow: /cgi-bin
Disallow: /logs
The above will allow all spiders to crawl all of your site except the subdirectories 'cgi-bin' and 'logs', which may be changed to whatever subdirectories you do not wish the spiders to crawl on your server.
Article written by Lee.
http://www.webmasteradvertising.com