
Thread: Robots.txt - The Basics

  1. #1
    Join Date
    Oct 2003
    Location
    New Orleans, Louisiana.
    Posts
    21,636

    Robots.txt - The Basics

    By writing a structured text file you can indicate to robots that certain parts of your server are off-limits to some or all robots. It is best explained with an example:

    # robots.txt file for general use on web servers.

    User-agent: webcrawler
    Disallow:

    User-agent: googlebot
    Disallow: /

    User-agent: *
    Disallow: /cgi-bin
    Disallow: /logs
    The first line, starting with '#', specifies a comment.

    The first paragraph specifies that the robot called 'webcrawler' has nothing disallowed: it may go anywhere.

    The second paragraph indicates that the robot called 'googlebot' has all relative URLs starting with '/' disallowed. Because all relative URLs on a server start with '/', this means the entire site is closed off to it.

    The third paragraph indicates that all other robots should not visit URLs starting with /cgi-bin or /logs. Note that '*' is a special token meaning "any robot not matched by a previous record"; you cannot use wildcard patterns or regular expressions in either User-agent or Disallow lines.
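    Python's standard library ships a parser for exactly this format, so the example above can be checked mechanically. A sketch using `urllib.robotparser` (the bot names are the ones from the example; 'somebot' stands in for any other robot):

    ```python
    from urllib.robotparser import RobotFileParser

    # The example rules from above, fed to the parser line by line.
    rules = """\
    # robots.txt file for general use on web servers.

    User-agent: webcrawler
    Disallow:

    User-agent: googlebot
    Disallow: /

    User-agent: *
    Disallow: /cgi-bin
    Disallow: /logs
    """.splitlines()

    rp = RobotFileParser()
    rp.parse(rules)

    print(rp.can_fetch("webcrawler", "/cgi-bin/script.pl"))  # webcrawler may go anywhere
    print(rp.can_fetch("googlebot", "/index.html"))          # googlebot is shut out entirely
    print(rp.can_fetch("somebot", "/cgi-bin/script.pl"))     # other bots: /cgi-bin is off-limits
    print(rp.can_fetch("somebot", "/index.html"))            # ...but the rest of the site is open
    ```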

    Two common errors:

    Wildcards are not supported: instead of 'Disallow: /tmp/*' just say 'Disallow: /tmp'.
    You shouldn't put more than one path on a Disallow line (this may change in a future version of the spec)
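    The first point can be verified with the same stdlib parser: a bare 'Disallow: /tmp' is a prefix match, so it already covers everything underneath with no wildcard needed (a sketch; 'anybot' is a made-up agent name):

    ```python
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.parse(["User-agent: *", "Disallow: /tmp"])

    # '/tmp' is matched as a prefix, so 'Disallow: /tmp/*' would add nothing:
    print(rp.can_fetch("anybot", "/tmp"))              # blocked
    print(rp.can_fetch("anybot", "/tmp/scratch.txt"))  # blocked too
    print(rp.can_fetch("anybot", "/index.html"))       # still allowed
    ```

    One side effect of plain prefix matching: 'Disallow: /tmp' also blocks a path like '/tmpfiles'. If that matters, use 'Disallow: /tmp/' (with trailing slash) instead.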
    Ultimately, without robots.txt files on your servers/domains you risk a variety of problems, including unauthorized access to your cgi directory, unauthorized viewing of your site stats, and possible spamming of the search engines through accidental crawling of doorway pages.

    One distinct advantage of having a robots.txt file on your server is that, quite simply, you can tell when and where your site has been indexed (or is about to be): compliant robots request the robots.txt file BEFORE any other page on your server. As long as you keep an eye open for requests for this file in your logs, you can see who is knocking at your site for indexing purposes.
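    One way to "keep an eye open" for those calls is to scan your access log for requests to /robots.txt. A sketch, assuming the common Apache/nginx combined log format (the log path in the comment and the exact field layout are assumptions; adjust for your server):

    ```python
    import re

    # Combined-log-format line, e.g.:
    # 66.249.66.1 - - [10/Mar/2024:06:12:01 +0000] "GET /robots.txt HTTP/1.1" 200 54 "-" "Googlebot/2.1 (...)"
    LOG_LINE = re.compile(
        r'^(\S+) \S+ \S+ \[([^\]]+)\] '   # client IP, timestamp
        r'"(?:GET|HEAD) ([^ "]+)[^"]*" '  # method and requested path
        r'\d+ \S+ "[^"]*" "([^"]*)"'      # status, size, referer, user-agent
    )

    def robots_hits(lines):
        """Yield (ip, timestamp, user_agent) for every request for /robots.txt."""
        for line in lines:
            m = LOG_LINE.match(line)
            if m and m.group(3) == "/robots.txt":
                yield m.group(1), m.group(2), m.group(4)

    # Usage (hypothetical log path):
    # with open("/var/log/apache2/access.log") as f:
    #     for ip, when, agent in robots_hits(f):
    #         print(ip, when, agent)
    ```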

    Below is a robots.txt example that you can copy and paste into a text document to use on your own server:

    <!--Start Copy Below This Line-->

    User-agent: *
    Disallow: /cgi-bin
    Disallow: /logs

    <!--End Copy Above This Line-->

    The above will allow all spiders to crawl all of your site except the subdirectories 'cgi-bin' and 'logs'; alter these to suit any subdirectories you do not wish the spiders to crawl on your server.
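    Before uploading, you can sanity-check the copied rules with Python's stdlib parser. A sketch (the local file name and the bot name 'anybot' are placeholders):

    ```python
    from urllib.robotparser import RobotFileParser

    ROBOTS = """User-agent: *
    Disallow: /cgi-bin
    Disallow: /logs
    """

    # Write the file exactly as it should appear at the root of the site.
    with open("robots.txt", "w") as f:
        f.write(ROBOTS)

    # Re-read it and confirm the rules do what we expect before uploading.
    rp = RobotFileParser()
    with open("robots.txt") as f:
        rp.parse(f.read().splitlines())

    print(rp.can_fetch("anybot", "/logs/access.log"))  # should be False
    print(rp.can_fetch("anybot", "/index.html"))       # should be True
    ```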

    Article written by Lee.

    http://www.webmasteradvertising.com


  2. #2
    GWW Newbie..Be Nice..
    Join Date
    Nov 2013
    Posts
    28
    The robots.txt file is the main mechanism that tells web crawlers and search bots which parts of your website they may crawl and index. You can use this file to allow robots to index your website or to keep them from doing so.


  3. #3
    affordable web design india
    Join Date
    Jan 2014
    Location
    Indore
    Posts
    14
    The robots.txt file is the main component that tells web spiders and search bots whether they may crawl your site. You can use this file to allow bots to crawl your site or to keep them out.


  4. #4
    GWW Newbie..Be Nice..
    Join Date
    May 2015
    Location
    indiaa
    Posts
    29
    Robots.txt is a file in the root directory of your web site that instructs web crawlers which parts of your site (all, some, or none) they are allowed to examine.


  5. #5
    GWW Newbie..Be Nice..
    Join Date
    Nov 2015
    Location
    USA
    Posts
    3
    Good to know about robots.txt in such detail.


  6. #6
    GWW Newbie..Be Nice..
    Join Date
    Jun 2017
    Posts
    7
    Good to know about robots.txt in such detail.


  7. #7
    GWW Newbie..Be Nice.. techimpero's Avatar
    Join Date
    Mar 2024
    Location
    New Delhi
    Posts
    6
    A robots.txt file serves as a guide for web crawlers, indicating which areas of a website they can or cannot access. It's a simple text file located at the root directory of a website. It employs a basic syntax: user-agent identifies the crawler, while directives specify permissions. "Disallow" blocks access to specific URLs, while "Allow" permits access. The file can also include comments preceded by "#" symbols. Effective use of robots.txt can optimize a website's crawl budget, directing crawlers to important content and preventing them from indexing irrelevant or sensitive information. However, it's important to note that robots.txt directives are merely suggestions to compliant crawlers and may not be respected by all. Regularly updating and monitoring this file is crucial for effective website management and SEO.
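    The Allow/Disallow interplay described above is worth a concrete example. Rule order matters to some parsers: Python's stdlib `urllib.robotparser` applies the first matching rule, while Google documents most-specific-match semantics, so putting the Allow line first works under both interpretations. A sketch (the paths are made up):

    ```python
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.parse([
        "# Open one report inside an otherwise-blocked directory.",
        "User-agent: *",
        "Allow: /private/annual-report.html",
        "Disallow: /private",
    ])

    print(rp.can_fetch("anybot", "/private/annual-report.html"))  # allowed by the Allow rule
    print(rp.can_fetch("anybot", "/private/notes.txt"))           # blocked by the Disallow rule
    ```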


  8. #8
    GWW Newbie..Be Nice.. techimpero's Avatar
    Join Date
    Mar 2024
    Location
    New Delhi
    Posts
    6
    Robots.txt is a text file that tells web robots (like search engine crawlers) which pages or files they can or cannot crawl on a website. It's placed in the root directory of a website and uses simple syntax to specify directives for crawlers. "Disallow" blocks access to certain pages or directories, while "Allow" permits access. It's essential for controlling search engine indexing, managing crawl budget, and protecting sensitive content. However, it's important to note that robots.txt is a guideline, not a strict rule, and some robots may ignore it.

