I would like to use robots.txt to block some duplicate pages that my script is producing.
I want to block this page: http://www.example.com/cgi-bin/pseek...tegory_widgets
Would this work to block the URL from being indexed by search engines?
User-Agent: *
Disallow: /cgi-bin/pseek/dirs.cgi?lv
Or would it be better to write out the full URL for each page I want to block, like this?
User-Agent: *
Disallow: /cgi-bin/pseek/dirs.cgi?lv=2&ct=category_widgets
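For reference, one quick way to sanity-check rules like these is Python's standard urllib.robotparser, which implements the plain prefix matching from the original robots.txt standard. A minimal sketch, assuming the shorter prefix rule from the first option (the URL is just my example one from above):

from urllib.robotparser import RobotFileParser

# The first option's rules; Disallow matches by path prefix.
rules = [
    "User-Agent: *",
    "Disallow: /cgi-bin/pseek/dirs.cgi?lv",
]

parser = RobotFileParser()
parser.parse(rules)

# The full duplicate URL is blocked, because it starts with the disallowed prefix.
print(parser.can_fetch("*", "http://www.example.com/cgi-bin/pseek/dirs.cgi?lv=2&ct=category_widgets"))  # False

# The script's base page (no query string) is still crawlable.
print(parser.can_fetch("*", "http://www.example.com/cgi-bin/pseek/dirs.cgi"))  # True

If the major engines apply that same prefix behavior, the shorter rule would already cover every lv/ct variation, so listing each full URL separately shouldn't be necessary.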