Is it necessary to use Robots.txt?

Status
Not open for further replies.
No, Google and others will still crawl your site without it. Robots.txt is most useful when you want to block crawling of specific pages and/or directories; the robots meta tag is the tool for keeping specific pages out of the index.
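To make that concrete, here is a minimal robots.txt (the directory name and sitemap URL are placeholders) that blocks one directory for all crawlers while leaving the rest of the site crawlable:

```
User-agent: *
Disallow: /private/

Sitemap: https://example.com/sitemap.xml
```

The file must live at the root of the host (e.g. https://example.com/robots.txt) for crawlers to find it.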
 
I wasn't aware of that warning from Google; better safe than sorry. I should have been clearer that robots.txt is not only for disallowing, but yes, effectively that's its main use. I've always included one by default as good practice, but I didn't know Google had become so restrictive. Now that they have all the control, the barriers to entry keep rising. I remember a time when you only had to put a page online to see lots of traffic. Nowadays, with all the work we put into SEO, it's almost as if we're doing the search engines' work for them.
 
Use it if crawling of your content is causing issues on your server. For example, you may want to disallow crawling of infinite calendar scripts. You should not use the robots.txt to block private content (use server-side authentication instead), or handle canonicalization. If you must be certain that a URL is not indexed, use the robots meta tag or X-Robots-Tag HTTP header instead.
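For completeness, the noindex mechanisms mentioned above look like this (the page is hypothetical; noindex is the directive Google documents for keeping a URL out of the index):

```html
<!-- In the <head> of the page you want kept out of the index: -->
<meta name="robots" content="noindex">
```

For non-HTML files such as PDFs, the equivalent is the HTTP response header X-Robots-Tag: noindex. Note that crawlers can only see either one if the URL is not blocked in robots.txt.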

 
Yes, robots.txt is really important for keeping search engines from crawling specific pages of a website.
 
The robots exclusion protocol (REP), or robots.txt, is a text file webmasters create to instruct robots (typically search engine crawlers) which pages on their website they may crawl.
So I think it is necessary to use robots.txt if you want to improve your SEO.
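If you want to sanity-check your rules before deploying them, Python's standard library ships a REP parser. A small sketch (the rules and URLs here are made up for illustration):

```python
# Check which URLs a given user agent may crawl under a robots.txt,
# using the standard-library urllib.robotparser module.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A URL under the disallowed directory is blocked...
print(parser.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False
# ...while everything else remains crawlable.
print(parser.can_fetch("Googlebot", "https://example.com/public/page.html"))   # True
```

In a real setup you would point RobotFileParser at the live file with set_url() and read() instead of parsing an inline string.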
 
I think robots.txt is necessary for a website. With this file, you can control which files or which parts of your website's directory tree search engines may crawl, and in that way influence what they surface on the web. :)
 