The robots.txt file is primarily used to tell spiders (also called web crawlers) which parts of your website they may and may not crawl. It can specify different rules for different spiders, as shown in the sketch below.
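Here is a minimal, hypothetical robots.txt illustrating per-crawler rules; the paths and the choice of Googlebot are placeholders, not a recommendation for any particular site:

```text
# Rules that apply only to Googlebot
User-agent: Googlebot
Disallow: /private/

# Rules that apply to every other crawler
User-agent: *
Disallow: /admin/
Allow: /
```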
Googlebot is an example of a spider. Google deploys it to crawl the Internet and record information about websites, which helps Google decide how to rank those websites in search results.
Using a robots.txt file with your website is a web standard (the Robots Exclusion Protocol). Spiders look for the robots.txt file in the root directory (the top-level folder) of your website, for example at https://www.example.com/robots.txt.
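If you want to check a site's rules programmatically, Python's standard library can fetch and parse the robots.txt at the root for you. This is a minimal sketch using urllib.robotparser; the example.com URL and the Googlebot user agent are placeholders:

```python
from urllib.robotparser import RobotFileParser

# Point the parser at the robots.txt in the site's root directory.
parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

# Ask whether a given crawler is allowed to fetch a given URL.
allowed = parser.can_fetch("Googlebot", "https://www.example.com/private/page.html")
print(allowed)  # False if the (hypothetical) rules above disallow /private/ for Googlebot
```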