Understanding the Robots.txt File
A robots.txt file lives at the root of your website and instructs web robots (typically search engine crawlers) on which pages or files they can or cannot request from your site.
This is useful for managing crawler traffic, keeping low-value or duplicate pages (like admin dashboards, API routes, or internal search results) out of crawlers' fetch queues, and reducing unnecessary server load. Note that robots.txt controls crawling, not indexing: it tells compliant bots which URLs not to fetch.
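A minimal robots.txt reflecting the cases above might look like the following (the paths and sitemap URL are illustrative, not prescriptive):

```
# Rules apply to all compliant crawlers
User-agent: *
Disallow: /admin/
Disallow: /api/
Disallow: /search

# Optional: point crawlers at your sitemap
Sitemap: https://example.com/sitemap.xml
```

Rules are grouped under a `User-agent` line (`*` matches any bot), and a trailing slash in `Disallow: /admin/` blocks everything under that directory.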
Important Security Note:
robots.txt is not a security mechanism. Malicious bots can and will ignore these rules, and a disallowed URL can still end up indexed if other sites link to it. Never use Disallow to hide sensitive data or credentials; use proper server-side authentication instead.