🤖 Free Tool — No Signup Required

Robots.txt Analyzer & Generator

Analyze your site's crawl rules and sitemap declarations — or generate a valid robots.txt file from scratch.


Want the full picture?

Robots.txt is just one piece. Get a complete SEO audit covering meta tags, schema, speed, mobile, and 50+ other factors — free.

Get Your Free SEO Audit →

⚡ Quick Presets

✅ Allow All

Let all search engines crawl everything. Best for most sites.
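As a sketch, the allow-all preset is the simplest valid robots.txt. An empty `Disallow:` value means nothing is blocked:

```text
User-agent: *
Disallow:
```

Note the difference between an empty `Disallow:` (allow everything) and `Disallow: /` (block everything); confusing the two is a common and costly mistake.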

🚫 Block AI Crawlers

Block GPTBot, CCBot, Google-Extended, and other AI training bots.
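A generated file for this preset might look something like the sketch below. The user-agents shown are real AI crawler tokens; the preset may include others, and new AI bots appear regularly:

```text
# Block common AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Allow all other crawlers
User-agent: *
Disallow:
```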

📝 WordPress

Block admin, uploads, and WordPress-specific paths. Allow content.
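An illustrative sketch of this preset (paths are standard WordPress defaults, but your install may differ):

```text
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Disallow: /wp-login.php
Disallow: /wp-content/uploads/
```

The `Allow: /wp-admin/admin-ajax.php` line matters because many themes and plugins load content through that endpoint. Blocking `/wp-content/uploads/` also keeps images out of Google Images, so only include it if that is intended.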

🚀 SaaS App

Block dashboard, login, API, and app routes. Allow marketing pages.
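A sketch of this preset, using illustrative route names (your app's actual routes may differ):

```text
User-agent: *
Disallow: /dashboard/
Disallow: /login
Disallow: /api/
Disallow: /app/
```

Everything not listed, such as your marketing and blog pages, remains crawlable by default.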

🛒 E-commerce

Block cart, checkout, account pages. Allow product/category pages.
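A sketch of this preset with typical (illustrative) store paths:

```text
User-agent: *
Disallow: /cart
Disallow: /checkout
Disallow: /account/
```

Cart, checkout, and account URLs are session-specific and have no search value, so blocking them conserves crawl budget for product and category pages.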

⚙️ Custom

Start from scratch and build your own robots.txt rules.

🌐 Site Settings

💡 Tip: Always add your sitemap URL at the bottom of robots.txt. This helps all crawlers discover your sitemap automatically, without needing to submit it to each search engine individually.
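Following that tip, a minimal allow-all file with a sitemap declaration might look like this (replace the placeholder domain with your own; the URL must be absolute):

```text
User-agent: *
Disallow:

Sitemap: https://yourdomain.com/sitemap.xml
```

The `Sitemap:` directive is independent of any `User-agent:` group, so it can appear anywhere in the file; the end is simply the conventional spot.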

📋 Crawl Rules

📄 Generated robots.txt


                        

🚀 How to Install Your robots.txt

  1. Download your generated robots.txt file using the button above.
  2. Upload it to the root directory of your website (same level as your homepage). The file must be accessible at https://yourdomain.com/robots.txt.
  3. Verify it's live by visiting yourdomain.com/robots.txt in your browser.
  4. Test it in Google Search Console → Settings → robots.txt to ensure Google can read it correctly.

Frequently Asked Questions

What is a robots.txt file and why does it matter for SEO?
A robots.txt file is a text file at the root of your website (yoursite.com/robots.txt) that tells search engine crawlers which pages they can and can't access. It's the first file Google checks before crawling your site. A missing, broken, or misconfigured robots.txt can block Google from indexing your most important pages — or waste your crawl budget on pages that don't matter.
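To see how a crawler interprets these rules, you can test a robots.txt against specific URLs with Python's standard-library `urllib.robotparser`. The rules and URLs below are illustrative:

```python
from urllib.robotparser import RobotFileParser

# Illustrative rules: block /admin/, allow everything else
rules = """\
User-agent: *
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Paths under /admin/ are disallowed; everything else is allowed by default
print(rp.can_fetch("Googlebot", "https://example.com/admin/settings"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))       # True
```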
What happens if my site doesn't have a robots.txt file?
If your site has no robots.txt file (returns 404), search engines will crawl everything they can find. This isn't necessarily bad for small sites, but for larger sites it means Google wastes crawl budget on admin pages, duplicate content, and URLs you don't want indexed. A well-configured robots.txt focuses Google's attention on your most valuable pages.
Should I reference my sitemap in robots.txt?
Yes — you should include a Sitemap: directive in your robots.txt pointing to your XML sitemap. This helps search engines discover your sitemap automatically. Example: 'Sitemap: https://yoursite.com/sitemap.xml'. If your robots.txt doesn't include a Sitemap directive, Google may not find your sitemap unless you submit it manually in Search Console.
What does 'Disallow: /' mean in robots.txt?
'Disallow: /' tells search engines not to crawl any page on your site. This is the most dangerous robots.txt mistake: it makes your entire site invisible to Google. It's often added during development to keep a staging site out of search, then forgotten when the site goes live. If you see it on a production site, fix it immediately.
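You can confirm this behavior with Python's `urllib.robotparser` (the domain below is a placeholder):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("User-agent: *\nDisallow: /\n".splitlines())

# With 'Disallow: /', every URL is blocked, including the homepage
print(rp.can_fetch("Googlebot", "https://example.com/"))            # False
print(rp.can_fetch("Googlebot", "https://example.com/products/1"))  # False
```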
Can robots.txt block pages from appearing in Google?
Robots.txt blocks crawling, not indexing. If a page is linked to from other sites, Google may still index the URL (showing it in results without a snippet). To truly prevent indexing, use a 'noindex' meta tag or X-Robots-Tag header instead. Robots.txt is for controlling crawl behavior, not index behavior.
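For reference, a noindex directive looks like this, either as a meta tag in the page or as an HTTP response header:

```html
<!-- In the <head> of the page you want kept out of the index -->
<meta name="robots" content="noindex">
```

Or as a header: `X-Robots-Tag: noindex`. Note that Google must be able to crawl the page to see the noindex directive, so don't also block that page in robots.txt, or the directive will never be read.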