Crawler Documentation
ThatSEOAgentBot
That SEO Agent uses two distinct request identifiers depending on the operation being performed. This page documents both, explains when each is used, and provides configuration examples for allowing or blocking them.
User agent strings
That SEO Agent makes two types of requests to external URLs. Each uses a different user agent.
ThatSEOAgentBot/1.0 (+https://thatseoagent.com/seo-bot)
Used by the crawl_site tool. Performs a full BFS crawl of your site to detect broken links, deep pages, thin content, and duplicate metadata. Only runs when explicitly triggered by a signed-in user via their MCP client.
Mozilla/5.0 (compatible; SEO-MCP-Bot/1.0; +https://thatseoagent.com)
Used by page audits and all on-page checks (title, meta, schema, E-E-A-T, GEO score, crawlability, security headers). Sends a single GET or HEAD request per URL per tool call.
What the crawler checks
When crawl_site runs, ThatSEOAgentBot performs a BFS crawl starting from your homepage. For each page it visits, it collects:
- HTTP status code and redirect chain
- Page title and meta description (for duplicate detection across the site)
- Canonical URL and noindex directive
- H1 headings
- Word count (HTML stripped)
- All internal links (used to discover the next pages in the queue)
- BFS depth (number of clicks from the homepage)
The bot does not execute JavaScript, render pages, or download assets. It parses raw HTML only.
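The crawl order described above is a standard breadth-first traversal. As a minimal sketch of that logic (using a hypothetical in-memory site map in place of real HTTP requests and HTML parsing — not the bot's actual code):

```python
from collections import deque

# Hypothetical site map: URL -> internal links found on that page.
SITE = {
    "/": ["/about", "/blog"],
    "/about": ["/"],
    "/blog": ["/blog/post-1", "/missing"],
    "/blog/post-1": ["/"],
}

def bfs_crawl(start="/"):
    """Return {url: depth}, where depth is the number of clicks from the start page."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        url = queue.popleft()
        # In the real crawler, this is where status code, title, meta,
        # canonical, H1s, and word count would be collected for the page.
        for link in SITE.get(url, []):
            if link not in depths:  # discover each page only once
                depths[link] = depths[url] + 1
                queue.append(link)
    return depths

print(bfs_crawl())
```

In this toy site map, /blog/post-1 is reported at depth 2: two clicks from the homepage.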
Crawl behavior
| Setting | Behavior |
| --- | --- |
| Concurrency | 3 parallel requests maximum |
| Inter-batch delay | 300ms between batches to avoid overloading your server |
| Timeout per page | 12 seconds hard timeout (site crawler) / 30 seconds (page audit fetcher) |
| Redirects | Followed automatically |
| Content type | HTML only — sends Accept: text/html, application/xhtml+xml |
| Robots.txt | Fetched and parsed before crawling. Skips any URL disallowed for User-agent: * or User-agent: ThatSEOAgentBot |
| In-memory cache | Page audit responses are cached for 60 seconds within a single agent turn. Multiple tools checking the same URL share one HTTP request. |
| Trigger | On-demand only. Crawls and page audits only run when a signed-in user explicitly requests them via their MCP client. |
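The concurrency and pacing rows above amount to fetching in batches of three with a pause between batches. A rough sketch of that loop (the stubbed fetch function and helper names are illustrative, not the service's actual implementation):

```python
import asyncio

async def fetch(url):
    # Stand-in for a real HTTP GET (subject to the 12-second timeout).
    await asyncio.sleep(0.01)
    return url, 200

async def crawl_batches(urls, batch_size=3, delay=0.3):
    """Fetch at most 3 URLs in parallel, sleeping 300 ms between batches."""
    results = []
    for i in range(0, len(urls), batch_size):
        batch = urls[i:i + batch_size]
        results += await asyncio.gather(*(fetch(u) for u in batch))
        if i + batch_size < len(urls):
            await asyncio.sleep(delay)  # inter-batch delay from the table above
    return results

results = asyncio.run(crawl_batches([f"/page-{n}" for n in range(7)]))
print(len(results))  # 7
```

Batching (rather than a single large pool of in-flight requests) keeps the load on your server bounded and predictable.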
How to allow ThatSEOAgentBot
If your server or firewall blocks unknown user agents, add an exception for both strings. Examples for robots.txt, a firewall rule, Nginx, and Apache follow.
# robots.txt
User-agent: ThatSEOAgentBot
Allow: /

User-agent: SEO-MCP-Bot
Allow: /
# Firewall Rule: Skip for ThatSEOAgentBot
# Field: http.user_agent
# Operator: contains
# Value: ThatSEOAgentBot
# Action: Skip
# Add a second rule for SEO-MCP-Bot
# Inside your server block
if ($http_user_agent ~* "ThatSEOAgentBot|SEO-MCP-Bot") {
# Remove any rate-limit or block rules
set $skip_limit 1;
}

# In .htaccess or VirtualHost
<If "%{HTTP_USER_AGENT} =~ /ThatSEOAgentBot|SEO-MCP-Bot/">
# Exempt from mod_evasive or rate limits
</If>
How to block ThatSEOAgentBot
If you do not want That SEO Agent to crawl your site, add a disallow rule to your robots.txt. The crawler reads and respects this file before starting any crawl.
User-agent: ThatSEOAgentBot
Disallow: /
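You can sanity-check the rule locally with Python's standard-library robots.txt parser before deploying it (a generic check, not part of That SEO Agent itself):

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
# parse() accepts the file's lines directly, so no network request is needed.
rp.parse([
    "User-agent: ThatSEOAgentBot",
    "Disallow: /",
])

print(rp.can_fetch("ThatSEOAgentBot", "https://example.com/any-page"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/any-page"))     # True
```

The second check confirms the rule is scoped to ThatSEOAgentBot only: with no User-agent: * group, other crawlers remain allowed by default.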
Page audit requests (SEO-MCP-Bot) can only be triggered for URLs you own and have connected in your That SEO Agent dashboard. A signed-in user with access to your site would need to initiate them.
Verifying the user agent
Legitimate requests from That SEO Agent always match one of the two user agent strings listed above exactly. Any request claiming to be ThatSEOAgentBot with a different string is not from this service.
- Site crawler requests include Accept: text/html, application/xhtml+xml
- Page audit requests use a standard GET or HEAD method
- All requests originate from Vercel serverless infrastructure (no fixed IP range)
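A log-filtering script can therefore check for an exact match against the two strings (a sketch; the function name is ours, not part of the service):

```python
# The only two user agent strings That SEO Agent sends.
VALID_AGENTS = {
    "ThatSEOAgentBot/1.0 (+https://thatseoagent.com/seo-bot)",
    "Mozilla/5.0 (compatible; SEO-MCP-Bot/1.0; +https://thatseoagent.com)",
}

def is_genuine(user_agent: str) -> bool:
    """Exact match only: any variation indicates a spoofed request."""
    return user_agent in VALID_AGENTS

print(is_genuine("ThatSEOAgentBot/1.0 (+https://thatseoagent.com/seo-bot)"))  # True
print(is_genuine("ThatSEOAgentBot/2.0"))                                      # False
```

Because requests come from serverless infrastructure with no fixed IP range, the exact user agent string is the primary verification signal available in your logs.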
Questions or abuse reports
If you believe That SEO Agent is making requests in violation of these specifications, contact us at support@thatseoagent.com.