---
title: "What happens if Monsidobot is disallowed in robots.txt?"
date: "2026-03-10T14:54:17+00:00"
summary:
image:
type: "article"
url: "/web-governance/help/96466-what-happens-if-monsidobot-disallowed-robotstxt"
id: "a3971bbc-0482-45fe-aa84-d226070f9b66"
---

Why do I see requests from Monsidobot even though the link is disallowed in robots.txt?
----------------------------------------------------------------------------------------

Monsidobot makes the request because Acquia Web Governance must verify the status of the link. Unless you are a client, Acquia has no interest in the content of your website and will not save any data or use it in any way beyond the response code needed to verify the link's status.

Advanced information about Monsidobot requests
----------------------------------------------

*   Monsidobot only respects `crawl-delay` in `robots.txt` if the _connections per minute_ setting on the domain is NOT set.
*   If the _connections per minute_ setting IS set for the domain, the crawler ignores the `crawl-delay` in `robots.txt` and always uses that setting.
*   If the _connections per minute_ setting is NOT set, the crawler looks for a `crawl-delay` declared for the user agent `monsidobot`. If that does not exist, the crawler falls back to the one declared for `googlebot`.
*   `crawl-delay` is clamped to a minimum of 2 seconds and a maximum of 10 seconds.

To summarize: Monsidobot never waits less than 2 seconds between requests, even if a site requests less, in order to avoid overwhelming servers; and it never waits more than 10 seconds to continue the crawl, even if a site is overly conservative. Delays between 2 and 10 seconds are respected exactly; outside that range, the system overrides the request. The sketch below walks through this resolution order.
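
The following is a minimal Python sketch of that logic, under stated assumptions: the function name and parameters are hypothetical, and the behavior when no `crawl-delay` is declared for either user agent is not specified in this article. Only the precedence rules and the 2 to 10 second clamp come from the documentation above.

```python
def effective_crawl_delay(connections_per_minute=None,
                          monsidobot_delay=None,
                          googlebot_delay=None):
    """Resolve the delay (in seconds) between two Monsidobot requests.

    Hypothetical names for illustration; only the precedence rules and
    the 2-10 second clamp come from the documentation above.
    """
    # 1. A connections-per-minute setting on the domain always wins;
    #    any crawl-delay in robots.txt is ignored entirely.
    if connections_per_minute is not None:
        return 60.0 / connections_per_minute

    # 2. Otherwise, prefer the crawl-delay declared for the `monsidobot`
    #    user agent, falling back to the one declared for `googlebot`.
    requested = monsidobot_delay if monsidobot_delay is not None else googlebot_delay
    if requested is None:
        # Neither user agent declares a crawl-delay; the article does not
        # specify a fallback, so leave it to the crawler's own default.
        return None

    # 3. Clamp the requested delay to the documented 2-10 second range.
    return min(max(float(requested), 2.0), 10.0)


print(effective_crawl_delay(connections_per_minute=30))  # 2.0  (setting wins)
print(effective_crawl_delay(monsidobot_delay=5))         # 5.0  (respected exactly)
print(effective_crawl_delay(googlebot_delay=60))         # 10.0 (capped at the maximum)
print(effective_crawl_delay(monsidobot_delay=0.5))       # 2.0  (raised to the minimum)
```

Note how the clamp applies only to values read from `robots.txt`; a _connections per minute_ setting configured on the domain is used as-is, which matches the rule that the setting always takes precedence.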