Web Crawler Failed HTTP request: Unable to request "<domain>" because it resolved to only private/invalid addresses v2

An issue with the same subject was raised in April 2020.

orhantoy (Elastic Team Member) advised: "Yes, that's the current, default behavior and it will become configurable in the next minor release."

Has this become configurable?

Have you tried:

crawler.security.dns.allow_loopback_access: true
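
For context, a minimal sketch of how this might look in a self-managed deployment, assuming the crawler reads its settings from `config/enterprise-search.yml` (the file path and the `allow_private_networks_access` setting are assumptions here; check them against your version's crawler configuration reference):

```yaml
# Hypothetical excerpt of config/enterprise-search.yml (path assumed).
# allow_loopback_access covers loopback addresses (127.0.0.0/8);
# since the error mentions private addresses, the related
# private-networks setting (assumed name) may also be relevant.
crawler.security.dns.allow_loopback_access: true
crawler.security.dns.allow_private_networks_access: true
```

A restart of the service would typically be needed for the change to take effect.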


I think this is now answered here: Add a domain to get started - validate domain - failed - #2 by ross.bell.