When I run the crawler with only one domain configured, it completes successfully, even when that domain has thousands of pages.
After that, I delete the domain, add a new one, and start the crawler again.
This also works, but it is a very manual and unrealistic process.
However, if I configure multiple domains at once (the same ones that worked individually), the crawler always fails. What is the reason? I would expect the crawler to behave the same whether one domain or several are configured.