Version: v8.4.3
Steps to reproduce:
Started crawling across 13 different engines.
9 completed successfully, but for engine number 10 this was the first log message:
The crawler worker responsible for this crawl has failed. The crawl is suspended automatically and will be resumed by another worker. You can check crawler logs for the details on the underlying issue.
Trying to cancel it via the API left it stuck; restarting the cluster did not help either, and the crawl is still stuck.
Is there anything that can be done to restart the Queue Manager?
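For reference, this is roughly the cancel call that was attempted. This is a sketch, assuming the App Search crawler API on a default local deployment; the engine name `my-engine`, the base URL, and the key variable are placeholders for this deployment's actual values:

```shell
# Assumptions: Enterprise Search on localhost:3002, engine "my-engine",
# and a private API key in $PRIVATE_KEY. Adjust all three to your deployment.
ENGINE="my-engine"
BASE="http://localhost:3002"
CANCEL_URL="$BASE/api/as/v1/engines/$ENGINE/crawler/crawl_requests/active/cancel"
echo "$CANCEL_URL"
# The actual cancel request (commented out here; requires a live deployment):
# curl -X POST "$CANCEL_URL" -H "Authorization: Bearer $PRIVATE_KEY"
```

In this case the cancel request was accepted but the crawl request never left its stuck state.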
I'm assuming this is the App Search crawler, not the new Enterprise Search crawler? And when you say stuck, do you have recurring crawls enabled, and/or can you trigger a new crawl?