I upgraded my ELK stack running on self-hosted Kubernetes. My Elasticsearch cluster is green, but my Kibana instance is spewing error logs like:
kibana [2023-11-28T15:24:58.959+00:00][ERROR][plugins.taskManager] Failed to poll for work: ResponseError: {"took":4,"timed_out":false,"
kibana [2023-11-28T15:25:01.961+00:00][ERROR][plugins.taskManager] Failed to poll for work: ResponseError: {"took":4,"timed_out":false,"
kibana [2023-11-28T15:25:04.960+00:00][ERROR][plugins.taskManager] Failed to poll for work: ResponseError: {"took":6,"timed_out":false,"
etc.
If I go to /status, Kibana reports a yellow state, and in the logs it sometimes flaps to unavailable and back to yellow.
I found similar posts to my own, but none of them had a resolution:
FWIW, when I upgraded, the vast majority of the issues I encountered were security-related. The forced "secure by default" in ELK 8.x+ is very frustrating.
even though we were nowhere near the limit. Somehow, `index.blocks.write` got set to `true` on one index, specifically `.kibana_task_manager_7.17.1_001`. Removing the block fixed the problem.
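For anyone hitting the same thing, here is a sketch of how to inspect and clear the block via the Elasticsearch REST API. The index name is the one from this thread; the host, credentials, and `-k` flag are assumptions for a typical self-signed 8.x setup, so adjust for your cluster:

```shell
# Show any index-level blocks on the task manager index
curl -u elastic:$ES_PASSWORD -k \
  "https://localhost:9200/.kibana_task_manager_7.17.1_001/_settings?filter_path=*.settings.index.blocks&pretty"

# Clear the write block (setting it to null removes the setting)
curl -u elastic:$ES_PASSWORD -k -X PUT \
  -H 'Content-Type: application/json' \
  "https://localhost:9200/.kibana_task_manager_7.17.1_001/_settings" \
  -d '{"index.blocks.write": null}'
```

After clearing the block, the task manager errors stopped without a Kibana restart in my case, but restarting the pod is a safe follow-up if the status keeps flapping.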