Kibana degraded after upgrade

Hi,

I upgraded my ELK stack running on self-hosted Kubernetes. My Elasticsearch cluster is green, but my Kibana instance is spewing error logs like:

kibana [2023-11-28T15:24:58.959+00:00][ERROR][plugins.taskManager] Failed to poll for work: ResponseError: {"took":4,"timed_out":false,"
kibana [2023-11-28T15:25:01.961+00:00][ERROR][plugins.taskManager] Failed to poll for work: ResponseError: {"took":4,"timed_out":false,"
kibana [2023-11-28T15:25:04.960+00:00][ERROR][plugins.taskManager] Failed to poll for work: ResponseError: {"took":6,"timed_out":false,"
etc.

If I go to Kibana's /status page, it shows Kibana in a yellow state, and in the logs it sometimes flaps to unavailable and back to yellow.
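In case it helps anyone reproduce, something like the following can pull the same information from the API (a sketch only, not an exact transcript: it assumes Kibana is reachable on localhost:5601, e.g. via kubectl port-forward, and basic auth as the elastic user, so adjust scheme, host, and credentials for your deployment):

# NOTE: host and credentials below are placeholders for this example.
# Overall Kibana status (same information as the /status page)
curl -s -u elastic:$ELASTIC_PASSWORD "http://localhost:5601/api/status"

# Task Manager health, since the errors come from plugins.taskManager
curl -s -u elastic:$ELASTIC_PASSWORD "http://localhost:5601/api/task_manager/_health"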

I saw posts similar to mine, but did not see a resolution in any of them.

FWIW, when I upgraded, the vast majority of the issues I encountered were related to security. The forced "secure by default" posture in ELK 8.x+ is very frustrating.

Thankful for any help I can get on this.

The problem turned out to be this GitHub issue: "Upgrade to 8.7.0 hitting cluster_shard_limit_exceeded leads to write blocked index even after shard limit resolution" (elastic/kibana issue #155136), even though we were nowhere near the shard limit. Somehow, index.blocks.write got set to true on an index, specifically .kibana_task_manager_7.17.1_001. Removing the block fixed the problem.
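For anyone who hits the same thing, this is roughly what checking for and clearing the block looks like (a sketch only, not an exact transcript: it assumes Elasticsearch is reachable on localhost:9200 with basic auth as the elastic user; adjust scheme, host, credentials, and index name for your cluster, and add --cacert or -k if TLS is enabled):

# NOTE: host and credentials below are placeholders for this example.
# Check whether the write block is set on the task manager index
curl -s -u elastic:$ELASTIC_PASSWORD \
  "http://localhost:9200/.kibana_task_manager_7.17.1_001/_settings?flat_settings=true" \
  | grep blocks.write

# Remove the block; setting index.blocks.write to null deletes the setting
# entirely (setting it to false would also unblock writes)
curl -s -u elastic:$ELASTIC_PASSWORD -X PUT \
  -H "Content-Type: application/json" \
  "http://localhost:9200/.kibana_task_manager_7.17.1_001/_settings" \
  -d '{"index.blocks.write": null}'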
