Elastic rolling restart with a Kibana instance running: index ".kibana_task_manager_1" goes red and master-0 is not restarted

Hi Team,

Elasticsearch version: 7.9.0
ECK version: 1.3.0

Setup: 2 master nodes, 3 data nodes

Steps: After creating a secret in Kubernetes (a GCS bucket service account key stored as a secret), all Elasticsearch pods perform a rolling restart.
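For reference, the secret was created roughly like this (the secret name, key name, and file name below are placeholders, not necessarily the exact ones I used):

```shell
# Store the GCS service-account key as a Kubernetes secret
# (names are illustrative placeholders)
kubectl create secret generic gcs-credentials \
  --from-file=gcs.client.default.credentials_file=service-account.json
```

My understanding is that once this secret is referenced under `spec.secureSettings` in the Elasticsearch manifest, ECK updates the keystore on each node and performs the rolling restart, which is where the issues below appear.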
Use case:
1. A single Kibana instance running. I see a couple of issues:

  • The master-0 pod does not restart.
  • The ".kibana_task_manager_1" index, which Kibana uses, turns red.

2. Two Kibana instances running. I see only one issue:

  • The ".kibana_task_manager_1" index, which Kibana uses, turns red.

3. No Kibana instance running. No issues; everything works fine.

Reason for the red state of ".kibana_task_manager_1", from GET /_cluster/allocation/explain?pretty:

"deciders" : [
  {
    "decider" : "max_retry",
    "decision" : "NO",
    "explanation" : "shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2021-02-02T20:38:09.181Z], failed_attempts[5], failed_nodes[[vLu6DjvbThOuMhICULeDTA, O3YRpWuSTGy06Z5qbUToUw]], delayed=false, details[failed shard on node [O3YRpWuSTGy06Z5qbUToUw]: failed to create shard, failure IOException[failed to obtain in-memory shard lock]; nested: ShardLockObtainFailedException[[.kibana_task_manager_1][0]: obtaining shard lock timed out after 5000ms, previous lock details: [shard creation] trying to lock for [shard creation]]; ], allocation_status[deciders_no]]]"
  }
]
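As the error message itself suggests, the failed allocation can be retried manually once the shard lock is released. This is what I ran to diagnose and recover (host, port, and credentials are placeholders for my cluster; adjust for yours):

```shell
# Inspect why the shard is unassigned
curl -s -k -u "elastic:$ELASTIC_PASSWORD" \
  "https://localhost:9200/_cluster/allocation/explain?pretty"

# Retry allocations that exceeded the max_retry limit,
# as recommended in the explanation above
curl -s -k -u "elastic:$ELASTIC_PASSWORD" -X POST \
  "https://localhost:9200/_cluster/reroute?retry_failed=true"
```

This recovers the index after the fact, but I would like to avoid the red state in the first place, hence the questions below.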
  1. Is it recommended to shut down Kibana before performing a rolling restart?
  2. Is there a recommended way to perform a rolling restart while Kibana is running?
  3. Why doesn't master-0 restart when a single Kibana instance is running?

Can you please let me know?