Unassigned.reason CLUSTER_RECOVERED

My cluster health status is only yellow instead of green. This is the output of GET _cluster/health:

{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 22,
  "active_shards" : 22,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 4,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 84.61538461538461
}

So I checked my shards:

GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state
index                                                         shard prirep state      node  unassigned.reason
0002.attachments                                              0     r      UNASSIGNED       CLUSTER_RECOVERED
0001.attachments                                              0     r      UNASSIGNED       CLUSTER_RECOVERED
0001.metadata                                                 0     r      UNASSIGNED       CLUSTER_RECOVERED
0002.metadata                                                 0     r      UNASSIGNED       CLUSTER_RECOVERED
.ds-ilm-history-5-2023.02.01-000014                           0     p      STARTED    FM11L 
.ds-ilm-history-5-2023.01.02-000012                           0     p      STARTED    FM11L 
.kibana-event-log-8.2.0-000010                                0     p      STARTED    FM11L 
.geoip_databases                                              0     p      STARTED    FM11L 
.kibana_task_manager_8.2.0_001                                0     p      STARTED    FM11L 
.kibana_8.2.0_001                                             0     p      STARTED    FM11L 
.tasks                                                        0     p      STARTED    FM11L 
0002.attachments                                              0     p      STARTED    FM11L 
.security-7                                                   0     p      STARTED    FM11L 
0001.attachments                                              0     p      STARTED    FM11L 
0001.metadata                                                 0     p      STARTED    FM11L 
.kibana-event-log-8.2.0-000007                                0     p      STARTED    FM11L 
.kibana-event-log-8.2.0-000009                                0     p      STARTED    FM11L 
.kibana_security_session_1                                    0     p      STARTED    FM11L 
.kibana-event-log-8.2.0-000008                                0     p      STARTED    FM11L 
.ds-ilm-history-5-2022.11.26-000010                           0     p      STARTED    FM11L 
.apm-agent-configuration                                      0     p      STARTED    FM11L 
.ds-.logs-deprecation.elasticsearch-default-2023.03.03-000017 0     p      STARTED    FM11L 
.apm-custom-link                                              0     p      STARTED    FM11L 
0002.metadata                                                 0     p      STARTED    FM11L 
.ds-ilm-history-5-2023.03.03-000016                           0     p      STARTED    FM11L 
.ds-.logs-deprecation.elasticsearch-default-2023.02.01-000015 0     p      STARTED    FM11L 

There I found unassigned shards.
What is interesting to me is that further down I also find the unassigned indices assigned to node FM11L.

Why do I find the same indices listed as both assigned and unassigned?
How can I fix that situation?

You're calling the cat shards API, which shows you the allocation of all shards along with their index names.
For example, the index 0001.attachments has two shards: one primary and one replica. The primary shard is allocated properly, as seen further down in your output, but the replica shard is the one that is failing to allocate.
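You can also point the cat shards API at a single index to see both of its rows side by side, using the same columns as your original request:

GET _cat/shards/0001.attachments?v=true&h=index,shard,prirep,state,node,unassigned.reason

In your output above, that index's primary row is STARTED on FM11L while its replica row is UNASSIGNED with reason CLUSTER_RECOVERED.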

Take a look at the allocation explain API any time you need insight into allocation issues.

For example:

GET _cluster/allocation/explain?pretty
{
  "index": "0001.metadata",
  "shard": 0,
  "primary": false
}

This will give you better information as to why the shard isn't allocating properly.
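In your case the health output shows number_of_nodes: 1, so the explain output will very likely say that a replica cannot be allocated to the same node that already holds its primary. If you plan to stay on a single node, one common way to get back to green is to drop the replica count on the affected indices. A minimal sketch, assuming those four indices are the only ones with unassigned replicas:

PUT 0001.attachments,0001.metadata,0002.attachments,0002.metadata/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}

Alternatively, adding a second data node lets the replicas allocate on their own, and the cluster turns green without changing any settings.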
