Elasticsearch cluster status is RED

Hi Community,
I'm using the Elastic Stack on Kubernetes and the ES cluster status is RED, even though all indices are green.
Below is the cluster health output:

[elasticsearch@elasticsearch-master-0 ~]$ curl -XGET 'localhost:9200/_cluster/health?pretty'
{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 10,
  "active_shards" : 15,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 2,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 88.23529411764706
}

Could anyone please help me out with this?

Hello @Yogendra_Pratap_Sing

This is because of the unassigned shards. Please give it some time for them to get assigned.

Just execute the command below, which will retry allocation of the failed shards:

curl -X POST "http://127.0.0.1:9200/_cluster/reroute?retry_failed=true"
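
If the status is still red after that, it can also help to see exactly which shards are still unassigned and why. Something like the following (assuming you run it from the same pod as your other curl commands) should list only the problem shards:

curl -s 'localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED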

Thanks @sudhagar_ramesh for your response. I ran your command and the response was

"acknowledged" : true

but I am still getting the same status from the cluster health API.

This is the indices status:

[elasticsearch@elasticsearch-master-0 ~]$ curl -XGET http://localhost:9200/_cat/indices?v
health status index                       uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .monitoring-es-7-2022.05.17 0L1HYNueQF245i3UR6Tljg   1   1      14258         8041     16.8mb          9.5mb
green  open   filebeat-7.17.3             HmPnIwQBQ9iXuJS3kW3-LA   1   0     148830            0     70.2mb         70.2mb
green  open   apm-7.12.1-transaction      6e6rUeowQPOLQJaR8mBeHA   1   0     189045            0      100mb          100mb
green  open   apm-7.12.1-metric           56Xipem5RvCl9c7DHt-NHw   1   0      54095            0       10mb           10mb
green  open   apm-7.12.1-span             MutIu__0Rve4vS72KT6xsg   1   0     215897            0     49.4mb         49.4mb
green  open   apm-7.12.1-error            azc2O_WqT8CpibocRLBbzA   1   0         41            0      415kb          415kb
green  open   .async-search               FgI1zzpTTteRBvC3dQsLaw   1   1          2            0        2mb            1mb

Could you please share the output of the commands below?

GET /_cluster/allocation/explain 

GET /_cat/indices?v


I believe this is a one-node cluster.

Execute the command below, which will set the number of replicas of each index to zero:

PUT /*/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}
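
Since you are running everything through curl rather than Kibana Dev Tools, the equivalent would be something like:

curl -X PUT 'localhost:9200/*/_settings?pretty' -H 'Content-Type: application/json' -d '
{
  "index" : {
    "number_of_replicas" : 0
  }
}'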

Here is what you asked for, please check:

[elasticsearch@elasticsearch-master-0 ~]$ curl -XGET localhost:9200/_cluster/allocation/explain?pretty
{
  "index" : ".ds-ilm-history-5-2022.05.17-000013",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2022-05-17T10:59:47.050Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "8S3wqTdGRHq7a7XWWHEyzQ",
      "node_name" : "elasticsearch-master-0",
      "transport_address" : "172.31.36.250:9300",
      "node_attributes" : {
        "ml.machine_memory" : "2147483648",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "1073741824",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "replica_after_primary_active",
          "decision" : "NO",
          "explanation" : "primary shard for this replica is not yet active"
        },
        {
          "decider" : "throttling",
          "decision" : "NO",
          "explanation" : "primary shard for this replica is not yet active"
        }
      ]
    },
    {
      "node_id" : "DbnhybKiRjSc4dgBomSjZw",
      "node_name" : "elasticsearch-master-1",
      "transport_address" : "172.31.30.203:9300",
      "node_attributes" : {
        "ml.machine_memory" : "2147483648",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "1073741824",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "replica_after_primary_active",
          "decision" : "NO",
          "explanation" : "primary shard for this replica is not yet active"
        },
        {
          "decider" : "throttling",
          "decision" : "NO",
          "explanation" : "primary shard for this replica is not yet active"
        }
      ]
    },
    {
      "node_id" : "OLvyXR_rTq-H6DqK0OWceQ",
      "node_name" : "elasticsearch-master-2",
      "transport_address" : "172.31.53.196:9300",
      "node_attributes" : {
        "ml.machine_memory" : "2147483648",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "1073741824",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "replica_after_primary_active",
          "decision" : "NO",
          "explanation" : "primary shard for this replica is not yet active"
        },
        {
          "decider" : "throttling",
          "decision" : "NO",
          "explanation" : "primary shard for this replica is not yet active"
        }
      ]
    }
  ]
}

Hello @Yogendra_Pratap_Sing

Please execute this command and check the cluster state. Hope this helps you:

PUT /*/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}
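
Also, the explain output you shared is for a replica shard, and it says the primary is not yet active, so the primary of that index must be unassigned as well. It might be worth asking the explain API about the primary copy directly, something along these lines:

curl -XGET 'localhost:9200/_cluster/allocation/explain?pretty' -H 'Content-Type: application/json' -d '
{
  "index" : ".ds-ilm-history-5-2022.05.17-000013",
  "shard" : 0,
  "primary" : true
}'

That should show the actual reason the primary cannot be allocated.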

@sudhagar_ramesh I ran your command and the response was

"acknowledged" : true

but I am still getting the same status from the cluster health API.

Hello @Yogendra_Pratap_Sing
I think the index ".ds-ilm-history-5-2022.05.17-000013" is still being recovered. To confirm, could you execute this command:

GET _cat/recovery/event_tracking?v

Also, this is a system index, so there is no need to worry too much about the cluster health.
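
If I read the name right, .ds-ilm-history-5-2022.05.17-000013 is a backing index of the ilm-history-5 data stream (the ILM execution history), so you could also look at the data stream itself, for example:

GET _data_stream/ilm-history-5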

@sudhagar_ramesh there is no index with that name:

[elasticsearch@elasticsearch-master-0 ~]$ curl -XGET localhost:9200/_cat/recovery/event_tracking?v                                 
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [event_tracking]","resource.type":"index_or_alias","resource.id":"event_tracking","index_uuid":"_na_","index":"event_tracking"}],"type":"index_not_found_exception","reason":"no such index [event_tracking]","resource.type":"index_or_alias","resource.id":"event_tracking","index_uuid":"_na_","index":"event_tracking"},"status":404}

Hello @Yogendra_Pratap_Sing

Sorry! Please use your own index name:

GET _cat/recovery/.ds-ilm-history-5-2022.05.17-000013?v
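
In curl that would be something like:

curl -XGET 'localhost:9200/_cat/recovery/.ds-ilm-history-5-2022.05.17-000013?v'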

Please perform a cluster restart, which might help this internal system index get allocated in the cluster.

Keep us posted!

@sudhagar_ramesh could you please let me know how I can find the index name? I have these indices in my cluster:

[elasticsearch@elasticsearch-master-0 ~]$ curl -XGET http://localhost:9200/_cat/indices?v
health status index                       uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .monitoring-es-7-2022.05.17 0L1HYNueQF245i3UR6Tljg   1   1      14258         8041     16.8mb          9.5mb
green  open   filebeat-7.17.3             HmPnIwQBQ9iXuJS3kW3-LA   1   0     148830            0     70.2mb         70.2mb
green  open   apm-7.12.1-transaction      6e6rUeowQPOLQJaR8mBeHA   1   0     189045            0      100mb          100mb
green  open   apm-7.12.1-metric           56Xipem5RvCl9c7DHt-NHw   1   0      54095            0       10mb           10mb
green  open   apm-7.12.1-span             MutIu__0Rve4vS72KT6xsg   1   0     215897            0     49.4mb         49.4mb
green  open   apm-7.12.1-error            azc2O_WqT8CpibocRLBbzA   1   0         41            0      415kb          415kb
green  open   .async-search               FgI1zzpTTteRBvC3dQsLaw   1   1          2            0        2mb            1mb

@Yogendra_Pratap_Sing could you perform a cluster restart?
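
By the way, the .ds-ilm-history-5-* index will not show up in a plain _cat/indices call, because backing indices of data streams are hidden. To list hidden indices as well, you can use something like:

curl -XGET 'localhost:9200/_cat/indices?v&expand_wildcards=all'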

Yes @sudhagar_ramesh, I deleted the previous pod and a new one was created automatically, so I hope that does what you wanted.


@sudhagar_ramesh still the same result.

When I try to identify the problem with

curl -XGET localhost:9200/_cluster/allocation/explain?pretty

the output is

{
  "index" : ".ds-ilm-history-5-2022.05.17-000013",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2022-05-17T10:59:47.050Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "8S3wqTdGRHq7a7XWWHEyzQ",
      "node_name" : "elasticsearch-master-0",
      "transport_address" : "172.31.46.214:9300",
      "node_attributes" : {
        "ml.machine_memory" : "2147483648",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "1073741824",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "replica_after_primary_active",
          "decision" : "NO",
          "explanation" : "primary shard for this replica is not yet active"
        },
        {
          "decider" : "throttling",
          "decision" : "NO",
          "explanation" : "primary shard for this replica is not yet active"
        }
      ]
    },
    {
      "node_id" : "DbnhybKiRjSc4dgBomSjZw",
      "node_name" : "elasticsearch-master-1",
      "transport_address" : "172.31.30.203:9300",
      "node_attributes" : {
        "ml.machine_memory" : "2147483648",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "1073741824",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "replica_after_primary_active",
          "decision" : "NO",
          "explanation" : "primary shard for this replica is not yet active"
        },
        {
          "decider" : "throttling",
          "decision" : "NO",
          "explanation" : "primary shard for this replica is not yet active"
        }
      ]
    },
    {
      "node_id" : "OLvyXR_rTq-H6DqK0OWceQ",
      "node_name" : "elasticsearch-master-2",
      "transport_address" : "172.31.53.196:9300",
      "node_attributes" : {
        "ml.machine_memory" : "2147483648",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "1073741824",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "replica_after_primary_active",
          "decision" : "NO",
          "explanation" : "primary shard for this replica is not yet active"
        },
        {
          "decider" : "throttling",
          "decision" : "NO",
          "explanation" : "primary shard for this replica is not yet active"
        }
      ]
    }
  ]
}

primary shard for this replica is not yet active

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.