Elasticsearch continuously in yellow status - unassigned shards

Hello all!
Hope you are doing well.

Our cluster status is yellow, with unassigned replica shards.

Elasticsearch version: 6.8.23

Cluster status:

[root@d38-pan020 ~]# es_cluster.sh health
{
  "cluster_name" : "__pan_cluster__",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 4,
  "active_primary_shards" : 340,
  "active_shards" : 485,
  "relocating_shards" : 0,
  "initializing_shards" : 32,
  "unassigned_shards" : 171,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 70.49418604651163
}
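For context, the `active_shards_percent_as_number` field is simply the active shards divided by the total of active, initializing, and unassigned shards. A quick sketch of the arithmetic, using the figures from the health output above:

```python
# Figures copied from the cluster health output above.
active = 485
initializing = 32
unassigned = 171

total = active + initializing + unassigned  # 688 shards tracked by the cluster
percent_active = active / total * 100
print(percent_active)  # matches active_shards_percent_as_number
```

So 171 unassigned plus 32 initializing shards account for the missing ~29.5%.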

Unassigned shards (excerpt):

pan_20230303_68_traffic_017507002992-0 0  r UNASSIGNED                              
pan_20230303_68_traffic_017507002992-0 1  r UNASSIGNED                              
pan_20230303_68_traffic_017507002992-0 10 r UNASSIGNED                              
pan_20230303_68_traffic_017507002992-0 11 r UNASSIGNED                              
pan_20230303_68_traffic_017507002992-0 2  r UNASSIGNED                              
pan_20230303_68_traffic_017507002992-0 3  r UNASSIGNED                              
pan_20230303_68_traffic_017507002992-0 4  r UNASSIGNED                              
pan_20230303_68_traffic_017507002992-0 5  r UNASSIGNED                              
pan_20230303_68_traffic_017507002992-0 6  r UNASSIGNED                              
pan_20230303_68_traffic_017507002992-0 7  r UNASSIGNED                              
pan_20230303_68_traffic_017507002992-0 8  r UNASSIGNED                              
pan_20230303_68_traffic_017507002992-0 9  r UNASSIGNED  

Reason for the unassigned shards, from the allocation explain API:

{
  "index" : "pan_20230303_68_traffic_017507002992-0",
  "shard" : 11,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NODE_LEFT",
    "at" : "2023-03-07T08:31:22.629Z",
    "details" : "node_left [FbYhm730RuKShJVmjgxBpA]",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "throttled",
  "allocate_explanation" : "allocation temporarily throttled",
  "node_allocation_decisions" : [
    {
      "node_id" : "FbYhm730RuKShJVmjgxBpA",
      "node_name" : "017507002995-2",
      "transport_address" : "127.0.0.1:9525",
      "node_attributes" : {
        "serial" : "017507002995"
      },
      "node_decision" : "throttled",
      "deciders" : [
        {
          "decider" : "throttling",
          "decision" : "THROTTLE",
          "explanation" : "reached the limit of incoming shard recoveries [32], cluster setting [cluster.routing.allocation.node_concurrent_incoming_recoveries=32] (can also be set via [cluster.routing.allocation.node_concurrent_recoveries])"
        }
      ]
    },
    {
      "node_id" : "5Q5h0fs2SlOIoUSUQJvxtQ",
      "node_name" : "017507002992-1",
      "transport_address" : "127.0.0.1:9422",
      "node_attributes" : {
        "serial" : "017507002992"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[pan_20230303_68_traffic_017507002992-0][11], node[5Q5h0fs2SlOIoUSUQJvxtQ], [P], s[STARTED], a[id=fyP_jsEaQHmbcM07-zcF1Q]]"
        },
        {
          "decider" : "throttling",
          "decision" : "THROTTLE",
          "explanation" : "reached the limit of outgoing shard recoveries [32] on the node [5Q5h0fs2SlOIoUSUQJvxtQ] which holds the primary, cluster setting [cluster.routing.allocation.node_concurrent_outgoing_recoveries=32] (can also be set via [cluster.routing.allocation.node_concurrent_recoveries])"
        }
      ]
    },
    {
      "node_id" : "MJGvQmiYSq2jzMTcZgiapQ",
      "node_name" : "017507002992-2",
      "transport_address" : "127.0.0.1:9522",
      "node_attributes" : {
        "serial" : "017507002992"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"017507002992-1 OR 017507002995-2\"]"
        },
        {
          "decider" : "throttling",
          "decision" : "THROTTLE",
          "explanation" : "reached the limit of outgoing shard recoveries [32] on the node [5Q5h0fs2SlOIoUSUQJvxtQ] which holds the primary, cluster setting [cluster.routing.allocation.node_concurrent_outgoing_recoveries=32] (can also be set via [cluster.routing.allocation.node_concurrent_recoveries])"
        }
      ]
    },
    {
      "node_id" : "W7yxCZm3TO6-qU8Tf10nbA",
      "node_name" : "017507002995-1",
      "transport_address" : "127.0.0.1:9425",
      "node_attributes" : {
        "serial" : "017507002995"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"017507002992-1 OR 017507002995-2\"]"
        },
        {
          "decider" : "throttling",
          "decision" : "THROTTLE",
          "explanation" : "reached the limit of outgoing shard recoveries [32] on the node [5Q5h0fs2SlOIoUSUQJvxtQ] which holds the primary, cluster setting [cluster.routing.allocation.node_concurrent_outgoing_recoveries=32] (can also be set via [cluster.routing.allocation.node_concurrent_recoveries])"
        }
      ]
    }
  ]
}

{
  "index" : "pan_20230303_68_traffic_017507002992-0",
  "shard" : 2,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NODE_LEFT",
    "at" : "2023-03-07T08:31:22.629Z",
    "details" : "node_left [FbYhm730RuKShJVmjgxBpA]",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "throttled",
  "allocate_explanation" : "allocation temporarily throttled",
  "node_allocation_decisions" : [
    {
      "node_id" : "FbYhm730RuKShJVmjgxBpA",
      "node_name" : "017507002995-2",
      "transport_address" : "127.0.0.1:9525",
      "node_attributes" : {
        "serial" : "017507002995"
      },
      "node_decision" : "throttled",
      "deciders" : [
        {
          "decider" : "throttling",
          "decision" : "THROTTLE",
          "explanation" : "reached the limit of incoming shard recoveries [32], cluster setting [cluster.routing.allocation.node_concurrent_incoming_recoveries=32] (can also be set via [cluster.routing.allocation.node_concurrent_recoveries])"
        }
      ]
    },
    {
      "node_id" : "5Q5h0fs2SlOIoUSUQJvxtQ",
      "node_name" : "017507002992-1",
      "transport_address" : "127.0.0.1:9422",
      "node_attributes" : {
        "serial" : "017507002992"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[pan_20230303_68_traffic_017507002992-0][2], node[5Q5h0fs2SlOIoUSUQJvxtQ], [P], s[STARTED], a[id=3jD7cYW-RZecJjS_6JNvuA]]"
        },
        {
          "decider" : "throttling",
          "decision" : "THROTTLE",
          "explanation" : "reached the limit of outgoing shard recoveries [32] on the node [5Q5h0fs2SlOIoUSUQJvxtQ] which holds the primary, cluster setting [cluster.routing.allocation.node_concurrent_outgoing_recoveries=32] (can also be set via [cluster.routing.allocation.node_concurrent_recoveries])"
        }
      ]
    },
    {
      "node_id" : "MJGvQmiYSq2jzMTcZgiapQ",
      "node_name" : "017507002992-2",
      "transport_address" : "127.0.0.1:9522",
      "node_attributes" : {
        "serial" : "017507002992"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"017507002992-1 OR 017507002995-2\"]"
        },
        {
          "decider" : "throttling",
          "decision" : "THROTTLE",
          "explanation" : "reached the limit of outgoing shard recoveries [32] on the node [5Q5h0fs2SlOIoUSUQJvxtQ] which holds the primary, cluster setting [cluster.routing.allocation.node_concurrent_outgoing_recoveries=32] (can also be set via [cluster.routing.allocation.node_concurrent_recoveries])"
        }
      ]
    },
    {
      "node_id" : "W7yxCZm3TO6-qU8Tf10nbA",
      "node_name" : "017507002995-1",
      "transport_address" : "127.0.0.1:9425",
      "node_attributes" : {
        "serial" : "017507002995"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"017507002992-1 OR 017507002995-2\"]"
        },
        {
          "decider" : "throttling",
          "decision" : "THROTTLE",
          "explanation" : "reached the limit of outgoing shard recoveries [32] on the node [5Q5h0fs2SlOIoUSUQJvxtQ] which holds the primary, cluster setting [cluster.routing.allocation.node_concurrent_outgoing_recoveries=32] (can also be set via [cluster.routing.allocation.node_concurrent_recoveries])"
        }
      ]
    }
  ]
}
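Note that the explain output above points at allocation throttling rather than a hard failure: `can_allocate` is `throttled`, and the two nodes permitted by the `index.routing.allocation.include` filter are both at the recovery limit of 32. A sketch of commands to watch the backlog drain and, if needed, temporarily raise the recovery limit (host and port are assumptions; adjust to your cluster):

```shell
# List unassigned shards together with the reason column.
curl -s 'localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason' \
  | grep UNASSIGNED

# Watch ongoing recoveries; yellow status should clear as these finish.
curl -s 'localhost:9200/_cat/recovery?v&active_only=true'

# Optional: temporarily raise the concurrent-recovery limit (revert afterwards).
curl -s -X PUT 'localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"cluster.routing.allocation.node_concurrent_recoveries": 64}}'
```

Given that `unassigned_info.reason` is `NODE_LEFT` and `last_allocation_status` is `no_attempt`, these replicas appear to be queued behind the 32 recoveries already running, so the throttling may simply be expected behaviour while the node that left catches up.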

What could be the problem here?

Thank you very much in advance!

Elasticsearch version 6.8.23 is EOL and no longer supported. Please upgrade ASAP.

(This is an automated response from your friendly Elastic bot. Please report this post if you have any suggestions or concerns :elasticheart: )

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.