Elastic Shards are Not Balanced Across Nodes

Hi Team,

Initially we had a cluster of two nodes with six shards. Later we decided to add one more node to the cluster. After adding the node we can see that the cluster is in a green state, but the shards are not distributed uniformly across the nodes [node1 (3 shards), node2 (2 shards), node3 (1 shard)].

Could you please let us know if this is fine or whether we need to do anything.

As per my knowledge, shards should be uniformly distributed. Please correct me if I am wrong.

Thanks,
Debasis

What version of Elasticsearch are you running?

What is the full output of the cat shards API?

What is the full output of the cluster stats API?

@Christian_Dahlqvist Thanks for the response. We are using Elastic 8.9.2.

Please find cluster stats as below.

GET /_cluster/stats

{
  "_nodes": {
    "total": 3,
    "successful": 3,
    "failed": 0
  },
  "cluster_name": "elasticdemo",
  "cluster_uuid": "Fjxj0fPmTpGywf8TNPQTsg",
  "timestamp": 1715752440054,
  "status": "green",
  "indices": {
    "count": 59,
    "shards": {
      "total": 137,
      "primaries": 82,
      "replication": 0.6707317073170732,
      "index": {
        "shards": {
          "min": 2,
          "max": 6,
          "avg": 2.3220338983050848
        },
        "primaries": {
          "min": 1,
          "max": 6,
          "avg": 1.3898305084745763
        },
        "replication": {
          "min": 0,
          "max": 2,
          "avg": 0.9322033898305084
        }
      }
    },
    "docs": {
      "count": 1467348256,
      "deleted": 259081764
    },
    "store": {
      "size_in_bytes": 1398000668508,
      "total_data_set_size_in_bytes": 1398000668508,
      "reserved_in_bytes": 0
    },
    "fielddata": {
      "memory_size_in_bytes": 59560,
      "evictions": 0,
      "global_ordinals": {
        "build_time_in_millis": 5563
      }
    },
    "query_cache": {
      "memory_size_in_bytes": 76334782,
      "total_count": 3049674,
      "hit_count": 2929874,
      "miss_count": 119800,
      "cache_size": 1006,
      "cache_count": 1419,
      "evictions": 413
    },
    "completion": {
      "size_in_bytes": 0
    },
    "segments": {
      "count": 784,
      "memory_in_bytes": 0,
      "terms_memory_in_bytes": 0,
      "stored_fields_memory_in_bytes": 0,
      "term_vectors_memory_in_bytes": 0,
      "norms_memory_in_bytes": 0,
      "points_memory_in_bytes": 0,
      "doc_values_memory_in_bytes": 0,
      "index_writer_memory_in_bytes": 4460328,
      "version_map_memory_in_bytes": 68166,
      "fixed_bit_set_memory_in_bytes": 4992,
      "max_unsafe_auto_id_timestamp": 1715731207070,
      "file_sizes": {}
    },
    "mappings": {
      "total_field_count": 20210,
      "total_deduplicated_field_count": 13678,
      "total_deduplicated_mapping_size_in_bytes": 89668,
      "field_types": [
        {
          "name": "alias",
          "count": 89,
          "index_count": 2,
          "script_count": 0
        },
        {
          "name": "binary",
          "count": 2,
          "index_count": 2,
          "script_count": 0
        },
        {
          "name": "boolean",
          "count": 289,
          "index_count": 30,
          "script_count": 0
        },
        {
          "name": "constant_keyword",
          "count": 9,
          "index_count": 3,
          "script_count": 0
        },
        {
          "name": "date",
          "count": 592,
          "index_count": 45,
          "script_count": 0
        },
        {
          "name": "date_range",
          "count": 6,
          "index_count": 6,
          "script_count": 0
        },
        {
          "name": "double",
          "count": 85,
          "index_count": 8,
          "script_count": 0
        },
        {
          "name": "flattened",
          "count": 87,
          "index_count": 7,
          "script_count": 0
        },
        {
          "name": "float",
          "count": 165,
          "index_count": 17,
          "script_count": 0
        },
        {
          "name": "geo_point",
          "count": 33,
          "index_count": 4,
          "script_count": 0
        },
        {
          "name": "half_float",
          "count": 56,
          "index_count": 14,
          "script_count": 0
        },
        {
          "name": "integer",
          "count": 233,
          "index_count": 13,
          "script_count": 0
        },
        {
          "name": "ip",
          "count": 222,
          "index_count": 6,
          "script_count": 0
        },
        {
          "name": "keyword",
          "count": 9362,
          "index_count": 45,
          "script_count": 0
        },
        {
          "name": "long",
          "count": 4590,
          "index_count": 40,
          "script_count": 0
        },
        {
          "name": "match_only_text",
          "count": 333,
          "index_count": 4,
          "script_count": 0
        },
        {
          "name": "nested",
          "count": 80,
          "index_count": 14,
          "script_count": 0
        },
        {
          "name": "object",
          "count": 3468,
          "index_count": 43,
          "script_count": 0
        },
        {
          "name": "scaled_float",
          "count": 27,
          "index_count": 7,
          "script_count": 0
        },
        {
          "name": "short",
          "count": 203,
          "index_count": 1,
          "script_count": 0
        },
        {
          "name": "text",
          "count": 187,
          "index_count": 24,
          "script_count": 0
        },
        {
          "name": "version",
          "count": 9,
          "index_count": 9,
          "script_count": 0
        },
        {
          "name": "wildcard",
          "count": 83,
          "index_count": 4,
          "script_count": 0
        }
      ],
      "runtime_field_types": []
    },
    "analysis": {
      "char_filter_types": [],
      "tokenizer_types": [],
      "filter_types": [],
      "analyzer_types": [],
      "built_in_char_filters": [],
      "built_in_tokenizers": [],
      "built_in_filters": [],
      "built_in_analyzers": []
    },
    "versions": [
      {
        "version": "8.9.2",
        "index_count": 59,
        "primary_shard_count": 82,
        "total_primary_bytes": 1396829132432
      }
    ],
    "search": {
      "total": 2787771,
      "queries": {
        "match_phrase": 133,
        "bool": 2774664,
        "terms": 1137449,
        "prefix": 790,
        "match": 51138,
        "match_phrase_prefix": 14,
        "match_all": 5,
        "exists": 1416622,
        "range": 1200339,
        "term": 1677863,
        "nested": 94,
        "simple_query_string": 219183,
        "wildcard": 77
      },
      "sections": {
        "highlight": 93,
        "stored_fields": 180,
        "runtime_mappings": 9398,
        "query": 2775037,
        "script_fields": 180,
        "terminate_after": 323,
        "_source": 24773,
        "pit": 2721,
        "fields": 9295,
        "collapse": 10936,
        "aggs": 168693
      }
    }
  },
  "nodes": {
    "count": {
      "total": 3,
      "coordinating_only": 0,
      "data": 3,
      "data_cold": 3,
      "data_content": 3,
      "data_frozen": 3,
      "data_hot": 3,
      "data_warm": 3,
      "index": 0,
      "ingest": 3,
      "master": 3,
      "ml": 3,
      "remote_cluster_client": 3,
      "search": 0,
      "transform": 3,
      "voting_only": 0
    },
    "versions": [
      "8.9.2"
    ],
    "os": {
      "available_processors": 30,
      "allocated_processors": 30,
      "names": [
        {
          "name": "Linux",
          "count": 3
        }
      ],
      "pretty_names": [
        {
          "pretty_name": "Red Hat Enterprise Linux Server 7.6 (Maipo)",
          "count": 2
        },
        {
          "pretty_name": "Red Hat Enterprise Linux",
          "count": 1
        }
      ],
      "architectures": [
        {
          "arch": "amd64",
          "count": 3
        }
      ],
      "mem": {
        "total_in_bytes": 100696489984,
        "adjusted_total_in_bytes": 100696489984,
        "free_in_bytes": 18603053056,
        "used_in_bytes": 82093436928,
        "free_percent": 18,
        "used_percent": 82
      }
    },
    "process": {
      "cpu": {
        "percent": 2
      },
      "open_file_descriptors": {
        "min": 734,
        "max": 846,
        "avg": 794
      }
    },
    "jvm": {
      "max_uptime_in_millis": 6819164854,
      "versions": [
        {
          "version": "20.0.2",
          "vm_name": "OpenJDK 64-Bit Server VM",
          "vm_version": "20.0.2+9-78",
          "vm_vendor": "Oracle Corporation",
          "bundled_jdk": true,
          "using_bundled_jdk": true,
          "count": 3
        }
      ],
      "mem": {
        "heap_used_in_bytes": 10799082224,
        "heap_max_in_bytes": 21147680768
      },
      "threads": 337
    },
    "fs": {
      "total_in_bytes": 2049917026304,
      "free_in_bytes": 651357851648,
      "available_in_bytes": 560368238592
    },
    "plugins": [],
    "network_types": {
      "transport_types": {
        "security4": 3
      },
      "http_types": {
        "security4": 3
      }
    },
    "discovery_types": {
      "multi-node": 3
    },
    "packaging_types": [
      {
        "flavor": "default",
        "type": "rpm",
        "count": 3
      }
    ],
    "ingest": {
      "number_of_pipelines": 35,
      "processor_stats": {
        "append": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "convert": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "csv": {
          "count": 30,
          "failed": 0,
          "current": 0,
          "time_in_millis": 68
        },
        "date": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "dot_expander": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "drop": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "fingerprint": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "foreach": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "geoip": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "grok": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "join": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "json": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "pipeline": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "remove": {
          "count": 9,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "rename": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "script": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "set": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "set_security_user": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "split": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "uri_parts": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        },
        "user_agent": {
          "count": 0,
          "failed": 0,
          "current": 0,
          "time_in_millis": 0
        }
      }
    },
    "indexing_pressure": {
      "memory": {
        "current": {
          "combined_coordinating_and_primary_in_bytes": 0,
          "coordinating_in_bytes": 0,
          "primary_in_bytes": 0,
          "replica_in_bytes": 0,
          "all_in_bytes": 0
        },
        "total": {
          "combined_coordinating_and_primary_in_bytes": 0,
          "coordinating_in_bytes": 0,
          "primary_in_bytes": 0,
          "replica_in_bytes": 0,
          "all_in_bytes": 0,
          "coordinating_rejections": 0,
          "primary_rejections": 0,
          "replica_rejections": 0
        },
        "limit_in_bytes": 0
      }
    }
  },
  "snapshots": {
    "current_counts": {
      "snapshots": 0,
      "shard_snapshots": 0,
      "snapshot_deletions": 0,
      "concurrent_operations": 0,
      "cleanups": 0
    },
    "repositories": {}
  }
}

Please find the shard details below.

GET /_cat/shards

.security-profile-8                                           0 p STARTED         1   8.5kb 10.10.18.174 cb-2
.security-profile-8                                           0 r STARTED         1   8.5kb 10.10.18.59  cb-1
.ds-.logs-deprecation.elasticsearch-default-2024.04.26-000009 0 p STARTED         1   9.5kb 10.10.18.174 cb-2
.ds-.logs-deprecation.elasticsearch-default-2024.04.26-000009 0 r STARTED         1   9.4kb 10.10.18.59  cb-1
.apm-custom-link                                              0 p STARTED         0    248b 10.10.18.174 cb-2
.apm-custom-link                                              0 r STARTED         0    248b 10.10.18.59  cb-1
.async-search                                                 0 p STARTED         0    256b 10.10.18.174 cb-2
.async-search                                                 0 r STARTED         0    256b 10.10.18.59  cb-1
.monitoring-beats-7-2024.05.12                                0 p STARTED     10080   6.3mb 10.10.18.174 cb-2
.monitoring-beats-7-2024.05.12                                0 r STARTED     10080   6.3mb 10.10.18.59  cb-1
.monitoring-es-7-2024.05.12                                   0 p STARTED    323340 160.2mb 10.10.18.215 cb-3
.monitoring-es-7-2024.05.12                                   0 r STARTED    323340 160.2mb 10.10.18.59  cb-1
.ds-ilm-history-5-2024.04.26-000008                           0 p STARTED        18  22.5kb 10.10.18.174 cb-2
.ds-ilm-history-5-2024.04.26-000008                           0 r STARTED        18  22.5kb 10.10.18.59  cb-1
.internal.alerts-security.alerts-default-000001               0 r STARTED         0    248b 10.10.18.174 cb-2
.internal.alerts-security.alerts-default-000001               0 p STARTED         0    248b 10.10.18.215 cb-3
.kibana_security_solution_8.9.2_001                           0 p STARTED         3  37.1kb 10.10.18.174 cb-2
.kibana_security_solution_8.9.2_001                           0 r STARTED         3  37.1kb 10.10.18.59  cb-1
.internal.alerts-observability.slo.alerts-default-000001      0 p STARTED         0    248b 10.10.18.174 cb-2
.internal.alerts-observability.slo.alerts-default-000001      0 r STARTED         0    248b 10.10.18.59  cb-1
.monitoring-kibana-7-2024.05.13                               0 p STARTED     17280     7mb 10.10.18.174 cb-2
.monitoring-kibana-7-2024.05.13                               0 r STARTED     17280     7mb 10.10.18.59  cb-1
.fleet-file-data-agent-000001                                 0 p STARTED         0    248b 10.10.18.215 cb-3
.fleet-file-data-agent-000001                                 0 r STARTED         0    248b 10.10.18.59  cb-1
.monitoring-beats-7-2024.05.09                                0 p STARTED     10080   6.3mb 10.10.18.174 cb-2
.monitoring-beats-7-2024.05.09                                0 r STARTED     10080   6.3mb 10.10.18.59  cb-1
.monitoring-kibana-7-2024.05.09                               0 p STARTED     17278   7.1mb 10.10.18.174 cb-2
.monitoring-kibana-7-2024.05.09                               0 r STARTED     17278   7.1mb 10.10.18.59  cb-1
.kibana_ingest_8.9.2_001                                      0 p STARTED       345 536.2kb 10.10.18.215 cb-3
.kibana_ingest_8.9.2_001                                      0 r STARTED       345 536.2kb 10.10.18.59  cb-1
elastictemp7                                                  0 p STARTED         0    247b 10.10.18.215 cb-3
elastictemp7                                                  1 p STARTED         0    247b 10.10.18.215 cb-3
elastictemp7                                                  2 p STARTED         0    247b 10.10.18.59  cb-1
elastictemp7                                                  3 p STARTED         0    247b 10.10.18.59  cb-1
elastictemp7                                                  4 p STARTED         0    247b 10.10.18.174 cb-2
elastictemp7                                                  5 p STARTED         0    247b 10.10.18.174 cb-2
.ds-.logs-deprecation.elasticsearch-default-2024.03.27-000008 0 p STARTED         4  34.1kb 10.10.18.174 cb-2
.ds-.logs-deprecation.elasticsearch-default-2024.03.27-000008 0 r STARTED         4  34.1kb 10.10.18.59  cb-1
.monitoring-beats-7-2024.05.10                                0 r STARTED     10080   6.2mb 10.10.18.174 cb-2
.monitoring-beats-7-2024.05.10                                0 p STARTED     10080   6.2mb 10.10.18.215 cb-3
.monitoring-kibana-7-2024.05.12                               0 p STARTED     17278   7.1mb 10.10.18.174 cb-2
.monitoring-kibana-7-2024.05.12                               0 r STARTED     17278   7.1mb 10.10.18.59  cb-1
.kibana_8.9.2_001                                             0 p STARTED       281   159kb 10.10.18.174 cb-2
.kibana_8.9.2_001                                             0 r STARTED       281 170.8kb 10.10.18.59  cb-1
.ds-.kibana-event-log-8.9.2-2024.04.11-000006                 0 p STARTED         0    247b 10.10.18.174 cb-2
.ds-.kibana-event-log-8.9.2-2024.04.11-000006                 0 r STARTED         0    247b 10.10.18.59  cb-1
.apm-source-map                                               0 r STARTED         0    248b 10.10.18.174 cb-2
.apm-source-map                                               0 p STARTED         0    248b 10.10.18.215 cb-3
.apm-source-map                                               0 r STARTED         0    248b 10.10.18.59  cb-1
.internal.alerts-observability.metrics.alerts-default-000001  0 p STARTED         0    248b 10.10.18.174 cb-2
.internal.alerts-observability.metrics.alerts-default-000001  0 r STARTED         0    248b 10.10.18.59  cb-1
.monitoring-es-7-2024.05.13                                   0 p STARTED    323305 158.2mb 10.10.18.215 cb-3
.monitoring-es-7-2024.05.13                                   0 r STARTED    323305 158.2mb 10.10.18.59  cb-1
emp-000008                                                    0 p STARTED         0    247b 10.10.18.215 cb-3
emp-000008                                                    0 r STARTED         0    247b 10.10.18.59  cb-1
.monitoring-kibana-7-2024.05.11                               0 p STARTED     17280     7mb 10.10.18.215 cb-3
.monitoring-kibana-7-2024.05.11                               0 r STARTED     17280     7mb 10.10.18.59  cb-1
sfw230725                                                     0 p STARTED         1   9.9kb 10.10.18.174 cb-2
sfw230725                                                     1 p STARTED         1   9.8kb 10.10.18.59  cb-1
sfw230725                                                     2 p STARTED         1  15.4kb 10.10.18.215 cb-3
sfw230725                                                     3 p STARTED         3  17.8kb 10.10.18.59  cb-1
.ds-filebeat-8.9.2-2023.11.28-000001                          0 r STARTED         0    247b 10.10.18.174 cb-2
.ds-filebeat-8.9.2-2023.11.28-000001                          0 p STARTED         0    247b 10.10.18.215 cb-3
.ds-.kibana-event-log-8.9.2-2024.02.26-000004                 0 p STARTED         1   6.2kb 10.10.18.174 cb-2
.ds-.kibana-event-log-8.9.2-2024.02.26-000004                 0 r STARTED         1   6.2kb 10.10.18.59  cb-1
elastic                                                       0 p STARTED 244187560 216.7gb 10.10.18.215 cb-3
elastic                                                       1 p STARTED 244194327 216.7gb 10.10.18.215 cb-3
elastic                                                       2 p STARTED 244195103 216.6gb 10.10.18.215 cb-3
elastic                                                       3 p STARTED 244205833 216.7gb 10.10.18.59  cb-1
elastic                                                       4 p STARTED 244179014 216.1gb 10.10.18.174 cb-2
elastic                                                       5 p STARTED 244182659 216.7gb 10.10.18.174 cb-2
.monitoring-es-7-2024.05.11                                   0 r STARTED    323306 160.6mb 10.10.18.174 cb-2
.monitoring-es-7-2024.05.11                                   0 p STARTED    323306 160.6mb 10.10.18.215 cb-3
.monitoring-es-7-2024.05.09                                   0 p STARTED    323340 159.2mb 10.10.18.215 cb-3
.monitoring-es-7-2024.05.09                                   0 r STARTED    323340 159.2mb 10.10.18.59  cb-1
.internal.alerts-observability.uptime.alerts-default-000001   0 p STARTED         0    248b 10.10.18.215 cb-3
.internal.alerts-observability.uptime.alerts-default-000001   0 r STARTED         0    248b 10.10.18.59  cb-1
.monitoring-beats-7-2024.05.14                                0 p STARTED     10080     7mb 10.10.18.174 cb-2
.monitoring-beats-7-2024.05.14                                0 r STARTED     10080     7mb 10.10.18.59  cb-1
.security-7                                                   0 p STARTED       143 414.6kb 10.10.18.215 cb-3
.security-7                                                   0 r STARTED       143 414.6kb 10.10.18.59  cb-1
.internal.alerts-observability.logs.alerts-default-000001     0 r STARTED         0    248b 10.10.18.174 cb-2
.internal.alerts-observability.logs.alerts-default-000001     0 p STARTED         0    248b 10.10.18.215 cb-3
.monitoring-es-7-2024.05.15                                   0 p STARTED     82816  49.1mb 10.10.18.174 cb-2
.monitoring-es-7-2024.05.15                                   0 r STARTED     82780    48mb 10.10.18.215 cb-3
.monitoring-beats-7-2024.05.13                                0 p STARTED     10080   6.2mb 10.10.18.215 cb-3
.monitoring-beats-7-2024.05.13                                0 r STARTED     10080   6.2mb 10.10.18.59  cb-1
.ds-ilm-history-5-2024.03.27-000006                           0 p STARTED        32    44kb 10.10.18.174 cb-2
.ds-ilm-history-5-2024.03.27-000006                           0 r STARTED        32    44kb 10.10.18.59  cb-1
.ds-.kibana-event-log-8.9.2-2024.01.27-000003                 0 p STARTED        14  84.6kb 10.10.18.215 cb-3
.ds-.kibana-event-log-8.9.2-2024.01.27-000003                 0 r STARTED        14  84.6kb 10.10.18.59  cb-1
.monitoring-es-7-2024.05.10                                   0 p STARTED    323343 159.4mb 10.10.18.174 cb-2
.monitoring-es-7-2024.05.10                                   0 r STARTED    323343 159.4mb 10.10.18.59  cb-1
.ds-logs-data-stream-test-2024.01.10-000007                   0 p STARTED         0    247b 10.10.18.215 cb-3
.ds-logs-data-stream-test-2024.01.10-000007                   0 r STARTED         0    247b 10.10.18.59  cb-1
.fleet-files-agent-000001                                     0 p STARTED         0    248b 10.10.18.215 cb-3
.fleet-files-agent-000001                                     0 r STARTED         0    248b 10.10.18.59  cb-1
.kibana_alerting_cases_8.9.2_001                              0 p STARTED         1   6.7kb 10.10.18.215 cb-3
.kibana_alerting_cases_8.9.2_001                              0 r STARTED         1   6.7kb 10.10.18.59  cb-1
.kibana_security_session_1                                    0 p STARTED         1   6.6kb 10.10.18.215 cb-3
.kibana_security_session_1                                    0 r STARTED         1   6.6kb 10.10.18.59  cb-1
.monitoring-kibana-7-2024.05.10                               0 p STARTED     17280     7mb 10.10.18.174 cb-2
.monitoring-kibana-7-2024.05.10                               0 r STARTED     17280     7mb 10.10.18.59  cb-1
.kibana_task_manager_8.9.2_001                                0 r STARTED        25 164.5kb 10.10.18.174 cb-2
.kibana_task_manager_8.9.2_001                                0 p STARTED        25 161.2kb 10.10.18.215 cb-3
.monitoring-es-7-2024.05.14                                   0 p STARTED    330770 180.9mb 10.10.18.215 cb-3
.monitoring-es-7-2024.05.14                                   0 r STARTED    330770 179.3mb 10.10.18.59  cb-1
.apm-agent-configuration                                      0 p STARTED         0    248b 10.10.18.174 cb-2
.apm-agent-configuration                                      0 r STARTED         0    248b 10.10.18.59  cb-1
.monitoring-beats-7-2024.05.15                                0 p STARTED      4811   4.9mb 10.10.18.215 cb-3
.monitoring-beats-7-2024.05.15                                0 r STARTED      4813   4.5mb 10.10.18.59  cb-1
.ds-ilm-history-5-2024.02.26-000004                           0 p STARTED        34  44.4kb 10.10.18.174 cb-2
.ds-ilm-history-5-2024.02.26-000004                           0 r STARTED        34  44.4kb 10.10.18.59  cb-1
.monitoring-kibana-7-2024.05.15                               0 r STARTED      4280     3mb 10.10.18.174 cb-2
.monitoring-kibana-7-2024.05.15                               0 p STARTED      4278   2.9mb 10.10.18.59  cb-1
.internal.alerts-observability.apm.alerts-default-000001      0 p STARTED         0    248b 10.10.18.174 cb-2
.internal.alerts-observability.apm.alerts-default-000001      0 r STARTED         0    248b 10.10.18.59  cb-1
.ds-ilm-history-5-2024.01.27-000003                           0 r STARTED        25  23.6kb 10.10.18.174 cb-2
.ds-ilm-history-5-2024.01.27-000003                           0 p STARTED        25  23.6kb 10.10.18.59  cb-1
.monitoring-kibana-7-2024.05.14                               0 p STARTED     17278   6.8mb 10.10.18.174 cb-2
.monitoring-kibana-7-2024.05.14                               0 r STARTED     17278   6.9mb 10.10.18.59  cb-1
elastictemp9                                                  0 p STARTED         0    247b 10.10.18.215 cb-3
elastictemp9                                                  1 p STARTED         0    247b 10.10.18.215 cb-3
elastictemp9                                                  2 p STARTED         1   9.7kb 10.10.18.59  cb-1
elastictemp9                                                  3 p STARTED         0    247b 10.10.18.59  cb-1
elastictemp9                                                  4 p STARTED         1   9.8kb 10.10.18.174 cb-2
elastictemp9                                                  5 p STARTED         0    247b 10.10.18.174 cb-2
.monitoring-beats-7-2024.05.11                                0 p STARTED     10080   6.3mb 10.10.18.174 cb-2
.monitoring-beats-7-2024.05.11                                0 r STARTED     10080   6.3mb 10.10.18.59  cb-1
elastictemp8                                                  0 p STARTED         0    247b 10.10.18.215 cb-3
elastictemp8                                                  1 p STARTED         0    247b 10.10.18.215 cb-3
elastictemp8                                                  2 p STARTED         1  12.3kb 10.10.18.59  cb-1
elastictemp8                                                  3 p STARTED         2    19kb 10.10.18.59  cb-1
elastictemp8                                                  4 p STARTED         3  25.4kb 10.10.18.174 cb-2
elastictemp8                                                  5 p STARTED         0    247b 10.10.18.174 cb-2
.kibana_analytics_8.9.2_001                                   0 p STARTED        60   2.4mb 10.10.18.174 cb-2
.kibana_analytics_8.9.2_001                                   0 r STARTED        60   2.4mb 10.10.18.59  cb-1

The total shard count is indeed a bit uneven across the nodes (56, 46 and 35). In older versions Elasticsearch used to balance only on shard count, but in more recent versions I believe this has changed to also account for disk space used. Is the amount of disk space used on the nodes proportional to the number of shards they hold, or is it more even?
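As a side note, one quick way to compare shard count per node is to tally the last column of the `_cat/shards` output. A minimal sketch, using four lines taken from the output above (in a real check you would feed it the full response):

```python
from collections import Counter

# A few lines copied from the _cat/shards output above.
# Columns: index shard prirep state docs store ip node
cat_shards = """\
elastic                                                       0 p STARTED 244187560 216.7gb 10.10.18.215 cb-3
elastic                                                       4 p STARTED 244179014 216.1gb 10.10.18.174 cb-2
sfw230725                                                     1 p STARTED         1   9.8kb 10.10.18.59  cb-1
sfw230725                                                     3 p STARTED         3  17.8kb 10.10.18.59  cb-1
"""

shards_per_node = Counter()
for line in cat_shards.splitlines():
    fields = line.split()
    if fields and fields[3] == "STARTED":
        shards_per_node[fields[-1]] += 1  # last column is the node name

print(dict(shards_per_node))  # {'cb-3': 1, 'cb-2': 1, 'cb-1': 2}
```

Running this over the complete output is what gives the 56/46/35 split mentioned above; summing the store column per node instead would show whether disk usage tracks shard count.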

The third node does not have the same amount of space as the other two nodes. Below is the disk space and utilization of the two nodes that were initially part of the cluster.

[root@cb-3 disk2]# df -kh /disk2/
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/vg-lv01  817G  652G  131G  84% /disk2
[root@cb-2 disk2]# df -kh /disk2/
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/vg-lv01  748G  434G  283G  61% /disk2
[root@cb-2 disk2]#

Below is the allocated space and utilization of the new node.

[root@cb-1 disk2]# df -kh /disk2/
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/vg-lv01  345G  218G  110G  67% /disk2
[root@cb-1 disk2]#

Thanks,
Debasis

That may then explain it. Elasticsearch generally assumes that all nodes within a specific tier have the same specification with respect to RAM, CPU and storage, so it is recommended to avoid mixing nodes with different specifications, as that may lead to unexpected side effects and imbalance.

Thanks @Christian_Dahlqvist. Now I have two questions.

  1. Node cb-3 still has space available and has not reached any kind of threshold value, so why has it stopped receiving shards?
  2. The overall cluster status is green, but the shards are not uniformly distributed. Will this impact performance when querying the index?

Thanks,
Debasis

Node cb-3 is at 84% and the standard first watermark is at 85% so it does make sense to not allocate more shards there.
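The arithmetic behind this can be sketched from the df figures above; 85% is the default value of the `cluster.routing.allocation.disk.watermark.low` setting, above which Elasticsearch stops allocating new shards to a node. (The fraction computed here approximates df's Use% column; df also counts reserved blocks, so the two can differ by a point.)

```python
# Default low disk watermark: no new shards are allocated to a node
# once its disk usage crosses this fraction.
LOW_WATERMARK = 0.85

# (size, available) in GiB, taken from the df output above.
nodes = {
    "cb-3": (817, 131),
    "cb-2": (748, 283),
    "cb-1": (345, 110),
}

usage = {node: 1 - avail / size for node, (size, avail) in nodes.items()}
for node, frac in usage.items():
    status = "below" if frac < LOW_WATERMARK else "at/above"
    print(f"{node}: {frac:.1%} used, {status} the low watermark")
```

cb-3 comes out at roughly 84%, i.e. only about one percentage point under the default low watermark, which is why the allocator is reluctant to place more shards there.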

Not necessarily, but that depends on how shards are distributed and the load on the cluster. It is quite possible that it will not be noticeable, but you are more likely to experience imbalances as the storage space is not uniform.

cb-3 was already part of the cluster, and after data was ingested its utilization reached 84%. cb-1 is the new node, which was added yesterday, and its current utilization is 67%. So my concern is why the shards have not moved to the cb-1 node.

Thanks,
Debasis

It does have less disk space available compared to the other nodes, so that could be affecting the new shard allocation algorithm. I do not know the internals of the new algorithm, so I will need to leave that for someone else.

Thanks @Christian_Dahlqvist

We know that there is enough disk space available on each node to allow an even distribution of the shards. Given that, is there a way to trigger Elasticsearch to rebalance? And could that have been done during the node add itself (so as to avoid two rounds of rebalancing)?