Elastic does not allocate indices equally

Hello,

I have an issue with the Elasticsearch allocation deciders. Sometimes Elasticsearch allocates new indices to nodes that are already quite full, for no apparent reason, while other nodes have plenty of free space.

Could you please advise me on how to solve this imbalance?

elasticsearch.yml

---
cluster.name: xxx
node.name: xxx-01
path.data: "/var/lib/elasticsearch"
path.logs: "/var/log/elasticsearch"
network.host: 0.0.0.0
http.port: 9200
transport.port: 9300
node.attr.datacenter: dc1
node.roles: [ data, ingest ]
cluster.routing.allocation.awareness.attributes: datacenter
discovery.seed_hosts: [ ]
cluster.initial_master_nodes: [ ]
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/xxx
xpack.security.transport.ssl.truststore.path: certs/xxx

The allocation settings are left at their default values.
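For reference, the effective allocation settings (including defaults) can be checked with a request roughly like this; the filter_path pattern may need adjusting for your version:

GET _cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation*

And for any one shard that ended up on a full node, the allocation explain API shows the per-node decider verdicts (the index name, shard number, and primary flag below are placeholders):

GET _cluster/allocation/explain
{
  "index": "my-index",
  "shard": 0,
  "primary": true
}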

How many nodes do you have in the cluster? Do you have different zones configured? Are you using shard allocation awareness and/or filtering? Are all nodes using exactly the same version of Elasticsearch?

24 nodes.
I'm not sure what zones you are talking about.
I'm using shard allocation awareness (as you can see in my configuration file) - dc1 and dc2.
All nodes have the same Elasticsearch version.
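For context, a two-datacenter awareness setup like the dc1/dc2 one mentioned above typically looks roughly like this (a sketch; the thread does not confirm whether forced awareness is in use, and forcing values changes how shards may pile up when one datacenter is short on capacity):

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "datacenter",
    "cluster.routing.allocation.awareness.force.datacenter.values": "dc1,dc2"
  }
}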

Can you show node information to illustrate the imbalance you are describing?

The imbalance can be seen quite clearly here.

I have set up ILM with rollover (40 GB or 7 days) on all indices, so there is not much size difference between the indices.
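A rollover of that shape roughly corresponds to a policy like the following (the policy name is a placeholder; older versions use max_size instead of max_primary_shard_size):

PUT _ilm/policy/my-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "40gb",
            "max_age": "7d"
          }
        }
      }
    }
  }
}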

Can you please provide statistics, e.g. output from the cat nodes API, instead of images that are hard to read and interpret? Are all nodes exactly the same specification? Which Elasticsearch version are you using?

heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
          66          99  40    3.75    3.59     3.39 di        -      data-20
          63          97  45    4.16    4.04     4.09 di        -      data-24
          53          81   2    0.06    0.03     0.05 m         -      master-02
          50          98  43    5.24    4.02     3.52 di        -      data-10
          40          99  46    4.64    4.30     4.63 di        -      data-17
          27          98  58    6.88    5.58     5.33 di        -      data-06
          32          91  53    4.78    5.14     5.35 di        -      data-19
          56          99  60    4.53    4.62     5.04 di        -      data-11
          51          99  39    3.62    3.56     3.85 di        -      data-05
          38          98  43    4.74    4.18     4.45 di        -      data-15
          68          93  33    2.98    3.18     3.44 di        -      data-08
          49          99  48    3.80    4.03     3.98 di        -      data-07
          39          88   2    0.20    0.18     0.12 -         -      kibana-03
          22          87   3    0.93    0.40     0.20 -         -      kibana-01
          31          99  59    5.14    4.97     5.06 di        -      data-13
          35          99   1    0.11    0.10     0.08 m         -      master-03
          23          99  57    4.60    4.71     4.78 di        -      data-21
           5          88   2    0.43    0.28     0.15 -         -      kibana-02
          30          98  49    4.52    4.74     4.65 di        -      data-03
          63          99  50    1.88    1.24     1.28 m         *      master-01
          55          99  51    6.35    5.14     5.17 di        -      data-02
          52          99  56    5.12    4.76     5.06 di        -      data-04
          41          96  60    3.17    3.17     3.49 di        -      data-22
          24          92  38    3.85    4.39     4.74 di        -      data-14
          20          98  45    4.02    4.28     4.45 di        -      data-23
          44          99  63    6.93    6.09     5.92 di        -      data-18
          53          99  45    3.07    3.18     3.57 di        -      data-01
          39          99  45    3.38    3.46     3.44 di        -      data-16
          48          97  34    2.91    3.08     3.11 di        -      data-12
          31          87   2    0.38    0.27     0.14 -         -      kibana-04
          54          91  56    4.21    4.34     4.64 di        -      data-09
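Since the question is about disk imbalance, it may also help to look at per-node disk figures, which the output above does not include. The cat allocation API shows shard counts and disk usage per data node, sorted here by disk usage:

GET _cat/allocation?v&h=node,shards,disk.indices,disk.used,disk.avail,disk.percent&s=disk.percent:desc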

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.