The number of shards on my nodes is no longer equal

Hello,
The number of shards on my nodes is no longer evenly balanced, and I constantly see messages like the ones below in the logs of the master-2 node. What could be the reason for this?
I am using Elasticsearch and Kibana version 7.4.2.

```
[2020-10-04T23:59:43,874][INFO ][o.e.c.r.a.DiskThresholdMonitor] [master-2] low disk watermark [85%] exceeded on [NVXXnkg9RcS1LJHbxE4JWw][data-4][/var/lib/elasticsearch/nodes/0] free: 646.7gb[13.9%], replicas will not be assigned to this node
[2020-10-04T23:59:43,874][INFO ][o.e.c.r.a.DiskThresholdMonitor] [master-2] low disk watermark [85%] exceeded on [U8_onApfRnG7fpToARusIw][data-7][/var/lib/elasticsearch/nodes/0] free: 606.3gb[13.5%], replicas will not be assigned to this node
[2020-10-04T23:59:43,874][INFO ][o.e.c.r.a.DiskThresholdMonitor] [master-2] low disk watermark [85%] exceeded on [vKA6qG2WRr2VJnK4wNVYOA][data-2][/var/lib/elasticsearch/nodes/0] free: 555.4gb[12.4%], replicas will not be assigned to this node
[2020-10-04T23:59:43,874][INFO ][o.e.c.r.a.DiskThresholdMonitor] [master-2] low disk watermark [85%] exceeded on [jFQih3_gQ8-MdAzbTwlHFg][data-5][/var/lib/elasticsearch/nodes/0] free: 638.4gb[13.7%], replicas will not be assigned to this node
[2020-10-04T23:59:43,874][INFO ][o.e.c.r.a.DiskThresholdMonitor] [master-2] low disk watermark [85%] exceeded on [_BUmAFaYTGi-6aaYnPlqvw][data-1][/var/lib/elasticsearch/nodes/0] free: 644.8gb[14.4%], replicas will not be assigned to this node
```
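These messages mean that each listed data node has crossed the low disk watermark (85% disk used by default), so the cluster stops assigning new replica shards to those nodes. Existing shards stay where they are, and new shards pile up on the nodes that still have headroom, which is why the per-node shard counts drift apart. For anyone diagnosing the same thing, per-node disk usage and shard counts can be checked with a read-only call (assuming access to Kibana Dev Tools or curl):

```
# Shard count, disk used, disk available, and disk.percent per node
GET _cat/allocation?v
```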



When I make the changes below to each node's elasticsearch.yml file and try to restart the services, they fail to restart.

```
# /etc/elasticsearch/elasticsearch.yml
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage: 5gb
cluster.routing.allocation.disk.watermark.low: 30gb
cluster.routing.allocation.disk.watermark.high: 20gb
```
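For reference, the watermark values the cluster is actually applying right now can be read back without restarting anything (a read-only check; `flat_settings` just flattens the JSON keys so the `cluster.routing.allocation.disk.*` entries are easier to scan):

```
GET _cluster/settings?include_defaults=true&flat_settings=true
```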

I believe those parameters need to be set through the APIs and not through the elasticsearch.yml file.
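These are dynamic cluster settings, so they can be updated live through the cluster settings API without restarting any node. A minimal sketch using the same values as the yml above (`persistent` makes them survive a full cluster restart):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.threshold_enabled": true,
    "cluster.routing.allocation.disk.watermark.low": "30gb",
    "cluster.routing.allocation.disk.watermark.high": "20gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "5gb"
  }
}
```

Note that absolute values mean minimum *free* space remaining: `low: 30gb` stops replica allocation once a node has less than 30 GB free, whereas the percentage defaults refer to disk *used*.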

Thanks for the help. @Christian_Dahlqvist

Finally, what is your suggestion for these settings for my cluster? @Christian_Dahlqvist

Total disk: 42 TB

```
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage:
cluster.routing.allocation.disk.watermark.low:
cluster.routing.allocation.disk.watermark.high:
```
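For context while waiting for a suggestion: the 7.4 defaults are low 85%, high 90%, and flood_stage 95% of disk used, and absolute values like `30gb` instead express minimum free space per node, so the right choice depends on how the 42 TB is split across nodes and how fast the indices grow. As an illustration only (not a recommendation from this thread), reverting any overrides back to the defaults looks like this; setting a value to `null` removes the override:

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": null,
    "cluster.routing.allocation.disk.watermark.high": null,
    "cluster.routing.allocation.disk.watermark.flood_stage": null
  }
}
```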
