Elasticsearch does not allocate shards to one node

Hi All,
I have a 3 node cluster of Elasticsearch. Somehow the system does not allocate any shards to node 2 (node 1 holds 34 and node 3 holds 31 shards).
I'm using ES 6.1.2.

elasticsearch.yml (from node 2)

cluster.name: abc-cluster
node.name: elasticsearch-node-2
path.data: /home/abc/elasticsearch/elasticsearch_data
path.logs: /home/abc/elasticsearch/elasticsearch_logs
network.host: _site_
discovery.zen.ping.unicast.hosts: ["elastic-01.abc.local", "elastic-03.abc.local"]
discovery.zen.minimum_master_nodes: 2
type: native
order: 1
type: custom
order: 0
xpack.ssl.keystore.path: certs/elastic-certificates.p12
xpack.ssl.truststore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
path.repo: /home/abc/elasticsearch/elasticsearch_snapshots

What should I check?

One thing that can cause this is the Elasticsearch version on the node not receiving any data not being exactly the same as on the other nodes. That is what I would check first. If the versions are identical, check the logs and make sure the node has sufficient disk space.
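Both checks are quick to run against the cluster's REST API; a sketch in Kibana Dev Tools syntax, assuming the cluster is reachable on the default ports:

```
GET _cat/nodes?v&h=name,version,disk.avail
```

The `version` column shows whether all three nodes run the same release, and `disk.avail` shows free space per node. For the allocation question itself, `GET _cluster/allocation/explain` asks Elasticsearch to explain why a shard is unassigned or why it will not be placed on a given node.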

I can see nothing relevant in the log file.
On the Kibana monitoring page I can see 21.1 GB of free space on node-2.
So that is why I'm a bit confused.

Do all the nodes have the same amount of disk space allocated?
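A quick way to compare disk usage and shard counts across the nodes, again in Kibana Dev Tools syntax:

```
GET _cat/allocation?v
```

This lists, per node, the shard count plus `disk.used`, `disk.avail`, and `disk.total`, which makes a watermark problem easy to spot.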

I have found the issue.
Node-2 and Node-3 are also used for other purposes, so plenty of their disk space is already taken. I had to change the following parameters:
cluster.routing.allocation.disk.watermark.low: "5gb"
cluster.routing.allocation.disk.watermark.high: "2gb"
cluster.routing.allocation.disk.watermark.flood_stage: "1gb"
It is working now!
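For reference, the disk watermarks are dynamic cluster settings, so the same values can be applied without a node restart; a sketch mirroring the values above (adjust to your own disk sizes):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "5gb",
    "cluster.routing.allocation.disk.watermark.high": "2gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "1gb"
  }
}
```

When set as absolute byte values like these, the watermarks are minimum-free-space thresholds, so low must be the largest value and flood_stage the smallest.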

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.