I have a 3-node Elasticsearch 6.3.0 cluster collecting logs with the default number of shards and replicas.
Each node is associated with a 300GB external disk.
Is it by design that one node ends up holding the bulk of the shards? I would have expected them to be distributed equally among the nodes.
The issue is that when the node with the most shards hits its disk limit, ES starts marking indices as read-only and things get ugly.
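For anyone who ends up in the same read-only state: when the flood-stage disk watermark is exceeded, Elasticsearch sets the `index.blocks.read_only_allow_delete` block on the affected indices, and in 6.x that block is not removed automatically. After freeing up disk space, it can be cleared manually (a generic example, applied here to all indices):

```
PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
```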
With this approach, how can I use all 900GB of available space?
Is there a way to allocate shards equally across the nodes?
Here's a screenshot of that setup.
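To see how shards and disk usage are actually spread across the nodes, the `_cat` APIs are useful (a general-purpose check, not specific to this cluster):

```
GET _cat/allocation?v
GET _cat/shards?v
```

`_cat/allocation` shows the shard count and disk usage per node, and `_cat/shards` shows which node each individual shard landed on.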
I figured it out with the help of a colleague.
Basically, we were running these nodes in AWS, with one node in Zone B and the other two in Zone A.
We also had zone-based shard allocation awareness enabled, which forced a full copy of the data onto the lone Zone B node.
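For reference, zone-aware allocation is configured roughly like this in `elasticsearch.yml` (the attribute name and zone values here are illustrative, not our exact config):

```yaml
# Per node: tag the node with its availability zone
node.attr.zone: zone-a

# Balance shard copies across the "zone" attribute
cluster.routing.allocation.awareness.attributes: zone

# Forced awareness: with both zones listed, ES keeps a full copy of the
# data in each zone -- so a single node in one zone receives every shard
cluster.routing.allocation.awareness.force.zone.values: zone-a,zone-b
```

With forced awareness and an uneven split of nodes between zones, the allocator piles one complete copy of every index onto the under-populated zone, which matches what we were seeing.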