After creating a new index with index.number_of_shards=2 and
index.number_of_replicas=1 on a two-node cluster, everything is "well
balanced": each node holds one primary shard and one replica.
Then node_2 goes down, and both shards on node_1 become primary.
After node_2 comes back up, both shards on node_1 stay primary.
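The scenario above can be reproduced with a couple of curl calls. This is a sketch, assuming a local two-node cluster reachable on localhost:9200 and an index named test; adjust host, port, and index name to your setup:

```shell
# Create an index with 2 primary shards, each with 1 replica
# (assumes a two-node cluster listening on localhost:9200)
curl -XPUT 'localhost:9200/test' -H 'Content-Type: application/json' -d '{
  "settings": {
    "index.number_of_shards": 2,
    "index.number_of_replicas": 1
  }
}'

# Show which node holds each primary (p) and replica (r);
# after restarting node_2 you should see both "p" rows on node_1
curl 'localhost:9200/_cat/shards/test?v&h=index,shard,prirep,state,node'
```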
You are not really missing anything. The fact that two primaries sit on one node, rather than being balanced with one primary per node, does not really affect how things work, since replication is synchronous (by default) and replicas also serve searches. Still, there may be cases where you want the primaries evened out (specifically in the future, when snapshotting/backup of local data is implemented), and then we will at least provide an API to even them out.
Actually, if the shards are in state STARTED and everything is "green",
wouldn't only the routing need to be switched (primary: true/false) between
the two shard copies, or is that naive on my part?