Have you used the _cat/shards endpoint? Some internal indices may have auto-expand replicas, so I would not expect everything to have exactly 1 replica.
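A quick way to spot those, assuming curl against localhost:9200 (adjust host/auth for your cluster):

```
# Overview of primary/replica counts per index, including hidden
# system indices; an uneven "rep" column often means auto-expand.
curl -s 'localhost:9200/_cat/indices?h=index,pri,rep&s=index&expand_wildcards=all'
```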
Also, you seem to be using shrink in your lifecycle policy, so this number may be related to a shrink task that is running.
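You can check whether ILM is mid-shrink on a given index with the explain API (the index name below is just a placeholder):

```
# Shows the current ILM phase/action/step for the index; during a
# shrink you would see the "shrink" action and a temporary shrink-* index.
curl -s 'localhost:9200/my-index-000001/_ilm/explain?human'
```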
Show where? Is this from AutoOps? I've had some false positives with AutoOps.
In 8.6 the balancing heuristic was changed to consider other factors, such as disk usage and, depending on the license, write load, so I'm not sure we should expect the shard counts to be equal anymore, even though that is what the documentation says.
On my cluster I also do not have an equal number of shards per node, but they are pretty close.
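If you're curious what weights the allocator is actually using, they can be pulled from the cluster settings API; a sketch, assuming curl against localhost:9200:

```
# Dump the effective shard-balancing weight factors; on 8.6+ this
# should include shard, index, disk_usage and write_load factors.
curl -s 'localhost:9200/_cluster/settings?include_defaults=true&filter_path=defaults.cluster.routing.allocation.balance'
```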
Ah, found a default APM index: .apm-source-map, with 1 primary on node769 and 2 replicas on node765 and node770:
```
$ esapi -g '_cat/shards?h=index,shard,prirep,state,node' | sort | awk '/ r /{printf " - %s",$0} / p /{printf "\n%s",$0}'
.apm-agent-configuration 0 p STARTED node771 - .apm-agent-configuration 0 r STARTED node769
.apm-custom-link 0 p STARTED node769 - .apm-custom-link 0 r STARTED node771
.apm-source-map 0 p STARTED node769 - .apm-source-map 0 r STARTED node765 - .apm-source-map 0 r STARTED node770
```
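That 2-replica index is likely one of the auto-expand ones; you can confirm directly (same curl/localhost assumption as above):

```
# If the index auto-expands, this returns a range like "0-2", which
# explains the replica count tracking the number of eligible nodes.
curl -s 'localhost:9200/.apm-source-map/_settings?filter_path=*.settings.index.auto_expand_replicas'
```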
Right, we're using DC-aware allocation routing, as we've got nodes spread across three DCs.
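For reference, that kind of setup usually looks something like this; the attribute name "dc" is just an example, use whatever you've defined in elasticsearch.yml:

```
# elasticsearch.yml on each node: tag the node with its datacenter.
node.attr.dc: dc1

# Then tell the allocator to spread shard copies across that attribute:
curl -s -X PUT 'localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.routing.allocation.awareness.attributes": "dc"}}'
```

With awareness enabled, Elasticsearch tries to keep copies of the same shard in different DCs, which also affects how the balancer distributes shard counts.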