I use ES 5.0.0 with several indices, including .kibana. Because it is the smallest one, I picked it for the demonstration here, but a lot of other indices are also yellow. They are not yellow from the beginning; after a while in production, some of them turn yellow.
I have three master nodes and two data nodes in my environment. The nodes are reachable and the cluster has no special settings. There are also plenty of system resources left (memory, disk, CPU). Occasionally a node gets disconnected for a few seconds due to network issues.
Index-output
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open .kibana cZwgRQO9SJa10m0p6aGa8Q 1 1 205 4 406.4kb 406.4kb
and the shards
index shard prirep state docs store ip node
.kibana 0 p STARTED 205 406.4kb 138.201.138.161 ZPJ6URQ
.kibana 0 r UNASSIGNED
As far as I can tell, there should be one primary shard and one replica of it. Why isn't this working for the .kibana shard? How can I investigate why this is happening?
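One way to dig into this is the cluster allocation explain API, available since ES 5.0, which reports why a particular shard copy is unassigned. Below is a minimal sketch, assuming the cluster is reachable on localhost:9200 without authentication:

```python
# Minimal sketch, assuming ES is reachable on localhost:9200 without auth.
# The allocation explain API (ES >= 5.0) reports why a shard copy is unassigned.
import json
import requests

body = {
    "index": ".kibana",
    "shard": 0,
    "primary": False,  # ask about the replica copy, which is the unassigned one
}
resp = requests.get(
    "http://localhost:9200/_cluster/allocation/explain",
    headers={"Content-Type": "application/json"},
    data=json.dumps(body),
)
# The response contains the allocation decision and per-node explanations.
print(json.dumps(resp.json(), indent=2))
```

Called with no request body, the same endpoint explains the first unassigned shard it finds. The _cat/shards API can also show a short reason code per shard, e.g. `_cat/shards?v&h=index,shard,prirep,state,unassigned.reason`.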
This is now awkward, but I restarted the ES cluster and the errors are gone (for now). They will probably happen again in a few days. Then I will update this issue with the output of _cat/nodes.
In a split-brain scenario, a node will not rejoin the cluster until it is restarted, and whatever data was only on the restarted server can be lost. Please check again for split-brain scenarios and for any data loss in the cluster.
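With three master-eligible nodes, the usual guard against split brain in 5.x is to require a quorum of two masters via discovery.zen.minimum_master_nodes. The sketch below (same localhost/no-auth assumption as above) sets it as a dynamic cluster setting; it can equally be put in elasticsearch.yml:

```python
# Hedged sketch, assuming the cluster is reachable on localhost:9200 without auth.
# With 3 master-eligible nodes, a quorum of 2 ((3 / 2) + 1) prevents a partitioned
# minority from electing its own master, i.e. a split brain.
import json
import requests

resp = requests.put(
    "http://localhost:9200/_cluster/settings",
    headers={"Content-Type": "application/json"},
    data=json.dumps({
        "persistent": {"discovery.zen.minimum_master_nodes": 2}
    }),
)
print(resp.json())  # should acknowledge the persistent setting
```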