I am running ES 2.1.2 and Logstash 2.2. My cluster has 1 query node, 3 master nodes, and 10 data nodes; they are all dedicated nodes, and I have 16GB allocated to ES. Each index has 5 primary shards with 1 replica, and data per index is about 450GB including the replica. Today I added an NFS repository mount to my servers and had to update the path.repo parameter, which requires an ES restart. Every restart causes re-initialization of the shards. Any pointers on what setting in my config may be causing this?
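For context, the change is just the repository path in elasticsearch.yml on every node (the mount path below is an example, not my real path):

```
# elasticsearch.yml on each node -- register the shared NFS mount
# as a valid snapshot repository location (path is an example)
path.repo: ["/mnt/nfs/es_backups"]
```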
So I set up delayed allocation and ran the synced flush. Indices that are not being written to are handled as they should be once the node comes back. But shards on an index that is actively being written to still go through initialization. So in cases where you need a full cluster restart to enable a setting (in my case path.repo), how do you handle it?
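For reference, this is roughly what I ran (the 5m timeout is just the value I picked):

```
# Delay reallocation of shards from a departed node (5m is my choice)
curl -XPUT 'localhost:9200/_all/_settings' -d '{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "5m"
  }
}'

# Synced flush, so shards that saw no writes can recover from local copies
curl -XPOST 'localhost:9200/_flush/synced'
```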
Maybe introducing a caching/buffering layer in front of Logstash would help: I could process everything, turn off Logstash, and let events collect at the caching layer until I turn things back on.
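Something along these lines, assuming a Redis broker sitting between the shippers and the indexing Logstash (the host and key below are made up):

```
# Logstash indexer config -- pulls events from a Redis list instead of
# receiving them directly, so events queue up in Redis while this
# Logstash instance is stopped for the ES restart
input {
  redis {
    host      => "redis.example.com"   # hypothetical broker host
    data_type => "list"
    key       => "logstash"            # hypothetical list key
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```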
The shard itself is at 100% but the translog isn't at this point. Can I ignore this? The cluster is still yellow because of this, but all the shard icons are showing green.
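For anyone following along, this is how I'm watching it (the cat recovery output has per-shard translog columns, and the health call shows which index keeps the cluster yellow):

```
# Per-shard recovery progress, including translog replay columns
curl -XGET 'localhost:9200/_cat/recovery?v'

# Cluster health broken down by index
curl -XGET 'localhost:9200/_cluster/health?level=indices&pretty'
```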
I wanted to find out: should I run this synced flush every day, so that if a node falls out of the cluster for whatever reason, or I plan to bring a node down, I will be covered? Also, I have 3 different kinds of indices on the cluster; a synced flush issued for all of them will apply to each index separately, right?
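In other words, I'm assuming the all-indices form below covers each index, and the per-index form is equivalent for a single one (the index name is made up):

```
# Synced flush across every index in one call
curl -XPOST 'localhost:9200/_flush/synced'

# Or scoped to a single index (name is made up)
curl -XPOST 'localhost:9200/logstash-2016.03.01/_flush/synced'
```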