Running 1.7.1, and I've not found a good answer on the proper approach to changing the ES recovery settings (gateway.expected_nodes, gateway.recover_after_nodes, etc.) when growing or shrinking an ES cluster.
If I have 10 nodes and add another 10 to make 20, those settings obviously need to be updated on each of the existing 10. Is there a better way to apply the changes without an outage than a rolling restart?
The docs say these are not dynamic settings: "These settings can only be set in the config/elasticsearch.yml file or on the command line (they are not dynamically updatable) and they are only relevant during a full cluster restart." I understand that, but I'm confused about how people manage adding/removing nodes on large clusters (hundreds of nodes) without painful rolling restarts.
Generally I've just set them in the files and let Elasticsearch pick them up whenever I actually need to restart it. I've always thought of these settings as a way to keep the cluster calm during a full cluster restart. As long as the files are up to date, the nodes will pick up the right settings at startup, which is when you need them anyway.
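For example, for a 10-node cluster every node's elasticsearch.yml might carry something like this (illustrative values; tune recover_after_nodes to your own tolerance):

gateway.recover_after_nodes: 8
gateway.recover_after_time: 5m
gateway.expected_nodes: 10

Nothing reads these until a node starts up, so keeping the files current costs nothing while the cluster is running.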
Now assume we add 10 additional nodes. As long as everything is running as intended, there are no issues. If, however, say 5 nodes all go down (poor planning: they were all on the same VM host, which was taken down), the cluster would still have 15 nodes, more than enough to satisfy the old thresholds and kick off the recovery logic. But what if that's not the intention? In this 20-node cluster we intend recover_after_nodes to be 18 and expected_nodes to be 20, and there doesn't seem to be a way to apply that without a cluster restart.
But let's also go the other direction. We have a 20-node cluster with the intended recovery settings:
gateway.recover_after_nodes: 18
gateway.recover_after_time: 5m
gateway.expected_nodes: 20
If our workload decreases to the point where we think we can get the same job done with 10 nodes, how would we adjust the recovery settings in that direction? Once the cluster is down to 10 nodes, we'll never meet the 18 or 20 nodes these settings require, as the sketch below shows.
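Concretely, the shrunk cluster would presumably want something like this (illustrative values):

gateway.recover_after_nodes: 9
gateway.recover_after_time: 5m
gateway.expected_nodes: 10

But with the old values still loaded, a full restart of the 10 remaining nodes would block recovery waiting on a recover_after_nodes of 18 that can never be reached.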
I suppose it feels odd for this to be a node-level setting; my gut tells me it's better suited to the master level, but that doesn't appear to be the case.
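For comparison (just to illustrate the contrast, not something the docs offer for these settings), settings that are dynamic in 1.x can be changed cluster-wide with a single call to the cluster settings API:

curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "none" }
}'

The gateway.* recovery settings aren't updatable this way, which is exactly what forces the per-node elasticsearch.yml edit plus restart.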