Hello, I wanted to know if it is possible to restart a node of an Elasticsearch cluster gracefully.
When I restart a node in the cluster, searches that hit that node fail, even though the shards are replicated.
Is it possible to blacklist a node for searches before restarting it?
The only workaround I have found so far is to re-allocate the node's shards to other nodes before restarting, but that is not practical because each of my nodes holds several terabytes of data.
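For what it's worth, the documented rolling-restart approach avoids moving the data at all: you temporarily disable replica allocation via the cluster settings API, restart the node, then restore the default. Below is a minimal sketch that just builds the `PUT /_cluster/settings` request bodies for those two steps; the helper name and the idea of toggling `cluster.routing.allocation.enable` between `"primaries"` and `null` are taken from the Elasticsearch docs, but how you send the request (curl, a client library) is up to you.

```python
import json

def allocation_settings(enable):
    """Build a PUT /_cluster/settings body toggling shard allocation.

    enable: "primaries" pauses replica (re-)allocation while a node
            restarts, so the cluster does not try to rebuild terabytes
            of replicas elsewhere; None (JSON null) resets the setting
            to its default ("all") once the node has rejoined.
    """
    return {"persistent": {"cluster.routing.allocation.enable": enable}}

# Body to send before stopping the node:
before_restart = allocation_settings("primaries")
# Body to send after the node has rejoined the cluster:
after_restart = allocation_settings(None)

print(json.dumps(before_restart))
print(json.dumps(after_restart))
```

You would POST/PUT these bodies to `/_cluster/settings` on any node; searches against replicated shards on other nodes keep working while the restarting node is down.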
Hello David, and thank you for your response. I tried the trick (without the upgrade), but it seems that you need to send the request to a coordinating node in order to gracefully restart the other nodes. Am I right?