Rebooting the machine hosting hot nodes in an Elasticsearch cluster

Hello Team

We have a Docker-based ELK setup with 17 nodes, 4 of which are hot nodes. The Docker containers for all 4 hot nodes run on the same machine, which also hosts one hyper node and one Kibana container.

Due to some issues, we need to reboot that machine. What should we take care of before rebooting? Since all 4 hot nodes are on the same machine, no hot nodes will be available during the reboot. Will this affect shard allocation?

Please suggest the correct approach. Any help is appreciated.


That will likely cause issues, as you will be restarting all the nodes that handle data ingestion, and the cluster will then have to recover all of the indices and shards they host.

If you have to restart all 4 of them, then the best option is to disable shard allocation and pause any ingestion, then restart, wait for the cluster to return to green, then re-enable allocation and resume ingestion.
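The disable/re-enable steps can be done with the cluster settings API. A minimal sketch in Console syntax, assuming a reasonably recent Elasticsearch version (adjust to your setup):

```
# Before the reboot: restrict allocation to primaries so the cluster
# does not try to rebuild replicas while the hot nodes are down.
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}

# Optionally flush indices so recovery after the reboot is faster.
POST _flush

# ... reboot the machine and wait for the nodes to rejoin ...

# After the reboot: restore the default (full) shard allocation.
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}

# Watch recovery until status returns to green.
GET _cluster/health
```

Setting the value to `null` removes the override rather than hard-coding `"all"`, so any other allocation configuration you had stays in effect.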

In addition to the prior reply: if your senders buffer data (like Filebeat, or Logstash with persistent queues), and your buffers are large enough to hold the data during the reboot (and cluster recovery), you shouldn't miss events.
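For reference, enabling a persistent queue in Logstash is a settings change in logstash.yml. A minimal sketch (the size and path values here are illustrative, not recommendations):

```
# logstash.yml -- disk-backed buffering so events survive an
# Elasticsearch outage (values are illustrative)
queue.type: persisted
queue.max_bytes: 4gb
path.queue: /var/lib/logstash/queue
```

Size `queue.max_bytes` so it can absorb your peak ingest rate for the full duration of the reboot plus cluster recovery.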

Sources like syslog, which have no queuing buffer, will probably drop events.

Thanks @warkolm and @rugenl for the suggestions. Fortunately, we do not need to restart the machine after all.

Just a thought: to avoid such a scenario, would it be a good idea to keep the hot nodes on different machines within the cluster?



Thanks @warkolm.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.