Elasticsearch in Kubernetes - how to gracefully shut down

5 data nodes, 5 ingest nodes, and a master (+ Kibana) running in Kubernetes. I'm looking for a way to tell Elasticsearch to close its indices and persist all of its state cleanly so I can work on the Kubernetes cluster.

If I just kill it outright, when it gets restarted there are dozens of "dangling indices" and Kibana is very unhappy.

The usual cause of dangling indices is master nodes that are not using persistent storage, so the cluster state is lost each time the master node(s) restart. Dangling indices are a sign of potential data loss, and you should avoid them at all costs.
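As a quick sanity check, you can verify whether your master pods actually keep their data path on a persistent volume. A minimal sketch, assuming a StatefulSet named es-master, a pod es-master-0, and the default data path of the official Elasticsearch image (adjust all of these to your deployment):

```
# Hypothetical resource names (es-master, es-master-0); adjust to your cluster.
# 1. Does the master StatefulSet declare a volumeClaimTemplate for its data?
kubectl get statefulset es-master \
  -o jsonpath='{.spec.volumeClaimTemplates[*].metadata.name}'

# 2. Is the data path inside the pod backed by that PVC mount rather than
#    an emptyDir or the container's overlay filesystem?
kubectl exec es-master-0 -- df -h /usr/share/elasticsearch/data
```

If the second command shows the container's overlay filesystem instead of a mounted volume, the cluster state disappears on every restart, which matches the dangling-index symptom described above.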

How should I calculate the space requirement(s) for the master node data volume?

Just measure it. It depends on how large and complex your cluster is, but it's normally less than 1GB.
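One rough way to measure it is to look at how much the master's data directory already holds, and at the size of the serialized cluster state itself. A sketch, again assuming the hypothetical pod name es-master-0, the official image's default data path, and the API reachable on localhost:9200 (e.g. via kubectl port-forward):

```
# On-disk size of the master node's data directory
kubectl exec es-master-0 -- du -sh /usr/share/elasticsearch/data

# Approximate size of the serialized cluster state, in bytes
curl -s 'http://localhost:9200/_cluster/state' | wc -c
```

The cluster state grows with the number of indices, shards, mappings, and templates, so re-check it as the cluster grows rather than sizing the volume once.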

