I'm currently running experiments with ECK and a three-node Elasticsearch cluster: two data/master nodes and one dedicated master.
The idea is that each data node sits on its own separate storage system (this is a bare-metal k8s cluster), and I use 1 replica for all my indices.
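For reference, the topology looks roughly like this. This is a sketch, not my exact manifest: the name, version, and the assumption that each data node gets its own nodeSet (so each can be pinned to its own storage) are placeholders:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: my-cluster        # placeholder name
spec:
  version: 8.13.4         # placeholder version
  nodeSets:
    - name: data-a        # pinned to storage system A
      count: 1
      config:
        node.roles: ["master", "data"]
    - name: data-b        # pinned to storage system B
      count: 1
      config:
        node.roles: ["master", "data"]
    - name: master        # dedicated master, no data role
      count: 1
      config:
        node.roles: ["master"]
```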
Let's say I have to do maintenance on one of the storage systems. I noticed that if I forcibly take the storage away with no other action, reads continue normally but all writes error out, because the node is still marked as online. Writes only resume once I force-delete the affected pod with kubectl, at which point the ES node is marked offline.
Is there a way to temporarily remove one of my data nodes so that I can take its storage down for a while? Using kubectl delete obviously just causes the pod to be restarted.
I thought of just editing the Elasticsearch custom resource and removing the node from it. But if I re-add it later, will it find all its previous storage and be cool with it?
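To make the idea concrete, here's a sketch of the edit I have in mind, assuming each data node is its own nodeSet (names are placeholders): delete the nodeSet whose storage is under maintenance, then restore it afterwards.

```yaml
# Before maintenance (fragment of spec.nodeSets):
nodeSets:
  - name: data-a
    count: 1
  - name: data-b        # node whose storage needs maintenance
    count: 1
  - name: master
    count: 1

# During maintenance: data-b removed from the spec entirely
nodeSets:
  - name: data-a
    count: 1
  - name: master
    count: 1
```

My worry is what happens on the way back: when I re-add data-b, will its pod bind to the old PVC and rejoin with its existing data, or will it come up empty?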
Thanks for your time,