Nearly 50% of shards marked as 'unassigned' after cluster restart

Working cluster running in Kubernetes. This morning I updated k8s + Docker, which meant restarting the master node(s), which in turn meant restarting everything. This is a familiar process - we've done it pretty regularly (for various reasons) without any real problem.

Today was different for some reason. Of 3560 shards, 1686 are marked as 'unassigned'. I've always had 1 or more replicas for each shard in hopes of avoiding lost data when k8s is fussy.

Is there any way to convince Elasticsearch to pick up these pieces? Or am I SOL?
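
Edit: for anyone else staring at the same thing, the first step is to ask the cluster why it won't place the shards, before doing anything destructive. A minimal diagnostic sketch in Python, assuming the cluster is reachable at http://localhost:9200 (e.g. via kubectl port-forward) and using the requests library; the URL is my assumption, but the three APIs used are standard Elasticsearch:

```python
# Minimal sketch: list unassigned shards with their recorded reason, ask the
# cluster to explain one of them, and retry allocations that hit the retry
# limit. ES_URL is an assumption for my setup.
import requests

ES_URL = "http://localhost:9200"

# 1. Which shards are unassigned, and what reason did Elasticsearch record?
shards = requests.get(
    f"{ES_URL}/_cat/shards",
    params={"h": "index,shard,prirep,state,unassigned.reason", "format": "json"},
).json()
for s in shards:
    if s["state"] == "UNASSIGNED":
        print(s["index"], s["shard"], s["prirep"], s["unassigned.reason"])

# 2. Detailed explanation for one unassigned shard (returns 400 if none are).
explain = requests.get(f"{ES_URL}/_cluster/allocation/explain")
print(explain.json())

# 3. Retry shards whose allocation failed too many times; this never deletes data.
requests.post(f"{ES_URL}/_cluster/reroute", params={"retry_failed": "true"})
```

If the recorded reason is ALLOCATION_FAILED, the retry_failed call alone is often enough - it just asks Elasticsearch to retry shards that gave up after hitting the allocation retry limit. The explain output tells you whether a copy of the shard data still exists somewhere.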

For future reference - to fix this I removed the persistent storage on the master node so it would start with an empty volume, then restarted everything and let Elasticsearch figure itself out.

Problem solved.

Master nodes need persistent storage - they hold the cluster metadata that maps shard files to indices. Not having it can cause a lot of pain and data loss later on.
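
To put that in Kubernetes terms: run the masters as a StatefulSet with volumeClaimTemplates, so each master pod keeps its data directory across restarts and rescheduling, rather than an emptyDir that is wiped with the pod. A minimal excerpt; the names, image version, and sizes are placeholders, not from this thread:

```yaml
# Excerpt of a StatefulSet for dedicated master nodes. The key part is
# volumeClaimTemplates: each master pod gets its own PersistentVolumeClaim,
# so cluster metadata survives pod restarts and rescheduling.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-master                # placeholder name
spec:
  serviceName: es-master
  replicas: 3
  selector:
    matchLabels:
      app: es-master
  template:
    metadata:
      labels:
        app: es-master
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0  # placeholder version
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data  # default ES data path
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi        # placeholder size
```

With volumeClaimTemplates, deleting or rescheduling a pod leaves its PersistentVolumeClaim (and the cluster state on it) intact; a setup where masters start from an empty volume forces the cluster to rediscover what its shard files even belong to.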
