Working Elasticsearch cluster running in Kubernetes. This morning I updated k8s + Docker, which meant restarting the master node(s), which meant restarting everything. (thread) This is a familiar process - we've done it pretty regularly (for various reasons) without any real problem.
Today was different for some reason. Of 3560 shards, 1686 are marked as 'unassigned'. I've always kept 1 or more replicas for each shard in hopes of avoiding lost data when k8s is fussy.
Is there any way to convince Elasticsearch to pick up these pieces? Or am I SOL?
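For anyone following along, this is roughly what I'd poke at first - a sketch, assuming the cluster API is reachable at localhost:9200 (adjust host/auth for your setup). The allocation explain API tells you *why* a shard is unassigned, and if the reason is something transient like hitting the max retry limit during the restart, a reroute with retry_failed can kick allocation off again:

```shell
# Ask Elasticsearch why shards are unassigned.
# With no body, this explains the first unassigned shard it finds.
curl -s -X GET "localhost:9200/_cluster/allocation/explain?pretty"

# List unassigned shards and their reason codes (e.g. ALLOCATION_FAILED,
# NODE_LEFT, CLUSTER_RECOVERED) to see the overall picture.
curl -s "localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason" \
  | grep UNASSIGNED

# If shards failed allocation too many times during the restart churn,
# this asks the cluster to retry those failed allocations.
curl -s -X POST "localhost:9200/_cluster/reroute?retry_failed=true&pretty"
```

If allocation was deliberately disabled before the restart (a common rolling-restart step), it also has to be re-enabled, or replicas stay unassigned forever - worth checking `cluster.routing.allocation.enable` in the cluster settings before assuming data loss.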