Working cluster running in Kubernetes. This morning I updated k8s and Docker, which meant restarting the master node(s), which meant restarting everything. This is a familiar process - we've done it fairly regularly (for various reasons) without any real problem.
Today was different for some reason. Of 3560 shards, 1686 are marked as 'unassigned'. I've always had one or more replicas for each shard in hopes of avoiding lost data when k8s is fussy.
Is there any way to convince Elasticsearch to pick up these pieces? Or am I SOL?
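For anyone else digging into this, the allocation explain API is the usual way to ask Elasticsearch why a shard is unassigned, and a failed-retry reroute sometimes gets it moving again. A minimal sketch - the host and index name are placeholders, substitute your own:

```
# List unassigned shards with the reason code
curl -s 'localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED

# Ask the cluster why one specific shard is unassigned
# (index name and shard number here are examples)
curl -s 'localhost:9200/_cluster/allocation/explain' \
  -H 'Content-Type: application/json' \
  -d '{"index": "my-index", "shard": 0, "primary": true}'

# If allocation gave up after too many retries, ask it to try again
curl -s -X POST 'localhost:9200/_cluster/reroute?retry_failed=true'
```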
For future reference - to fix this I removed the persistent storage on the master node so it would start with an empty volume, then restarted everything and let Elasticsearch figure itself out.
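Roughly the shape of that, as a sketch - the StatefulSet name (es-master) and PVC name below are assumptions, so substitute whatever your deployment actually uses:

```
# Scale the master StatefulSet down so nothing is writing to the volume
kubectl scale statefulset es-master --replicas=0

# Delete the PVC backing the master's data directory
# WARNING: this throws away the master's persisted cluster state
# (PVC name is an example - check yours with `kubectl get pvc`)
kubectl delete pvc data-es-master-0

# Scale back up; the pod comes up with a fresh, empty volume
kubectl scale statefulset es-master --replicas=1
```

This presumably only worked because the data nodes kept their own volumes - it was just the master's state that got wiped and rebuilt.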