I have a system set up with 12 nodes, 48 indexes, and 12 shards/index (I
know that number is way too high).
Anyway, I'm in the process of reindexing all the data into a new index with
far fewer shards, but I wanted to keep the initial cluster up for existing
users searching the data. Well, one of the nodes ran out of memory and
locked the system up, so I did an /etc/init.d/elasticsearch restart on each
node. When things came back up, none of the shards would get assigned. There's
absolutely nothing in the log files. It just looks like a normal start-up.
Any idea?
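For reference, here's roughly what I've been using to check the allocation state (assuming the default HTTP endpoint on localhost:9200; adjust the host for your setup):

```shell
# Overall cluster state: status (red/yellow/green) and unassigned shard count
curl -s 'localhost:9200/_cluster/health?pretty'

# Per-shard view: which shards are UNASSIGNED and on which indexes
curl -s 'localhost:9200/_cat/shards?v' | grep UNASSIGNED
```

These are standard Elasticsearch cat/cluster APIs; the health call shows `unassigned_shards`, and the `_cat/shards` listing shows every shard stuck in the UNASSIGNED state.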