We recently had an Elasticsearch cluster that we were moving to a new environment. Not certain why, but at some point half of the nodes in the old environment shut off. This caused a bunch of shards to go unassigned, which I didn't figure was a problem at first. However, I'd like those shards to distribute themselves amongst the REMAINING nodes, as we are decommissioning the old ones.
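To be clear about what I mean by decommissioning: my understanding is that allocation filtering by IP is supposed to drain the old nodes, and that's the kind of setting I've been playing with. A rough sketch of the idea (the host and IPs are placeholders, and I'm just using Python/requests to illustrate the API call):

```python
# Sketch: exclude the old environment's nodes so shards move off them.
# The IPs and host below are placeholders, not our real addresses.
import requests

OLD_NODE_IPS = "10.0.0.11,10.0.0.12,10.0.0.13"  # hypothetical old-environment nodes

resp = requests.put(
    "http://localhost:9200/_cluster/settings",
    json={
        "transient": {
            "cluster.routing.allocation.exclude._ip": OLD_NODE_IPS
        }
    },
)
print(resp.json())
```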
I have been trying to use the cluster allocation and rebalance settings, and for some reason the shards STILL won't assign. Here is what my cluster looks like currently, and what my cluster settings are:
I am considering using the reroute API in a loop across the unassigned shards, but only as a last resort. Are there any other ways I can get these to re-allocate? I cannot figure out why they seem to be stuck.
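For reference, this is roughly the loop I had in mind (a sketch only: the host and node names are placeholders, it only touches replicas, and it assumes a version that supports the allocate_replica reroute command and format=json on the _cat APIs):

```python
# Rough, untested sketch of the reroute-in-a-loop idea.
import requests

ES = "http://localhost:9200"                 # placeholder host
TARGET_NODES = ["node-new-1", "node-new-2"]  # hypothetical names of the remaining nodes

# List every shard copy the cluster currently reports as UNASSIGNED.
shards = requests.get(f"{ES}/_cat/shards", params={"format": "json"}).json()
unassigned = [s for s in shards if s["state"] == "UNASSIGNED" and s["prirep"] == "r"]

for i, s in enumerate(unassigned):
    node = TARGET_NODES[i % len(TARGET_NODES)]  # spread assignments round-robin
    body = {
        "commands": [
            {"allocate_replica": {"index": s["index"], "shard": int(s["shard"]), "node": node}}
        ]
    }
    r = requests.post(f"{ES}/_cluster/reroute", json=body)
    if r.status_code != 200:
        print(f"failed for {s['index']}[{s['shard']}]: {r.text}")
```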
Your cluster might be RED because you have 2 shards relocating and they may be primaries that are currently unassigned but in the process of getting assigned. Are these large shards?
How many indices do you have, and what is the number_of_replicas setting for each?
Lastly, that's a lot of shards to distribute over just 5 nodes. Is it possible your nodes are dying from out-of-memory errors or something similar, causing the cluster chaos?
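If it helps, something like this is a quick way to pull the numbers I'm asking about (a sketch with a placeholder host, assuming your version's _cat APIs accept format=json):

```python
# Quick look at cluster health plus per-index shard/replica counts and sizes.
import requests

ES = "http://localhost:9200"  # placeholder host

health = requests.get(f"{ES}/_cluster/health").json()
print("status:", health["status"])
print("unassigned shards:", health["unassigned_shards"])
print("relocating shards:", health["relocating_shards"])

# Per-index primary/replica counts, largest indices first.
indices = requests.get(f"{ES}/_cat/indices", params={"format": "json", "bytes": "b"}).json()
print("index count:", len(indices))
for idx in sorted(indices, key=lambda i: int(i.get("store.size") or 0), reverse=True)[:10]:
    print(idx["index"], idx["pri"], "primaries x", idx["rep"], "replicas,", idx["store.size"], "bytes")
```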
We have 3,282 indices, some of which range up to 70 GB in size, but the vast majority are under a gig. It looks like about 200 of them are over 1 GB, and only 28 of those are over 10 GB. We have 5 shards per index with 2 replicas. The servers are not massive, but when I check the memory and CPU on the machines it looks reasonable (around 1.5 GB of memory free), so I don't think that is the problem...
Oh, it seems that number was from the master, not the slaves; one of those is hanging at 223 MB free memory... I am going to try adding another node to see if that helps alleviate the load.
Yeah, in general that's a pretty high number of shards for just 5 data nodes: 3,282 indices × 5 primaries × 3 copies (1 primary + 2 replicas) works out to roughly 49,000 shards, close to 10,000 per node. Prefer fewer shards with more data in each (up to a limit). Is the node with 223 MB free memory doing a lot of GC?
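A quick way to eyeball GC pressure per node is the JVM section of the nodes stats API; a rough sketch (placeholder host):

```python
# Print heap usage and old-generation GC activity for every node.
import requests

stats = requests.get("http://localhost:9200/_nodes/stats/jvm").json()
for node in stats["nodes"].values():
    jvm = node["jvm"]
    heap_pct = jvm["mem"]["heap_used_percent"]
    old_gc = jvm["gc"]["collectors"]["old"]
    print(f"{node['name']}: heap {heap_pct}% used, "
          f"old GC {old_gc['collection_count']} collections / "
          f"{old_gc['collection_time_in_millis']} ms total")
```

A node that is constantly in old-generation GC will drop out of the cluster and cause exactly this kind of shard churn.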
Since my earlier post I have added two more nodes to the cluster and it has had basically no effect... Free memory on the slaves ranges from 500 MB to 3 GB, so I no longer think it's a memory problem...
Is there anything in the logs that might give some clues as to what's happening? Anything about shards not being allocated, rerouting, or recovery?