Node not getting shards allocated back to it after upgrade

I tried my hand at a rolling upgrade for my cluster. I have 3 nodes (elastic01, elastic02 and elastic03). Elastic03 is currently the master, so I started with 02 first. That upgrade went fine and the cluster recovered with no problem. I then went to 01. It's been a half hour since I restarted it and shards still aren't being allocated back to it.

Any thoughts?

Rather than deleting your question, it would be better if you could share your solution, as it may help others in the future with a similar problem :slight_smile:

The reason I withdrew the question is that things got worse. I thought I had updated one node and everything was going great. I was using the rolling upgrade instructions. However, I ended up losing the node that I had upgraded, after I thought everything had stabilized. Through a series of unfortunate decisions on my part, I'm now left with a 3-node cluster that will not accept new data; any time I create a new index, it shows up as red and I cannot write any documents to it.

So I'm in a bit of a mess at the moment. I've never gotten upgrades to work properly. I know there's either something I don't understand about the cluster or a step I'm missing.

If I do a _cluster/health request I get active_primary_shards: 0 and active_shards: 0, even though I have 3 data nodes.
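For anyone reading along, the health check looks roughly like this; the host and port are just an example, taken from the curl command later in the thread:

```
# ask the cluster for its overall health summary
curl -XGET 'http://10.200.100.101:9200/_cluster/health?pretty'

# fields relevant here include:
#   "status"                 - green / yellow / red
#   "number_of_data_nodes"   - 3 in this cluster
#   "active_primary_shards"  - reported as 0
#   "active_shards"          - reported as 0
```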

Providing logs will be useful.
What did you upgrade from, and to?

It's fixed, but here is what happened.

I did a curl -XGET 10.200.100.101:9200/_cluster/allocation/explain?pretty

After reading through that, I found out that the cluster had "cluster.routing.allocation.enable" set to "none". So yeah, that was the problem. I set "cluster.routing.allocation.enable" back to "all" and everything came back.
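In case it helps someone else, re-enabling allocation is a single cluster-settings update along these lines. This is just a sketch: the host is the same example IP as the explain call above, and whether you use "persistent" or "transient" depends on how the setting was originally applied.

```
# turn shard allocation back on for all shard types
curl -XPUT 'http://10.200.100.101:9200/_cluster/settings' \
  -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}'
```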

So in summary: after I went from 7.0 to 7.1.1 on 02, it looked like everything had balanced back out and was working. Then 02 disappeared. In the panic of having a node go missing, I set allocation to "none" and then proceeded to make a lot of bad decisions by deleting indices. I should have stopped in my tracks and troubleshot the node instead of getting all freaked out.
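For reference, the 7.x rolling-upgrade docs have you toggle that same setting around each node restart, which is likely where the "none" came from. A rough sketch of that flow, based on the documented steps (same example host as above; setting the value to null just clears it back to the default of "all"):

```
# 1) before stopping the node to be upgraded:
#    stop replica allocation, but keep primaries allocatable
curl -XPUT 'http://10.200.100.101:9200/_cluster/settings' \
  -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}'

# 2) stop the node, upgrade it, start it, and wait for it to rejoin the cluster

# 3) once it has rejoined, clear the setting to restore the default ("all")
curl -XPUT 'http://10.200.100.101:9200/_cluster/settings' \
  -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}'

# 4) wait for the cluster to go green before moving on to the next node
curl -XGET 'http://10.200.100.101:9200/_cluster/health?pretty'
```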

Completely my fault.

Hey, that's ok! And it's super appreciated that you shared this info as it'll help someone in future.