Unbalanced disk usage with ES 6.1.3

I have a 7-node cluster which was recently upgraded from 5.5 to 5.6 and then to 6.1.3. For some reason, 3 of the nodes (a, b, c) have hundreds of GB of disk space being taken up and I can't find out why. This is preventing the shards from being balanced correctly.

All the nodes are running Windows Server 2012 R2 and the indexes are on a dedicated drive.

Does anyone know of a way to find out what is using the space?

My thought was to pull one of the a, b, c nodes out of the cluster, let the cluster rebalance, delete all the index data from that node, and then add it back into the cluster. My hope is that data would then be reallocated back to the node in the proper amount. Does this make sense? Any other ideas?

This question is similar to

but he solved it by adding two new nodes, which may not be an option for me in this circumstance.
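As a starting point for "what is using the space", you can total up each top-level directory on the data drive and compare that against what Elasticsearch reports for its indexes. This is just a sketch using POSIX tools (e.g. under Git Bash or Cygwin on Windows); `DATA_PATH` is an assumed location, so point it at your actual data drive.

```shell
# List the largest top-level directories under the data path, biggest first,
# to spot space being used outside the live Elasticsearch indexes
# (leftover shard folders, logs, dumps, etc.).
DATA_PATH="${DATA_PATH:-/var/lib/elasticsearch}"
du -sk "$DATA_PATH"/* 2>/dev/null | sort -rn | head -20
```

Whatever shows up large here but doesn't correspond to an index Elasticsearch knows about is your missing space.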

Those 3 nodes appear to be running out of disk, so I'd assume you've hit the default watermarks and Elasticsearch will no longer route shards or allocate primaries to those nodes. Check your master node logs to see if that's the case.
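You can also check this from the API rather than the logs. As a sketch, assuming a node answers on `localhost:9200` (adjust the host for your cluster):

```shell
# Per-node view: what ES thinks the indexes occupy (disk.indices) vs.
# total disk used/available on each node. A big gap between disk.indices
# and disk.used means something outside ES is eating the drive.
curl -s 'localhost:9200/_cat/allocation?v&h=node,shards,disk.indices,disk.used,disk.avail,disk.percent'

# Show the effective disk watermark settings (defaults included).
curl -s 'localhost:9200/_cluster/settings?include_defaults&flat_settings&filter_path=*.cluster.routing.allocation.disk*'
```

If `disk.percent` is above the high watermark (85%/90% by default), that matches the relocation behaviour you're seeing.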

Yes, that is correct. Those nodes are hitting the watermarks. The problem is that they are using 750 GB of space, but the ES indexes account for only 260 GB of that. So my issue is what is taking up the other ~500 GB and how to clear it up so that the nodes are no longer hitting the disk watermark.

We ended up taking the nodes out of the cluster, letting the cluster rebalance, and then deleting everything in the ES data directory. When each node was added back into the cluster, data was rebalanced correctly.
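For anyone following the same procedure: the "take the node out" step can be done gracefully with allocation filtering, so shards drain off the node before you touch the data directory. A sketch, assuming the node is named `node-a` and the cluster answers on `localhost:9200`:

```shell
# Tell the cluster to move all shards off node-a.
curl -s -X PUT 'localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"cluster.routing.allocation.exclude._name": "node-a"}}'

# Watch until node-a holds 0 shards, then stop it and clean its data dir.
curl -s 'localhost:9200/_cat/allocation?v&h=node,shards,disk.indices'

# After the node rejoins, clear the exclusion so it receives shards again.
curl -s -X PUT 'localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"cluster.routing.allocation.exclude._name": null}}'
```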

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.