Observed highly unequal shard sizes within an index.
Not using any custom routing.
Elasticsearch version: 6.2.2
Operating system: CentOS 7.3.1611
CPUs per host: 8
RAM: 64 GB
JVM heap committed: 29 GB
Number of nodes: 10
Indices are created daily.
Rack awareness enabled across two zones.
No dedicated master or coordinating nodes.
I was following this thread: Unbalanced shards within index. My situation looks similar, and the translogs of the oversized shards are also very large.
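For context, this is roughly how I am inspecting per-shard sizes and translog stats. A minimal sketch, assuming the cluster is reachable on localhost:9200; `my-index` is a placeholder for the actual daily index name:

```shell
# List each shard of the index with its document count and on-disk size,
# to compare sizes between shards with similar doc counts
curl -s 'localhost:9200/_cat/shards/my-index?v&h=index,shard,prirep,node,docs,store'

# Per-shard translog statistics (size in bytes and number of operations)
curl -s 'localhost:9200/my-index/_stats/translog?level=shards&pretty'
```

If the `_cat/shards` output shows similar `docs` but very different `store` values, comparing the translog size per shard helps tell whether the difference is uncommitted translog data or actual segment data.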
But the shocking part for me was the very large difference in shard sizes: nearly 400-500 GB between shards with almost the same document counts. One of the nodes also reached the disk watermark, causing the index to go into read-only mode. Is moving to 6.3.0 the solution for this issue, or do we need to debug further?
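On the read-only issue: when a node crosses the flood-stage disk watermark, Elasticsearch sets `index.blocks.read_only_allow_delete` on affected indices, and in 6.x this block is not released automatically once space is freed, so it has to be cleared by hand. A sketch of the recovery step (`my-index` again a placeholder):

```shell
# After freeing disk space, manually clear the flood-stage read-only block;
# setting the value to null removes the setting from the index
curl -s -X PUT 'localhost:9200/my-index/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```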