Strange size difference in shards since upgrading to 6.2

Hi,
We recently upgraded from 6.1 to 6.2, and we're noticing that some shards are getting disproportionately large.

Take these two copies of shard 0 from the same index, on different nodes: 16 GB on one, 6 GB on the other?

|index|shard|prirep|state|docs|store|ip|node|
|---|---|---|---|---|---|---|---|
|metrics20-2018.03.30|0|r|STARTED|4698496|16.1gb|10.187.21.3|elasticsearch-data-hot-5|
|metrics20-2018.03.30|0|p|STARTED|4663657|5.9gb|10.187.9.3|elasticsearch-data-hot-1|

Here's another example:

|index|shard|prirep|state|docs|store|ip|node|
|---|---|---|---|---|---|---|---|
|metrics20-2018.03.30|11|r|STARTED|4696819|17.2gb|10.187.8.3|elasticsearch-data-hot-4|
|metrics20-2018.03.30|11|p|STARTED|4671390|6gb|10.187.9.3|elasticsearch-data-hot-1|
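
(These rows came from the cat shards API; a request roughly like the one below, using what I believe are the default cat shards columns, reproduces them for this index:)

GET _cat/shards/metrics20-2018.03.30?v&h=index,shard,prirep,state,docs,store,ip,node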

This is causing disks on certain nodes to fill up far too quickly, and we have no idea why it is happening.

Any ideas?


I've noticed a pattern: the oversized shard copies are all replicas.
This index is set up with 20 shards and 1 replica.

metrics20-2018.03.30
size: 154Gi primaries (535Gi including replicas)
docs: 138,337,371 primaries (269,919,578 including replicas)

Notice how the total size (535Gi) is roughly 3.5x the primary size (154Gi)? With 1 replica it should only be about 2x.
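
(The same primary vs. total store sizes can also be pulled from the cat indices API; the request below is just a sketch using the standard pri.store.size and store.size columns:)

GET _cat/indices/metrics20-2018.03.30?v&h=index,pri.store.size,store.size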

We have had to reduce the replicas from 1 to 0 for now, and that had an immediate impact on disk usage.
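
(For anyone following along, that change is a single settings update, something along these lines, using the index name from above:)

PUT /metrics20-2018.03.30/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}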

If you set the number of replicas back to 1, are the rebuilt replicas also many times larger than the primary?

If you see this again, could you provide the output of the following?

GET /<INDEX>/_stats?level=shards
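
If it's easier, the stats API also accepts a metric filter, so you can restrict the output to the store and segments sections, which is where a size discrepancy between shard copies would show up:

GET /<INDEX>/_stats/store,segments?level=shards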
