Hi,
we are currently working with a small 3-node cluster (8 CPUs + 20 GB RAM). We have an index with 16 million documents (3 primary shards with 1 replica, 10s refresh interval) that are updated at least once a day (sometimes much more often). Sometimes these documents are all updated within a small time window. In this case we see the index size on disk grow from 16 GB to 33 GB and the segment count from 75 to 140. We did not change anything in the merge or translog configuration.
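For reference, this is roughly how we collect these figures, using the _cat APIs. This is just a sketch: the host, port, and index name below are placeholders for our real setup, and authentication is left out.

```python
# Minimal sketch: pull store size, deleted-doc counts, and per-shard
# segment details from the _cat APIs. Host and index name are placeholders;
# auth (we run with security enabled) is omitted for brevity.
import requests

BASE = "http://localhost:9200"
INDEX = "my-index"

# Overall doc count, deleted docs, and store size for the index
print(requests.get(
    f"{BASE}/_cat/indices/{INDEX}",
    params={"v": "true", "h": "index,docs.count,docs.deleted,store.size"},
).text)

# Segment count and size per shard (primary and replica)
print(requests.get(
    f"{BASE}/_cat/segments/{INDEX}",
    params={"v": "true", "h": "index,shard,prirep,segment,docs.count,docs.deleted,size"},
).text)
```

The docs.deleted and segment columns are where we see the growth after the bulk updates.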
So here are my questions: is it normal to see such a variation in disk space, and is there a way to reduce it?
Thanks
Kind regards
Manfred
Elasticsearch 7.3.1 with X-Pack