Is this forcemerge behaviour normal?

Hello, just wanted to ask a quick question regarding forcemerge.
We have a hot-warm architecture for logs, and every night we move yesterday's indices to our "warm" node. We only have one master (hot), one data node (warm), and one coordinating node with Kibana.
Then, once the "old" indices are on the warm node, we forcemerge them down to 1 segment with Curator.
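For reference, the nightly flow boils down to two API calls; here is a rough sketch with the official Python client (the `box_type` attribute, its "warm" value, and the index name are assumptions from our setup, and Curator's allocation and forcemerge actions hit the same endpoints):

```python
# Rough sketch of the nightly "move to warm, then forcemerge" flow with
# elasticsearch-py. The box_type attribute, its "warm" value, and the
# index name are assumptions from our setup.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])
index = "logs-2018.12.12"  # hypothetical name for yesterday's index

# 1. Route the index to the warm node via allocation filtering.
es.indices.put_settings(
    index=index,
    body={"index.routing.allocation.require.box_type": "warm"},
)

# 2. Once relocation has finished, merge down to a single segment.
#    A long request_timeout matters here: the merge can run for hours.
es.indices.forcemerge(index=index, max_num_segments=1, request_timeout=4 * 3600)
```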

Now here comes the doubt: an index is usually around 50 GB in size, but during the forcemerge it grows to 150 GB or more. It also takes a really long time (3 hours or so). Should I worry about this?

I remember reading that an index being forcemerged should be read-only to prevent big segments from being created, but we are using Logstash and the indices are one day old, so they should no longer be receiving writes.
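In case it helps anyone, blocking writes explicitly before the merge is cheap insurance; a minimal sketch, assuming the same Python client and a hypothetical index name:

```python
# Minimal sketch: explicitly block writes before force-merging, so a stray
# late event from Logstash cannot land in the index mid-merge.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])
index = "logs-2018.12.12"  # hypothetical index name

es.indices.put_settings(index=index, body={"index.blocks.write": True})
es.indices.forcemerge(index=index, max_num_segments=1, request_timeout=4 * 3600)
```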

Let me know what you think.

We are using Elasticsearch v6.5.2
No replicas

Thank you!

Once the forcemerge is done, does the disk usage go back to 50 GB or less?
I mean, is it just a temporary situation (which would be expected, IMO)?

Oh yes, it is temporary. It goes back to ~50 GB.
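If you want to verify this yourself, here is a quick sketch (Python client, hypothetical index name) that prints the segment count and on-disk size of an index, so you can watch the temporary growth before, during, and after the merge:

```python
# Sketch: print segment count and on-disk store size for one index.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])
index = "logs-2018.12.12"  # hypothetical index name

stats = es.indices.stats(index=index, metric="segments,store")
total = stats["indices"][index]["total"]
print("segments:", total["segments"]["count"])
print("store GB:", total["store"]["size_in_bytes"] / 1024 ** 3)
```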

It also takes a really long time (3 hours or so). Should I worry about this?

I believe the warm nodes are not using SSD drives but spinning disks? Maybe that's the main reason?

All nodes have SSD drives, but we are having trouble with I/O latency. That's why I wanted to know whether the 3x increase in index size during forcemerge was normal. If it is not, then I will focus on that; if it is normal, then I will focus on investigating and reducing I/O wait times.
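In case it's useful to others, the nodes stats API exposes per-node filesystem I/O counters (on Linux), which is what I'm using to dig into the latency; a rough sketch with the Python client:

```python
# Rough sketch: per-node filesystem I/O counters from the nodes stats API.
# On Linux, fs.io_stats reports cumulative read/write operations per device.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

for node in es.nodes.stats(metric="fs")["nodes"].values():
    io = node["fs"].get("io_stats", {}).get("total", {})
    print(node["name"], "reads:", io.get("read_operations"),
          "writes:", io.get("write_operations"))
```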

@jpountz WDYT?
