Disk usage grows indefinitely over time

As you can see in the screenshot above, the disk usage of my index grows indefinitely over time. If I close and then reopen the index, the usage drops, as the graph shows. I did a _close followed by an _open at 00, 07 and 16.

My application does a lot of replaces, but not many inserts. I expect to see the disk usage grow a little, but merging should keep it from growing too much.
I checked the _cat/shards and _cat/segments APIs, and it seems to me that the segments are correctly merged.

I don't understand why closing the index should free disk space, but it is the only method I have found to stop Elasticsearch from saturating all the available disk space. (Restarting the nodes also works.)

This phenomenon started after upgrading from Elasticsearch 7 to Elasticsearch 8.11.1.
I have tried creating a new index from scratch (using the _reindex API), but the problem persists.
Do you have any suggestions?


Hi Tommaso,

You can find indices with a high deleted-documents count by running the following command:

GET _cat/indices?v&h=index,docs.count,docs.deleted&s=dd:desc

Then you can run a force merge with only_expunge_deletes=true on the indices with a high deleted-documents count. This removes the deleted documents from the index and can free up some disk space.
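For example, assuming one of those indices is named my-index (substitute your own index name):

POST my-index/_forcemerge?only_expunge_deletes=true

With only_expunge_deletes=true, Elasticsearch only merges segments that contain a significant proportion of deleted documents, rather than merging everything down to a small number of segments, so it is a lighter operation than a full force merge.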


Thanks for the suggestion. I tried the force merge, but it did not help.

In the end, it seems the indexes were fine. The servers where Elasticsearch runs had not been updated with the latest patches.
We upgraded the Linux kernel (same revision, but a different patch release) and now it is working fine.
