I have an index with ~100k docs.
Every 5 minutes, I add ~50k docs to this index and delete the same number.
This is not that much; however, each time, the disk usage increases by ~15 MB, so after 12 hours the index takes 2 GB!
I read many posts, the most relevant being this one or that one; however, the _forcemerge with max_num_segments request seems to have no effect on the disk space, not a single MB released...
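For reference, the force merge request I ran was of this form (my_index is a placeholder for my real index name):

```
POST /my_index/_forcemerge?max_num_segments=1
```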
I have the same problem on different ES versions: 6.3.0, 6.3.2, and 6.4.
And in different environments (staging: a single node; production: one cluster of 3 nodes, each index on 5 shards + 1 replica).
If you stopped writing in abbreviations it would be much easier to rd (bad pun intended).
Can you share the output of GET yourindex/_stats here to check how many documents are marked as deleted? Can you share the output before and after running the force merge?
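For example, to compare the deleted-document counts before and after (my_index stands for your index name):

```
GET /my_index/_stats/docs

POST /my_index/_forcemerge?max_num_segments=1

GET /my_index/_stats/docs
```

The docs section of the stats response contains count and deleted, so you can see how many deleted documents the segments are still carrying.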
Also, can you explain what the issue is with needing more space? What are you afraid of? Running out of disk space?
Also, the force merge docs state that it should only be run against read-only indices, which is not the case here.
One last hint: there are a few settings you can tune in order to get more aggressive merging due to the number of deletes in a segment, but they are not exposed in the documentation as they are considered expert settings. You can find them here.
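Just as a sketch (the exact setting names live in the MergePolicyConfig source and may differ between versions), these expert settings are applied like any other dynamic index setting, for example:

```
PUT /my_index/_settings
{
  "index.merge.policy.expunge_deletes_allowed": 5,
  "index.merge.policy.segments_per_tier": 5
}
```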
On disk, this index takes 2.1 GB (and 18 hours ago, the same index was taking only ~10 MB!).
And yes, I am afraid of running out of disk space; that's what happens every day if I do not do a full reset of the index (I delete the index, then recreate it, but that won't be possible once the site is in production).
Thanks also for the force merge advanced options; I will read about them and try some.
Make sure you have the refresh interval set to an actual duration in your index settings. It can sometimes be useful to set this to -1 in order to disable it during heavy indexing, but you then need to reset it afterwards (or run a manual refresh). In newer versions the transaction log is also kept around for a while in order to make recovery more efficient. You can reduce its retention duration and size using these parameters, which may also reduce disk usage.
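A minimal sketch of that pattern (my_index is a placeholder; 1s is just the usual default interval):

```
# disable refreshes during heavy indexing
PUT /my_index/_settings
{ "index.refresh_interval": "-1" }

# ... bulk indexing ...

# restore a real interval and refresh manually once
PUT /my_index/_settings
{ "index.refresh_interval": "1s" }

POST /my_index/_refresh
```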
Indeed, you are right; I did not notice that it was the transaction log that took so much disk space.
I will try to play with the refresh interval plus the translog parameters and will tell you if it works.
Thanks!
And I noticed that "uncommitted_size_in_bytes" is not zero; I was wondering if that could be the reason why the translog is not cleaned up?
However, in the translog documentation you pointed me to, I found a solution: by setting index.translog.retention.age to 1 hour, my translog folder size decreased from 1 GB to 100 MB!
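In case it helps someone else, the change was just this dynamic setting update (my_index stands for my index name):

```
PUT /my_index/_settings
{
  "index.translog.retention.age": "1h"
}
```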