Disk usage is at 100% after running a "delete by query" in Dev Tools in Kibana

After running the query below, disk space is filling up on all data nodes (ELK cluster: 3 master nodes, 3 data nodes, 1 Kibana node).

POST /apic_sandbox/_delete_by_query?wait_for_completion=false   // change the index name accordingly
{
  "query": {
    "bool": {
      "must": [
        {
          "match_phrase": {
            "catalog_name": "sandbox"   // change the catalog name accordingly
          }
        }
      ],
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "2022-07-15T18:00:00+05:30",   // change the time range you need to delete
              "lt": "2022-07-15T19:00:00+05:30"
            }
          }
        }
      ]
    }
  }
}

kindly help!
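
For reference, per-node disk usage can be checked from Dev Tools with the cat allocation API (the column selection below is just an example and assumes a reasonably recent Elasticsearch version):

GET _cat/allocation?v&h=node,shards,disk.indices,disk.used,disk.avail,disk.percent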

I don't think the query itself caused the problem.

There is a useful command for finding the 10 largest directories:
du -h / | sort -rh | head -10
Drilling down in a few iterations, for instance 1. du -h /var, 2. du -h /var/log, will lead you to what needs cleaning.

If you don't need them, remove or move old Elasticsearch logs, then if possible clean the /tmp directory, then other unused directories. Be aware that if you delete, for instance, 100 MB of files you immediately get 100 MB back on the disk, whereas Lucene has its own internal mechanism for releasing disk space, so deleted documents are not freed right away.
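
For the Elasticsearch data itself, a delete by query only marks documents as deleted; the space is reclaimed later when Lucene merges segments. As a rough sketch (using the index name from the question), you can see how many deleted documents are still taking up space and ask Lucene to expunge them, keeping in mind that the expunge itself temporarily needs extra disk while segments are rewritten:

GET _cat/indices/apic_sandbox?v&h=index,docs.count,docs.deleted,store.size

POST /apic_sandbox/_forcemerge?only_expunge_deletes=true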

Thanks, @Rios, for the reply!

I think the issue is with tasks or processes that are not releasing disk space. These processes resume when I start the service.
After adding another disk to the cluster, it starts consuming that disk space as well.
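
If the original delete by query (started with wait_for_completion=false) is still running, it should show up in the task management API, and a delete by query task can be cancelled. A minimal sketch, where <task_id> is whatever id the first request returns:

GET _tasks?detailed=true&actions=*/delete/byquery

POST _tasks/<task_id>/_cancel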

Do you have multiple disks on a node?

The disk was added to LVM.

How do I stop the _forcemerge if it is still running?
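
As far as I know, a running force merge cannot be cancelled through the task API (it keeps going even if the client connection is dropped), but you can at least check whether one is still in progress. A sketch, assuming the task action name contains "forcemerge":

GET _tasks?detailed=true&actions=*forcemerge*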
