Force-deleting index that is being snapshotted

Our system uses the Elastic Cloud service to host our Elasticsearch cluster. A few days ago, to identify the root cause of an issue with our logging system, the Logstash log level was switched to DEBUG. That resulted in 52 GB of data being created in a single day, and Elasticsearch got stuck when one of the instances ran out of disk space. In kopf, we managed to delete some old data and get the node into the yellow zone (86% disk use), but the main "troublemaking" index remains intact. Every attempt to delete it or even close it gets denied: https://gist.github.com/vpavlushkov/3958deb2056c0bfca35d59c83ccd4303 As a result, not all Elasticsearch functionality is available; for example, monitoring data cannot get through.
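
For reference, this is roughly what we are sending; the endpoint, credentials, and index name below are placeholders, and the actual responses are in the gist linked above:

```python
# Minimal sketch of the requests that get denied (all names are placeholders).
import requests

ES = "https://our-cluster.example.com:9243"   # Elastic Cloud endpoint (placeholder)
AUTH = ("elastic", "password")                # placeholder credentials
INDEX = "logstash-debug-day"                  # placeholder name for the oversized index

# Attempt to delete the index outright.
r = requests.delete(f"{ES}/{INDEX}", auth=AUTH)
print(r.status_code, r.text)

# Attempt to close it instead; this is rejected as well.
r = requests.post(f"{ES}/{INDEX}/_close", auth=AUTH)
print(r.status_code, r.text)
```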

Is there any way to get rid of that index so that the system returns to normal, even if it means losing one day of data? Thanks!
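
Our suspicion is that the deletes are rejected because a snapshot of the index is still in progress. If it helps, this is a sketch of how we can check for running snapshots (same placeholder endpoint and credentials as above; the `_snapshot/_status` endpoint does not need a repository name):

```python
# Sketch: list snapshots currently running across all repositories,
# to confirm whether one is still holding on to the problem index.
import requests

ES = "https://our-cluster.example.com:9243"   # placeholder endpoint
AUTH = ("elastic", "password")                # placeholder credentials

r = requests.get(f"{ES}/_snapshot/_status", auth=AUTH)
print(r.json())
```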
