Hello,
I ran into an issue this morning after leaving a test running over the weekend. The test generated more data than there was space in my Elasticsearch data partition. I am running 7.7.1 on a single node in a Docker container with no replicas. My data directory is bind mounted, so the data survives container restarts.
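For context, the container is started roughly like this (the bind-mount path matches my setup; the image tag and flags are the usual single-node ones, so treat this as a sketch rather than my exact command):

```shell
docker run -d --name elasticsearch \
  -p 9200:9200 \
  -e "discovery.type=single-node" \
  -v "$PWD/elastic-data/elasticsearch:/usr/share/elasticsearch/data" \
  docker.elastic.co/elasticsearch/elasticsearch:7.7.1
```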
When I tried to view my indexes in Kibana so that I could delete some, it just threw errors and would not display any index.
I manually tried to delete an index using curl:
[root@d234bc67034e elasticsearch]# curl -XDELETE "http://localhost:9200/filebeat-2020-11-23"
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
I tried restarting Elasticsearch, which made things worse: the container now refuses to even attempt a start. So at this point Elasticsearch isn't running at all.
I also looked at the directories to see if I could delete indexes with rm. The on-disk names of the indexes are less than helpful:
$ ls -l elastic-data/elasticsearch/nodes/0/indices
total 48
drwxrwxr-x 4 user group 4096 Nov 23 17:27 0qYQLn1zQrGADve76AnL4w
drwxrwxr-x 4 user group 4096 Nov 23 17:27 0UlEcuzmTHOBbeDSx0MIXg
drwxrwxr-x 4 user group 4096 Nov 23 17:27 8zM11IVTT56t1wWsG2lcuw
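Since the directory names don't say which index is which, the best I could do from the shell was rank them by size to at least find a candidate worth freeing. A sketch (the path is my bind mount; adjust as needed):

```shell
# Rank the UUID-named index directories by on-disk size (KB),
# largest last, to spot which one is eating the partition.
INDICES=elastic-data/elasticsearch/nodes/0/indices
du -sk "$INDICES"/*/ 2>/dev/null | sort -n
```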
Searching online for how to manually remove indexes only gave results for using the API in one form or another, which is not helpful in a scenario where Elasticsearch is either not responding properly or not running at all.
I ended up completely deleting the contents of the elastic-data partition and starting over. While Elasticsearch is up and running again, I lost all my data from over the weekend.
My question, then: is there a way to delete indexes when Elasticsearch is not running? That way I could have freed up some space and gotten things going again without losing all my data.
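What I'll probably do going forward, while Elasticsearch is healthy, is periodically save the output of the _cat/indices API (with the index and uuid columns) to a file. Then, if this happens again, a small helper like the hypothetical one below could translate an index name into the UUID directory to remove while the node is stopped. To be clear, this is only a sketch of an idea I have not verified, and deleting data directories behind Elasticsearch's back is presumably unsupported and risky:

```shell
# index_dir NAME MAPFILE INDICES_PATH
# MAPFILE holds "name uuid" pairs saved earlier while the node was up,
# e.g. from:  curl -s 'localhost:9200/_cat/indices?h=index,uuid'
# Prints the on-disk directory that stores NAME's data, or fails if
# NAME is not in the saved mapping.
index_dir() {
  uuid=$(awk -v n="$1" '$1 == n { print $2 }' "$2")
  [ -n "$uuid" ] || return 1
  printf '%s/%s\n' "$3" "$uuid"
}
```

For example, `index_dir filebeat-2020-11-23 map.txt elastic-data/elasticsearch/nodes/0/indices` would print the directory I'd need to rm while the container is stopped.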
Thank you.