Indices get archived despite running Delete Scripts


I wrote a cron job that runs a delete script, which in turn deletes one index per day (to ensure that there are only eight indices at any point in time). I do this because we have space constraints on our development server.
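For reference, a delete script along those lines can be sketched as follows. This is a hypothetical version: the index naming scheme (logstash-YYYY.MM.DD) and the cluster address (localhost:9200) are assumptions, so adjust both to your setup.

```shell
#!/bin/sh
# Hypothetical daily cleanup sketch: delete the index that has aged out
# of an 8-index retention window. The naming scheme and cluster address
# are assumptions, not taken from the original post.
RETENTION_DAYS=8
OLD_INDEX="logstash-$(date -d "${RETENTION_DAYS} days ago" +%Y.%m.%d)"

# The delete index API removes the index from the cluster and frees its
# disk space on every node that held a copy of its shards.
curl -XDELETE "http://localhost:9200/${OLD_INDEX}"
```

Run once a day from cron, e.g. `0 1 * * * /path/to/delete-old-index.sh`.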

Of late, I have observed that despite having just eight indices, the cluster health turns yellow, and sometimes red, due to space issues.

A little more research revealed that there's a directory at /logdata/elasticsearch-dev/nodes/.
When I cd to this location, I see this:

0 1 2 3 4 5 6 7 8 9 10 11

cd-ing into each of these 'nodes' in turn gives me this structure:

indices node.lock

The 'indices' directory holds old indices (which I thought no longer existed, because I had deleted them manually). Similarly, the other node directories also have an 'indices' directory storing stray indices that consume space.

Could any of you explain this concept of "nodes" holding "old indices", and why my delete script has no effect on them?


You probably ran more than one node on this machine (not recommended in production).

Try checking how many nodes are running locally.
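One quick way to check, sketched here under two assumptions: that the cluster's HTTP port is the default localhost:9200, and that the process matches the usual Elasticsearch class name (adjust the pattern for your setup):

```shell
# Ask the cluster which nodes it sees (ignore the error if it is down):
curl -s "http://localhost:9200/_cat/nodes?v" || true

# Cross-check by counting local Elasticsearch processes
# (the match pattern is an assumption; adjust for your install):
pgrep -f org.elasticsearch | wc -l
```

If the process count is greater than one, several nodes are sharing that data path, and each one claims its own numbered directory under nodes/.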

If you stop all nodes and restart only one, then you can remove all the directories except 0.
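A minimal cleanup sketch, assuming the data path from the post, that node 0 is the single surviving node, and that every Elasticsearch instance has been stopped first:

```shell
# Sketch only: verify your assumptions before deleting anything.
# NODES_DIR defaults to the path mentioned in the post above.
NODES_DIR="${NODES_DIR:-/logdata/elasticsearch-dev/nodes}"

for d in "$NODES_DIR"/*; do
    # Keep node 0; remove the data of every other (now unused) node dir.
    [ "$(basename "$d")" = "0" ] && continue
    rm -rf "$d"
done
```

After this, restart the single node and confirm the cluster health and index list look as expected before trusting the cleanup.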

Is it safe to manually delete those archived indices?

If your single running node is node 0 and it holds all your indices, then you are safe.

You could eventually start 12 instances on your machine and let all nodes synchronize, then switch them off one by one and wait for shard relocation.
But if you are running out of disk space, that might not be a good idea.