Indices Keep Returning

Hello. I'm extremely new to managing an Elastic Stack, so this is not something I'm familiar with. The implementation we have right now is encountering issues, and after troubleshooting for some time I believe it is due to the amount of data being indexed. I have attempted to delete the indices from the past few months, and they show as deleted for a short amount of time. However, they come back as red and unallocated. I want them permanently gone. I have been using the Dev Console and have also tried curl commands like the one below:

curl -XDELETE http://elastic-url/common-index-name*
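For reference, a quick way to check which of them have come back red after the delete (a sketch, using the same elastic-url as above) is the cat indices API:

curl -XGET 'http://elastic-url/_cat/indices?v&health=red'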

Any help would be greatly appreciated.

Which version are you using? How many nodes do you have in the cluster? How are these configured?
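(If you are not sure, both can be read from the API; for example, the root endpoint shows the version and the cat nodes API lists the nodes and their roles:)

curl -XGET 'http://elastic-url/'
curl -XGET 'http://elastic-url/_cat/nodes?v'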

Version is 6.2.4. There are 6 Elasticsearch nodes, distributed across three VMs. All of the Elasticsearch processes run in Docker.

Have you set minimum_master_nodes correctly to prevent split brain scenarios?

When I query with "GET /_cluster/settings" there is no field for the minimum master nodes present. Is that the correct spot to check? From what I've just read, I believe I should have 4.
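Note that GET /_cluster/settings only returns settings that were changed through the settings API, so anything that only lives in elasticsearch.yml will not show up there. The per-node startup settings can be read with the nodes info API; a sketch (the filter_path just trims the response):

GET /_nodes/settings?filter_path=**.minimum_master_nodes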

What is in your elasticsearch.yml files?

I looked in each .yml file; none of them set a minimum master node number. They specify that one node is a master (node.master=true) and the other is not (node.master=false, node.data=true). Additionally, the containers hosting the nodes are extremely unresponsive, not allowing me to pull logs or console into them.
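Paraphrasing from memory (not the exact files), the relevant lines look roughly like this on the master node:

node.master: true

and on the others:

node.master: false
node.data: true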

How many nodes are master eligible in total?

Three of them have node.master=true. I also found a .yml file within the containers. It does have the minimum master nodes field and it is set to 1 on all of them.
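As far as I can tell, the relevant line inside the container config is (paraphrasing):

discovery.zen.minimum_master_nodes: 1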

If you have 3 master-eligible nodes, it has to be set to 2 (a majority of 3).
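In 6.x you can set it in each elasticsearch.yml and also change it on the running cluster through the cluster settings API; a sketch of both:

discovery.zen.minimum_master_nodes: 2

or, dynamically from the Dev Console:

PUT /_cluster/settings
{
  "persistent": {
    "discovery.zen.minimum_master_nodes": 2
  }
}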

After deleting, try a force merge with "only_expunge_deletes=true", as described at:

This has worked for me in the past.
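For reference, on 6.x that parameter belongs to the force merge API, so the call would look something like this (using the index pattern from earlier in the thread):

POST /common-index-name*/_forcemerge?only_expunge_deletes=true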
