I have set up a demo Elasticsearch cluster (3 nodes) on EC2 to build a prototype. I run an indexer job on demand to populate documents into the index, and I query the index using the Sense plugin and a basic JavaScript UI.
For the last couple of days, all the indices in my cluster have been wiped out completely around noon. I have checked multiple times: there is no delete-all (or any delete) command anywhere in my code or in Sense. I suspected the EC2 instances had been rebooted, but "Launch time" still shows the time when I originally launched the cluster, and all the Elasticsearch log files are still there, so it does not look like an EC2 issue. I also checked the list of installed plugins, and only cloud-aws is installed.
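(For reference, this is one way to list the installed plugins from Sense, using the cat API:)

GET _cat/plugins?v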
I enabled debug logging after the last time it happened, and this is what I see in the logs around the time the indices disappeared:
node-1
[2017-01-13 19:58:34,640][DEBUG][cluster.service ] [Blue Diamond] processing [delete-index [command.php, index-1, index-2, warning]]: execute
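(For anyone who wants to reproduce this: the logger that produced the line above, cluster.service, can be switched to DEBUG dynamically from Sense with something like the following. The transient setting is my assumption of the simplest route; it resets on a full cluster restart, and the level can also be set in logging.yml instead.)

PUT /_cluster/settings
{
  "transient": {
    "logger.cluster.service": "DEBUG"
  }
}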
Does this ring a bell for anyone? It has happened three days in a row now, around the same time each day. Any tips would be appreciated.
I did not set action.destructive_requires_name: true, since this is not a production cluster. I am going to set it now and see whether that prevents the deletions.
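(The plan is to add it to elasticsearch.yml on each node and restart. Note that this setting only blocks wildcard and _all deletes; a DELETE of an explicitly named index will still go through.)

# elasticsearch.yml on each node (requires a node restart)
action.destructive_requires_name: true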
Note: I am using Elasticsearch 2.4.3.