Then I deleted all installation files, unzipped the archive again, and restarted my cluster.
When I run curl http://<node>:9200/_cat/indices I get the following response:
red open .security-6 t7GWpZYGTROvIVeyRmmF2A 1 1
red open events 3kjZqpKpQFezGC2EClGEUg 5 1 0 0 1kb 522b
red open .kibana_task_manager sd3MhsOzTz6rlDxpDMXK8w 1 1
red open metrics pvUAl-YIR1WRVmmpL2zpmA 5 1 0 0 1kb 522b
red open .kibana_1 hnL2EriETpCugzmLVuvS4w 1 1
It seems that the events and metrics indices were created again!
Why is this happening? Do I have to delete any other directory to do a fresh install?
I'm not sure that this is the cause, but the logs will tell us for sure. On the elected master node, looking back to when it first started up in the new cluster, you should see a message saying:
recovered [xx] indices into cluster_state
If the number xx is not zero, then the cluster did not start up empty. If it is zero but there are other messages containing the string DanglingIndicesState then, again, the cluster did not start up empty. If neither of the above applies, then there should be later messages containing the string creating index, indicating that the indices are being re-created later on.
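If you don't want to scan the log by eye, something like the following grep should surface all three messages at once. The log path here is just a placeholder; point it at your node's actual server log, which for an archive install is typically logs/<cluster_name>.log inside the installation directory:

grep -E 'recovered \[[0-9]+\] indices into cluster_state|DanglingIndicesState|creating index' /path/to/elasticsearch/logs/<cluster_name>.log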
I solved it by manually deleting those indices via curl.
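In case it helps anyone else, the calls looked roughly like this, using the standard delete index API, with <node> standing in for my node's address:

curl -X DELETE "http://<node>:9200/events"
curl -X DELETE "http://<node>:9200/metrics"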
As you said, that log message appeared in my console. But my question is: why? Since I wiped all my data dirs and reinstalled ES, is this information stored anywhere else on the server?