I have a VM running ELK 5.5.2. The VM was deployed from the "ELK Certified by Bitnami" image in Azure with mostly default settings and no plugins. It is configured as a single cluster with one node. Recently I noticed that all indices older than about a month (4 weeks?) are missing from the Kibana Discover page. If I run GET /_cat/indices?v, I only get the previous 4 weeks of indices. I restored an older backup of the VM, and it also shows only the previous 4 weeks of indices.
The VM does not have Curator installed, and I don't see any cron jobs on the VM that would delete the indices. Does anyone have any ideas why I am missing the older indices?
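For reference, this is roughly how I checked for cron jobs; the paths are the standard Debian locations, which is what the Bitnami image is based on:

```bash
# List the per-user crontab for every account on the box
for u in $(cut -d: -f1 /etc/passwd); do
  echo "== crontab for $u =="
  sudo crontab -l -u "$u" 2>/dev/null
done

# System-wide cron locations on a Debian-based image
cat /etc/crontab 2>/dev/null
ls -l /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly 2>/dev/null
```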
> If I run GET /_cat/indices?v, I only get the previous 4 weeks of indices. I restored an older backup of the VM, and it also shows only the previous 4 weeks of indices.
When you check the _cat API on the restored backup, are you also only seeing the last 4 weeks?
If the indices aren't there, Kibana can't show them. And based on what you said about the results of the _cat API, the indices aren't in Elasticsearch. It sounds like something is removing them from ES. As for what might be doing that, I couldn't say for sure, not being familiar with Azure or Bitnami myself. Maybe the image includes some kind of "cleanup" script that doesn't go through cron?
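If you want to rule that out, something like the following might turn it up. I'm assuming /opt/bitnami as the install root, which is the usual Bitnami layout; adjust if yours differs:

```bash
# systemd timers can run jobs without any crontab entry
systemctl list-timers --all

# Pending at(1) jobs, if atd is installed
atq 2>/dev/null

# Search the Bitnami tree for scripts that mention Curator or index deletion
sudo grep -rIl -e 'curator' -e '_cat/indices' -e 'DELETE' /opt/bitnami 2>/dev/null
```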
Also, kind of an aside, but I don't think the Bitnami images are "certified" in any way...
The _cat API on the restored backup VMs also shows only 4 weeks of indices. Since there are no cron jobs deleting the indices, I just want to make sure there is no config setting in Elasticsearch that would force or enable data retention. I will double-check and see if I can find any non-cron scripts that may be deleting the indices.
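For the config side, this is what I plan to check. As far as I know, Elasticsearch 5.x has no built-in time-based index retention (that kind of lifecycle management only shipped in later versions), so if nothing shows up here it would point back at something external. The config path is the typical Bitnami layout, so adjust if yours differs:

```bash
# Dump persistent and transient cluster settings; an empty result means no overrides
curl -s 'http://localhost:9200/_cluster/settings?pretty'

# Show the non-comment lines of the node config (typical Bitnami path)
grep -vE '^[[:space:]]*(#|$)' /opt/bitnami/elasticsearch/config/elasticsearch.yml
```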
Regarding the "certified" Bitnami image: they call it "ELK Certified by Bitnami".