I have the Marvel agent running on our production cluster, sending metrics to a separate monitoring cluster. The Marvel agent is also enabled on the monitoring cluster, but with marvel.agent.interval set to -1 since I'm on a basic license.
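For reference, the relevant parts of the two elasticsearch.yml files look roughly like this (hostnames are placeholders, and the exporter name is just what I happened to call it):

```yaml
# production cluster elasticsearch.yml (roughly) -- ship Marvel metrics
# to the separate monitoring cluster over HTTP
marvel.agent.exporters:
  monitoring:
    type: http
    host: ["http://monitoring-cluster:9200"]
```

```yaml
# monitoring cluster elasticsearch.yml (roughly) -- collection disabled,
# since this cluster only stores what production ships to it
marvel.agent.interval: -1
```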
It was my understanding, based on this blog post (Automatic deletion of marvel indices not working), that Marvel should now take care of removing indices older than 7 days on the monitoring cluster. However, there are still around 60+ days' worth that aren't being cleared down.
I can see the marvel cleaner (which I assume is what should handle the cleardown) starting, but seemingly no action is taken:
[2016-10-28 08:19:39,452][DEBUG][marvel.cleaner ] [VM-e136c374-2aba-4d48-9efa-436445aea6ec] starting cleaning service
[2016-10-28 08:19:39,550][DEBUG][marvel.cleaner ] [VM-e136c374-2aba-4d48-9efa-436445aea6ec] cleaning service started
Marvel agent version is 2.3.3. Does anyone have any idea what might be misconfigured here?
Thanks for getting back to me. I identified the issue, and Marvel is now successfully clearing down its own indices.
I still had the marvel.agent.exporters parameter set in the elasticsearch configuration on the monitoring cluster (with the URL just pointing to the local cluster), which seemed to be preventing the cleardown from taking place. Having removed it, all of the clusters in question are now happily maintaining 7 days' worth of indices.
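For anyone else who hits this: the offending block in the monitoring cluster's elasticsearch.yml looked roughly like the below (hostname is a placeholder, and the exporter name is just whatever I had used). Deleting it, so the cluster falls back to the default local exporter, was the whole fix.

```yaml
# removed from the monitoring cluster's elasticsearch.yml -- an explicit
# HTTP exporter pointing back at the local cluster, which was stopping
# the automatic cleanup from running
marvel.agent.exporters:
  my_local:
    type: http
    host: ["http://localhost:9200"]
```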
Indeed! Only the default local exporter purges indices. The HTTP exporter assumes the monitoring cluster may be shared, and therefore does not purge indices, so that one cluster's retention needs don't stomp on another's.
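If you ever want a retention window other than the default 7 days on the monitoring cluster, the local exporter's purge window is controlled by marvel.history.duration (if I'm remembering the 2.x setting name correctly), so the monitoring cluster config would look something along these lines:

```yaml
# monitoring cluster: rely on the default local exporter (no
# marvel.agent.exporters entry at all) and adjust the purge window
marvel.agent.interval: -1
marvel.history.duration: 7d   # the default retention window
```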