We are trying to get automatic deletion of Marvel indices working on our monitoring cluster. We upgraded Elasticsearch and Marvel to 2.3.1 and Kibana to 4.5.0 using the basic license, thinking this was all we needed for the cleanup to begin, but that was not the case.
We have a separate monitoring cluster and a production cluster.
For the indices to be deleted automatically, do we need to install marvel-agent on both the monitoring cluster and the production cluster?
Hi Con -
Our basic license defaults the Marvel indices to 7 days of data, with no option to extend that retention policy. Is this what you're observing?
And yes, you need to install marvel-agent on both clusters.
- Install marvel-agent on the secondary cluster
- Disable local collection with `marvel.agent.interval: -1`
- Set `marvel.history.duration`
If you know how many days of Marvel indices you want to keep, it is as simple as configuring the following `marvel` options in a node's elasticsearch.yml file to control how Marvel data is collected and retained on the node:
`marvel.history.duration` sets the retention duration beyond which the indices created by Marvel are automatically deleted. Defaults to 7 days. Set to -1 to disable automatic deletion of Marvel indices.
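Putting those settings together, a minimal elasticsearch.yml sketch for the monitoring cluster's nodes might look like the following (the `14d` retention is only an example value, not a recommendation):

```yaml
# elasticsearch.yml on the monitoring cluster's nodes

# Disable local Marvel collection on the monitoring cluster itself,
# since it only stores data shipped from the production cluster
marvel.agent.interval: -1

# Keep Marvel indices for 14 days (example), then delete them automatically
marvel.history.duration: 14d
```

The production cluster keeps its normal marvel-agent configuration and exports its data to the monitoring cluster.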
For more Marvel configuration options, please see the Marvel configuration documentation.
Hope this helps,
Elastic's free Curator tool is also fantastic for curating time-based indices (and many, many other things, like performing snapshots).
This would allow you to set up a cron job and delete whatever you want, including but not limited to Marvel data.
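As a sketch of that cron approach (assuming Curator 3.x, where index selection is done with command-line flags; the host, prefix, and 7-day retention below are example values you would adjust):

```shell
# Crontab entry: every day at 01:00, delete Marvel indices older than 7 days.
# Curator 3.x style flags; Curator 4.x uses action files instead.
0 1 * * * /usr/local/bin/curator --host monitoring-host delete indices \
    --older-than 7 --time-unit days --timestring '%Y.%m.%d' --prefix '.marvel-'
```

Unlike `marvel.history.duration`, this works regardless of license, and the same job can curate other time-based indices by changing the prefix.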
Hope that helps,
Sorry to dig up an old post.
@bohyn we are facing a similar problem: the indices don't seem to be deleted automatically. I have Marvel indices older than 7 days. We are running Marvel with a basic license.