Marvel Indices Not Clearing Down

monitoring

(Michael Eves) #1

Hi,

I have the Marvel agent running on our production cluster, sending metrics to a separate monitoring cluster. I also have the Marvel agent enabled on the monitoring cluster, but with marvel.agent.interval set to -1, as I am using a Basic license.
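For reference, the relevant settings look roughly like this (a sketch of the two elasticsearch.yml files; the exporter name and host are illustrative, not my real values):

```yaml
# --- Monitoring cluster ---
# Disable local Marvel collection (Basic license)
marvel.agent.interval: -1

# --- Production cluster ---
# Ship metrics to the monitoring cluster via the HTTP exporter
marvel.agent.exporters:
  monitoring:
    type: http
    host: ["http://monitoring-host:9200"]
```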

It was my understanding, based on this blog post: Automatic deletion of marvel indices not working, that Marvel should now take care of removing indices older than 7 days on the monitoring cluster. However, there are still around 60+ days' worth that aren't being cleared down.

I can see the marvel cleaner (which I assume is what should handle the cleardown) starting, but seemingly no action is taken:

[2016-10-28 08:19:39,452][DEBUG][marvel.cleaner ] [VM-e136c374-2aba-4d48-9efa-436445aea6ec] starting cleaning service
[2016-10-28 08:19:39,550][DEBUG][marvel.cleaner ] [VM-e136c374-2aba-4d48-9efa-436445aea6ec] cleaning service started

The Marvel agent version is 2.3.3. Does anyone have any idea what might be misconfigured here?


(Chris Earle) #2

Hi Michael,

Could I see the output of the following commands run against the monitoring cluster?

curl -XGET localhost:9200/_cat/plugins?v

and

curl -XGET 'localhost:9200/_nodes?filter_path=nodes.*.settings&pretty'

Let me know,
Chris


(Michael Eves) #3

Hi Chris,

Thanks for getting back to me. I identified the issue, and Marvel is now successfully clearing down its own indices.

I still had the marvel.agent.exporters parameter set in the Elasticsearch configuration (with the URL just pointing back at the local cluster), which seemed to be preventing the cleardown from taking place. Having removed it, all the Marvel clusters in question are now happily maintaining 7 days' worth of indices.
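For anyone hitting the same thing, the offending setting on the monitoring cluster looked roughly like this (the exporter name and host are illustrative):

```yaml
# This HTTP exporter, even though it pointed back at the local cluster,
# replaced the default local exporter -- and only the local exporter
# purges old Marvel indices. Removing this block restored the cleardown.
marvel.agent.exporters:
  local_http:
    type: http
    host: ["http://localhost:9200"]
```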

Cheers,
Mike


(Chris Earle) #4

Hi Michael,

Indeed! Only the default local exporter purges indices. The HTTP exporter assumes it may be shipping to a shared monitoring cluster, and therefore does not purge indices, so that one cluster's retention needs don't stomp on another's.
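As a side note, if the 7-day default doesn't suit you, the local exporter's retention window is configurable (if I recall the 2.x setting name correctly):

```yaml
# Retention window for the Marvel cleaner (default is 7d)
marvel.history.duration: 14d
```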

Thanks,
Chris

