Old .monitoring indices aren't being deleted

I'm having a similar issue to the one in this topic, where old monitoring indices aren't being deleted:

As far as I know, though, we are using exporters of type `local`, because the monitoring indices are on the same nodes as our other indices. We are running Elasticsearch 6 with a basic license.
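For what it's worth, you can ask the cluster itself which exporters it thinks are configured. This is just a sketch, assuming the cluster is reachable on `localhost:9200` without auth:

```sh
# Show any explicitly configured monitoring exporters (persistent/transient)
# plus the built-in defaults. If nothing is set under persistent/transient,
# the implicit "default_local" local exporter is in use.
curl -s 'localhost:9200/_cluster/settings?include_defaults=true&filter_path=*.xpack.monitoring.exporters'
```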

We actually run Elasticsearch in many different environments, and this issue is only happening in one of them. In the other environments the monitoring indices are deleted after 7 days.

I feel like this Elasticsearch cluster got into a weird state somehow, since it's working fine in all the other environments. Is there something I can check/verify to get this working again? I know I could simply set up Curator to run via a cron job, because we already do that for our non-monitoring indices.
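If I do fall back to Curator, I imagine the action file would look something like this. A sketch only: the file name is made up, and it assumes the same 7-day retention the built-in cleaner uses:

```yaml
# delete_monitoring.yml -- hypothetical Curator action file
actions:
  1:
    action: delete_indices
    description: "Delete .monitoring-* indices older than 7 days"
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: .monitoring-
      - filtertype: age
        source: creation_date
        direction: older
        unit: days
        unit_count: 7
```

Run from cron with something like `curator --config config.yml delete_monitoring.yml`.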

I updated the title to reflect that this is exclusive to the .monitoring-* indices.

I don't have any clear insight into why they might not be deleted automatically, unless the settings that enable/disable this behavior have been changed, or a longer-than-7-day retention period has been configured.
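The retention period is controlled by `xpack.monitoring.history.duration`, which defaults to `7d`. A quick way to check whether it has been overridden, assuming the cluster is on `localhost:9200`:

```sh
# Show the effective retention period. With include_defaults=true the default
# value appears under "defaults" unless something has overridden it.
curl -s 'localhost:9200/_cluster/settings?include_defaults=true&filter_path=*.xpack.monitoring.history.duration'
```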


I'm pretty sure you can't configure these settings with a basic license anyway?

In our elasticsearch.yml we don't have any monitoring-related settings set.
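For reference, if those settings were present they would look something like this in elasticsearch.yml (the values here are made up for illustration; neither line is in our config):

```yaml
# Hypothetical examples -- not present in our elasticsearch.yml:
xpack.monitoring.collection.interval: 10s
xpack.monitoring.history.duration: 14d
```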

I'm not sure what's happened, then.

Okay, I just found another piece of the puzzle. For some reason we are getting a license-expired warning (I'm assuming some X-Pack thing?) in this environment and not in the others. A key line of the license-expired output in the Elasticsearch logs is:

The agent will stop automatically cleaning indices older than [xpack.monitoring.history.duration]

So that is probably what is causing the .monitoring indices not to be deleted. I'll look into the license-expired issue and see what I can figure out.
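In case it helps anyone else hitting this: you can check the current license state with the license API. A sketch assuming 6.x (where the endpoint lives under `_xpack`) and a cluster on `localhost:9200`:

```sh
# Show license type, status, and expiry. A "status" of "expired" would explain
# the agent disabling automatic cleanup of old monitoring indices.
curl -s 'localhost:9200/_xpack/license'
```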
