Monitoring Data Deleted After 7 Days

I've recently upgraded to ES 7.13.x from 6.8. I currently have two clusters: a production cluster and a monitoring cluster. The production cluster sends monitoring data to the monitoring cluster via an HTTP exporter. On version 6.8 this data was deleted every 60 days; however, after upgrading, the data is being deleted after only 7 days. I am running a basic license and see that the default for xpack.monitoring.history.duration is 7 days, but there is a note that this value only affects data provided via the local exporter. I've checked that I don't have any ILM policies affecting these indices. Any insight into this issue would be greatly appreciated!
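In case it helps, the effective value can be confirmed with the cluster settings API; this is just a sketch, and include_defaults reports the built-in 7d default when nothing overrides it:

GET _cluster/settings?include_defaults=true&filter_path=**.monitoring.history.duration

If the setting has not been overridden through the API, it will only show up under the defaults section.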

It sounds like the monitoring cluster may be monitoring itself using the local exporter.

In the dedicated monitoring cluster, check to see if you have these settings:

# my monitoring data should only contain what the production cluster has exported using the http exporter
# make sure the monitoring cluster is not monitoring itself
xpack.monitoring.collection.enabled: false

# use the cleaning service to remove data from the production cluster older than 60d
xpack.monitoring.history.duration: 60d
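
Also worth checking: xpack.monitoring.collection.enabled is a dynamic setting, so a persistent or transient value set through the cluster settings API would override whatever is in elasticsearch.yml. A rough sketch of how to look for that and, if needed, pin the value explicitly:

# any persistent/transient value shown here wins over elasticsearch.yml
GET _cluster/settings

PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": false
  }
}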

I think this concept is explained here: Local exporters | Elasticsearch Guide [7.13] | Elastic
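For the production side of the pipeline, the HTTP exporter usually amounts to a few lines in that cluster's elasticsearch.yml along these lines (a sketch only; the exporter name and host are placeholders):

# production cluster: ship monitoring data to the dedicated monitoring cluster
xpack.monitoring.collection.enabled: true
xpack.monitoring.exporters.remote_monitoring.type: http
xpack.monitoring.exporters.remote_monitoring.host: ["http://monitoring-cluster:9200"]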

That is the same conclusion I have reached, although I have had xpack.monitoring.collection.enabled set to false on the monitoring cluster since before the upgrade from 6.8 to 7.13.

When the cluster first restarted, I had an issue with the monitoring cluster receiving monitoring data from both the monitoring and production clusters, which was a problem because we are running on a basic license. After a few hours and a few restarts of the monitoring cluster, it stopped reporting to itself and recognized that it was meant to be monitoring the production cluster.

I've also made a similar change to the history duration to see if that helps, but I have to wait for the new indices created with the higher duration to near their expiration (I'm treating the change like an index template or ILM change that only applies to newly created indices).

Is there anywhere I would be able to dig into the monitoring indices to see which exporter is creating them, or where I could see when and why they are being cleaned up?

The xpack.monitoring.history.duration documentation states that it only affects monitoring indices created by the local exporter, yet for some reason the data sent from my production cluster is being deleted. Am I doing something wrong with my exporters? Do they need to be distinguished somehow from the monitoring cluster's local exporter, which is supposed to be disabled?

On my Kibana cluster I have both xpack.monitoring.collection.enabled and xpack.monitoring.elasticsearch.collection.enabled set to false.
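
Not an authoritative answer, but two places that should show when the cleanup happens: the monitoring cluster's server log (the exporter's cleaner logs index deletions at INFO level) and the creation dates of the monitoring indices themselves, e.g.:

# list the monitoring indices with their creation dates
GET _cat/indices/.monitoring-*?v&h=index,creation.date.string,docs.count

Grepping the server log for LocalExporter or "deleting index" should show which component removed an index and when.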

I was able to find these logs. It seems that the local exporter is still running even though xpack.monitoring.collection.enabled is set to false. Does anyone know a way to disable the local exporter?

[2021-08-29T18:59:38,666][INFO ][o.e.c.m.MetadataCreateIndexService] [{server name}] [.monitoring-es-7-2021.08.30] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[0]
[2021-08-29T20:00:00,001][INFO ][o.e.x.m.e.l.LocalExporter] [{server name}] cleaning up [1] old indices
[2021-08-29T20:00:00,002][INFO ][o.e.c.m.MetadataDeleteIndexService] [{server name}] [.monitoring-es-7-2021.08.23/yPGqYCebTYWsvWmid4PwvQ] deleting index

Issue potentially fixed. From the logs above, the local exporter seemed to be running on my monitoring cluster even though the cluster was configured not to collect any data. I changed elasticsearch.yml to explicitly use an HTTP exporter, and it seems to have worked: I now have 8 days' worth of monitoring data, as opposed to the 7-day limit I was hitting with the local exporter.
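In case it helps anyone else, the change amounts to defining an exporter explicitly in the monitoring cluster's elasticsearch.yml instead of relying on the implicit default local exporter. The sketch below is illustrative only; the exporter name and host are placeholders rather than my exact config:

# monitoring cluster: keep self-collection off and define the exporter explicitly
xpack.monitoring.collection.enabled: false
xpack.monitoring.exporters.explicit_http.type: http
xpack.monitoring.exporters.explicit_http.host: ["http://localhost:9200"]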
