Monitoring-es indices growing by more than 10 GB a day

Hello,

I have a question. I have several clusters running, and everything works except one cluster.

Monitoring for Elasticsearch is enabled with all default values, but my 3-node cluster (7.1.0) writes about 10 GB of monitoring data a day.
My other clusters (7.0.1) only write about 1 GB a day.

I did not find any breaking changes. Am I missing a new option?
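
For reference, this is how I have been checking the growth; a rough sketch using the _cat indices API (assuming the default .monitoring-es-7-* index naming):

GET _cat/indices/.monitoring-es-*?v&h=index,docs.count,store.size&s=index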

@logger

Hmm, I can't think of anything offhand that would cause this. I assume you checked the logs on the cluster for errors? Could we see a sanitized copy of the settings for the 7.1 cluster?

No errors or warnings in the logs.

bootstrap.memory_lock: false
cluster.name: cluster
discovery.seed_hosts: ["host1","host2"]

http.port: 9200
node.data: true
node.ingest: true
node.master: true
node.max_local_storage_nodes: 1
node.name: host2
path.data: F:\elastic\data
path.logs: F:\elastic\logs
transport.tcp.port: 9300
network.host: 0.0.0.0

xpack.watcher.enabled: true
xpack.monitoring.enabled: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: F:\elastic\config\certs\elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: F:\elastic\config\certs\elastic-certificates.p12
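
Since elasticsearch.yml only contains the defaults, I also dumped the effective cluster settings and searched the output for xpack.monitoring, just to confirm nothing was overridden at runtime (a sketch):

GET _cluster/settings?include_defaults=true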

OK, I forgot to mention that this is the only Windows cluster. I think this info is important.

Ah, interesting. Do you have other 7.1 clusters which are behaving as expected, and is it just the Windows cluster that is emitting data at a higher rate?

I have only the Windows cluster on 7.1.0;
the others are Debian and on version 7.0.1. I will try to upgrade one Debian cluster to 7.1.0 and see what happens.

And it is only .monitoring-es; the Logstash and Kibana monitoring indices are behaving normally.
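
To compare, I sorted all the monitoring indices by size (sketch, assuming the default index naming); only the .monitoring-es-* indices are large, the .monitoring-logstash-* and .monitoring-kibana-* ones stay small:

GET _cat/indices/.monitoring-*?v&h=index,store.size&s=store.size:desc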

OK, it looks like this resolved itself.

This cluster was a 3-node test cluster on Windows Server 2016
with way too many shards and indices: about 1600 shards per node.

Now, after a cleanup down to 60 shards, everything is running well.
Does Elasticsearch monitor other events and put them in the .monitoring-es index?
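
For anyone who hits the same thing, this is how I checked the per-node shard count before and after the cleanup (a sketch):

GET _cat/allocation?v&h=node,shards,disk.indices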

Hi @logger. Glad you got it working. Elasticsearch stores a variety of metrics in the .monitoring-es index, including per-index and per-shard statistics. Since it was collecting data on all of those shards, that probably explains the increase in data usage.
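
If the monitoring volume ever becomes a problem again, the collection interval and retention can also be tuned dynamically; a sketch, assuming the 7.x defaults of 10s and 7d (please verify against the docs for your version):

PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.interval": "30s",
    "xpack.monitoring.history.duration": "3d"
  }
}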

