Why do the .monitoring-* indices have such a large data size?

I have 11 Elasticsearch nodes: 3 master nodes, 6 data nodes, and 2 coordinating nodes. We are running the latest version of Elasticsearch, 7.13.2.
We have installed and configured Metricbeat on every Elasticsearch node to monitor our ELK stack, and we have observed that the .monitoring-es-* indices hold a lot of data, roughly 100-200 GB each, while the .monitoring-logstash-* indices are much smaller, and the same goes for the Kibana ones.
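The listings below are the output of the cat indices API; for reference, they can be reproduced with something like the following (the index pattern and sort parameter are just my choice here):

GET _cat/indices/.monitoring-*-mb-*?v&s=index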

health status index                          uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .monitoring-es-7-mb-2021.07.04 cZ0Rq2QSTsWaSVkYbUqCcA   1   1   93685117            0    135.4gb         67.7gb
green  open   .monitoring-es-7-mb-2021.07.05 soF59jJFTYqxUHagyxAH3g   1   1   94039120            0    137.5gb         68.7gb
green  open   .monitoring-es-7-mb-2021.07.06 7C9h4JdiSqqArvIq_KRmMQ   1   1   88497612            0    126.9gb         63.4gb
green  open   .monitoring-es-7-mb-2021.07.07 Z7Q53VgKSnm50Co1mOgPrw   1   1   26045340            0     39.4gb         20.8gb
green  open   .monitoring-es-7-mb-2021.07.01 34OMsmgVRruMjq5-E0UqXQ   1   1   91449387            0      133gb         66.5gb
green  open   .monitoring-es-7-mb-2021.07.02 TD848mdHRxSPzr9rL8p8ow   1   1   92942331            0    134.9gb         67.4gb
green  open   .monitoring-es-7-mb-2021.07.03 Jy4pGaFvQUyuwtGdYfDE-w   1   1   93367837            0    135.2gb         67.6gb

health status index                                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .monitoring-logstash-7-mb-2021.07.07 aP4IvdvQQmWGqCxMk96nYg   1   1    1471408            0    164.9mb         82.5mb
green  open   .monitoring-logstash-7-mb-2021.07.05 _ahnIYIRTbihc5gGC_-7Wg   1   1    5819446            0    690.3mb        345.5mb
green  open   .monitoring-logstash-7-mb-2021.07.06 YPNxCJKjRByDTMC9HdbYOg   1   1    5271822            0    594.4mb          297mb
green  open   .monitoring-logstash-7-mb-2021.07.03 i66BsXz6SvmUFT0fP14E-Q   1   1    5806084            0    680.6mb        340.7mb
green  open   .monitoring-logstash-7-mb-2021.07.04 y6WR6VAnTuanZXCUaOxB0A   1   1    5806084            0    680.3mb        341.2mb
green  open   .monitoring-logstash-7-mb-2021.07.01 XHzk_U6XSuK2QCNMVfiQhA   1   1    5806084            0    682.4mb        340.4mb
green  open   .monitoring-logstash-7-mb-2021.07.02 gUtSibBZTCaHIxTLmt2xJw   1   1    5806084            0    685.1mb          342mb

health status index                              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .monitoring-kibana-7-mb-2021.07.06 JiQhlpMnT32mSBcezPPEvA   1   1      16410            0      6.9mb          3.4mb
green  open   .monitoring-kibana-7-mb-2021.07.07 Cs8bWPEXT_Op-86Iw1p9dw   1   1       4408            0        2mb            1mb
green  open   .monitoring-kibana-7-mb-2021.07.01 -_styBN1R3ybRkUZnkUapw   1   1      17280            0      6.8mb          3.4mb
green  open   .monitoring-kibana-7-mb-2021.07.04 HWKDJECvRYWCZj5JljKqPA   1   1      17280            0      6.7mb          3.4mb
green  open   .monitoring-kibana-7-mb-2021.07.05 -LFY0z1qQEmTDZJ8KnSFUA   1   1      17280            0      7.1mb          3.5mb
green  open   .monitoring-kibana-7-mb-2021.07.02 mIf16DvcRKGkdmcLfpWkuw   1   1      17280            0      6.9mb          3.4mb
green  open   .monitoring-kibana-7-mb-2021.07.03 rpWPBe3oRrSnam1_NpumIw   1   1      17280            0      6.7mb          3.3mb

We have enabled the elasticsearch-xpack module in Metricbeat.
Here is elasticsearch-xpack.yml:

# Module: elasticsearch
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.10/metricbeat-module-elasticsearch.html

- module: elasticsearch
  xpack.enabled: true
  period: 10s
  metricsets:
    - cluster_stats
    - index
    - index_recovery
    - index_summary
    - node
    - node_stats
    - pending_tasks
    - shard
  hosts:
    - "https://xx.xx.xx.xx:9200" #em1
    - "https://xx.xx.xx.xx:9200" #em2
    - "https://xx.xx.xx.xx:9200" #em3
    - "https://xx.xx.xx.xx:9200" #ec1
    - "https://xx.xx.xx.xx:9200" #ec2
    - "https://xx.xx.xx.xx:9200" #ed1
    - "https://xx.xx.xx.xx:9200" #ed2
    - "https://xx.xx.xx.xx:9200" #ed3
    - "https://xx.xx.xx.xx:9200" #ed4
    - "https://xx.xx.xx.xx:9200" #ed5
    - "https://xx.xx.xx.xx:9200" #ed6
  scope: cluster
  ssl.certificate_authorities: ["/etc/elasticsearch/certs/ca/ca.crt"]
  username: "xxxx"
  password: "********"

Is there any way to control the size of the .monitoring-es-* indices?

I think you need to verify which metricsets you really need. Please also check whether the data is equally distributed across all ES instances (maybe there is one faulty instance that produces more metrics than the others).

Are these ES instances heavily loaded?

Which metricsets should I monitor? Here is the current disk allocation (a sketch for checking which document types dominate the monitoring index follows below):

shards disk.indices disk.used disk.avail disk.total disk.percent host ip             node
    27      445.1gb   453.8gb    443.1gb      897gb           50 ed3  xx.xx.xx.xx ed3
    27      408.1gb   415.6gb    481.4gb      897gb           46 ed2  xx.xx.xx.xx  ed2
    28      582.8gb   590.1gb    306.8gb      897gb           65 ed4  xx.xx.xx.xx  ed4
    27      370.2gb   378.4gb    518.6gb      897gb           42 ed1  xx.xx.xx.xx  ed1
    27      399.5gb   406.5gb    490.5gb      897gb           45 ed6  xx.xx.xx.xx  ed6
    28      336.7gb   344.3gb    552.7gb      897gb           38 ed5  xx.xx.xx.xx  ed5

Sometimes, but I would say they run at around 50% load.
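To see which metricsets actually dominate one of the big daily indices, a terms aggregation over the document type can help. A minimal sketch, assuming the Metricbeat-created documents carry the same type field as internally collected monitoring documents (on clusters with many indices and shards, the shards and index_stats types are usually the biggest contributors):

GET .monitoring-es-7-mb-2021.07.06/_search
{
  "size": 0,
  "aggs": {
    "docs_per_type": {
      "terms": { "field": "type", "size": 20 }
    }
  }
}

The bucket counts show which document types account for most of the roughly 90 million documents per day.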

Can anyone tell me why it is consuming so much storage?
I've changed the elasticsearch-xpack.yml file:


# Module: elasticsearch
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.10/metricbeat-module-elasticsearch.html

- module: elasticsearch
  xpack.enabled: true
  period: 10s
  #metricsets:
  #- cluster_stats
  #- index
  #- index_recovery
  #- index_summary
  #- node
  #- node_stats
  #- pending_tasks
  #- shard
  hosts:
    - "https://xx.xx.xx.xx:9200" #em1
    - "https://xx.xx.xx.xx:9200" #em2
    - "https://xx.xx.xx.xx:9200" #em3
    - "https://xx.xx.xx.xx:9200" #ec1
    - "https://xx.xx.xx.xx:9200" #ec2
    - "https://xx.xx.xx.xx:9200" #ed1
    - "https://xx.xx.xx.xx:9200" #ed2
    - "https://xx.xx.xx.xx:9200" #ed3
    - "https://xx.xx.xx.xx:9200" #ed4
    - "https://xx.xx.xx.xx:9200" #ed5
    - "https://xx.xx.xx.xx:9200" #ed6
  scope: cluster
  ssl.certificate_authorities: ["/etc/elasticsearch/certs/ca/ca.crt"]
  username: "xxxx"
  password: "*****"
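A side note on this change: my understanding from the Metricbeat docs is that when xpack.enabled: true is set, the module collects a fixed set of metricsets for Stack Monitoring, so commenting out the metricsets list probably does not change what gets shipped. To check whether ingest actually drops after a config change, the document count of the monitoring indices can be watched, for example:

GET _cat/count/.monitoring-es-7-mb-*?v

Running this against a single day's index before and after the change makes the comparison easier.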

I am also pasting the disk allocation details, because I can't find the root cause of this issue.

shards disk.indices disk.used disk.avail disk.total disk.percent host ip             node
    30      602.4gb   610.4gb    286.5gb      897gb           68 ed5  xx.xx.xx.xx ed5
    30      474.9gb     482gb      415gb      897gb           53 ed2  xx.xx.xx.xx ed2
    31      614.4gb   622.3gb    274.7gb      897gb           69 ed4  xx.xx.xx.xx ed4
    30      611.4gb   620.4gb    276.6gb      897gb           69 ed1  xx.xx.xx.xx ed1
    30      592.9gb     600gb      297gb      897gb           66 ed3  xx.xx.xx.xx ed3
    31      480.8gb   489.6gb    407.3gb      897gb           54 ed6  xx.xx.xx.xx ed6

The .monitoring-es-7-mb-* indices are consuming 150 GB of data, sometimes 200 GB; we need to control this.

health status index                          uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .monitoring-es-7-mb-2021.07.22 3HPlFflpQhyOKmWjffeeVA   1   1  100754204            0    150.4gb         75.2gb
green  open   .monitoring-es-7-mb-2021.07.23 tygyvVn2QK2Ci2pNV82RPg   1   1  101040905            0    151.1gb         75.5gb
green  open   .monitoring-es-7-mb-2021.07.24 fTngkDBjTx2OfeXkRJa6uw   1   1  101039766            0    150.9gb         75.5gb
green  open   .monitoring-es-7-mb-2021.07.25 cQ55nE3UQi-e3C1YoxnLxw   1   1   66998162            0    112.8gb         56.2gb
green  open   .monitoring-es-7-mb-2021.07.20 oQXz7TYxTkG7gxxFzzhPiA   1   1   97679349            0    145.5gb         72.7gb
green  open   .monitoring-es-7-mb-2021.07.21 rFo_ZBfFSAu7wa3ozPP2bA   1   1   99654521            0      148gb           74gb
green  open   .monitoring-es-7-mb-2021.07.19 nAPCY-ivSEG5F1bOfO5jeA   1   1   95289281            0    141.5gb         70.7gb
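The two levers I know of for keeping this under control are the Metricbeat period (raising it from 10s to, say, 60s reduces the document count roughly six-fold) and the monitoring retention in Elasticsearch, which defaults to 7 days. Below is a minimal sketch of shortening the retention; it assumes the built-in cleaner also applies to the Metricbeat-created -mb indices on 7.13, which is worth verifying after a day or two:

PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.history.duration": "3d"
  }
}

If the cleaner does not pick up the -mb indices, the old dailies can simply be deleted by hand (e.g. DELETE .monitoring-es-7-mb-2021.07.19) or on a schedule with a tool such as Curator.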
