ELK cluster monitoring with Metricbeat

I am trying to set up my cluster with Metricbeat monitoring.
I followed the documentation and set everything up, but all my nodes just keep changing.
What am I missing?

elasticsearch.yml on all systems:
xpack.monitoring.collection.enabled: true
xpack.monitoring.history.duration: 30d
xpack.monitoring.collection.interval: 60s

kibana.yml
xpack.monitoring.min_interval_seconds: 60

Enabled the modules on all systems:
[root@houelktst03 modules.d]# ls -la |grep -v disable
-rw-r--r-- 1 root root 265 Sep 19 16:10 elasticsearch-xpack.yml
-rw-r--r-- 1 root root 257 Sep 19 15:42 kibana.yml
-rw-r--r-- 1 root root 262 Sep 19 15:42 logstash.yml
-rw-r--r-- 1 root root 822 Sep 19 15:42 system.yml
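
For reference (not part of the original post), the enabled elasticsearch-xpack.yml module file in Metricbeat 7.x usually looks something like the following; the hosts, username, and password values here are placeholders, not values taken from the post above:

- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ["http://localhost:9200"]
  #username: "user"
  #password: "secret"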

Any clues or ideas?

Hey @elasticforme

Can you please send me the results of the following:
GET _cat/indices

We identify Metricbeat usage through the index name. My guess is that you also have
xpack.monitoring.elasticsearch.collection.enabled set to true, and are thus getting both types of indices.
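
For example, listing just the monitoring indices makes this easy to see; names containing -mb- were written by Metricbeat, while the ones without it come from the legacy internal collectors (the column selection below is only a suggestion):

GET _cat/indices/.monitoring-*?v&h=index,docs.count,store.size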

Here are some sources to help with the setup configuration:


It is not a remote monitoring cluster; the same cluster is monitoring itself. This is a test environment.

green open .monitoring-kibana-7-2020.09.19    4trDAXCtSMmEfq5e5p7mrg 1 1   8480    0     3mb    1.5mb
green open .kibana-event-log-7.9.1-000001     iU8ulTDfTyuCaT4qe_fXiQ 1 1     10    0  66.3kb   33.1kb
green open .apm-agent-configuration           ZVC0Vde-Tom13vXOEo3W4Q 1 1      0    0    416b     208b
green open .monitoring-es-7-mb-2020.09.19     X0_gJI8CTNGi8KKffaVucw 1 1   5267    0   9.1mb    4.5mb
green open .kibana_2                          -5FfxVZ7TWeYaVs8zQ9svQ 1 1    204    5  21.2mb   10.6mb
green open .monitoring-beats-7-2020.09.19     MInRfFpSQUawNnnfq3YSxA 1 1  10015    0  10.7mb    5.4mb
green open .kibana_1                          15dlaM-CTNmhPhEofCwQhQ 1 1     56    5  20.8mb   10.4mb
green open .security-7                        4CxH97t9SF29i5RopZh5pg 1 1     49    0 226.7kb  102.6kb
green open .monitoring-es-7-mb-2020.09.21     SU50SWA5QLadklc7CwIRvw 1 1  25484    0  38.9mb   19.5mb
green open .apm-custom-link                   kqQGCMFmT0C1rUIsR0NQeA 1 1      0    0    416b     208b
green open .monitoring-es-7-mb-2020.09.20     BmHQeS9IRD2vKQFoSdKu0A 1 1  35822    0  58.4mb   28.9mb
green open .kibana_task_manager_1             y9Yf90M5RCKj9sbRZztnJg 1 1      6 7653   1.9mb 1004.8kb
green open .monitoring-es-7-2020.09.19        1kDaxbx3Q1yfy2dV2BOqtw 1 1    989  788     2mb    1.1mb
green open .monitoring-kibana-7-2020.09.21    AC9Pz11sQdGw8WOGfNT8MA 1 1  32810    0   9.9mb    4.9mb
green open .monitoring-beats-7-2020.09.21     Gl51gsn9QuCBW1BXwD72gg 1 1  14117    0  14.6mb    7.2mb
green open .monitoring-kibana-7-2020.09.20    R1HB6y5tSZWzrVfIza4s4w 1 1  51834    0  15.9mb    7.9mb
green open metricbeat-7.9.1-2020.09.19-000001 F9sJ_IidQ7SNrf5uE-ir_Q 1 1 264878    0 125.3mb   62.6mb
green open .monitoring-beats-7-2020.09.20     aBgATE5YQuuCknovo3KhcA 1 1  30240    0  29.7mb   14.7mb


cat /etc/elasticsearch/elasticsearch.yml  |grep xpack.monitoring
xpack.monitoring.collection.enabled: true
xpack.monitoring.history.duration: 30d
xpack.monitoring.collection.interval: 60s

It isn't recommended to run Metricbeat on the same node/machine as the Elasticsearch stack (in production). This can result in multiple collectors running at the same time (if the default collection is not disabled). You can either "simulate" separate nodes with Docker containers and xpack.monitoring.elasticsearch.collection.enabled: false, or adjust your current environment so it doesn't collect local metrics on that same machine:

PUT _cluster/settings
{
  "persistent": {
    "xpack": {
      "monitoring": {
        "elasticsearch": {
          "collection": {
            "enabled": false
          }
        },
        "exporters": {
          "__no-default-local__": {
            "type": "local",
            "enabled": false
          }
        }
      }
    }
  }
}

and then delete any non-Metricbeat ES monitoring indices (the ones that don't have -mb- in the name), e.g.:
DELETE .monitoring-es-7-2020.09.19*

I did everything it says, but I still can't make this work. The concept isn't clicking for me.

On the production cluster I did this:

PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.elasticsearch.collection.enabled": false
  }
}

[root@elkdev01 metricbeat]# cat metricbeat.yml |grep -v '#' |sed '/^$/d'
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.elasticsearch:
  hosts: ["http://elktst01:9200"]  --> this is my monitoring cluster node
  username: "elastic"
  password: "elastic"

I am so confused now.

I want to monitor dev01 with tst01. As you suggested, I set up a separate monitoring cluster.
Where do I go in Kibana to check it: on dev01 or on tst01?

OK, finally everything is working after testing different things. I now understand how it works.

Basically, you send your production monitoring data to the cluster that is going to monitor the production cluster,
and on the system that acts as the monitoring cluster you need to disable all collection and monitoring.

One piece I was missing was to set monitoring.ui.elasticsearch.hosts: monitoringnode:9200 in kibana.yml on the production cluster.
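
Putting it together, a minimal sketch of the final layout, reusing the hostnames from earlier in the thread (elkdev01 = monitored production node, elktst01 = monitoring cluster node); treat it as an illustration rather than a verified config:

# metricbeat.yml on elkdev01 -- ship monitoring data to the monitoring cluster
output.elasticsearch:
  hosts: ["http://elktst01:9200"]

# kibana.yml on the production cluster -- point the Stack Monitoring UI at the monitoring cluster
monitoring.ui.elasticsearch.hosts: ["http://elktst01:9200"]

On the production cluster itself, collection stays enabled (xpack.monitoring.collection.enabled: true) while the internal collectors are turned off (xpack.monitoring.elasticsearch.collection.enabled: false), as in the PUT _cluster/settings calls above.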
