Kibana Stack Monitoring page not showing data from Metricbeat

I have been trying to set up a separate monitoring cluster for collecting monitoring data from the production cluster. I have installed Metricbeat on each of the production cluster's nodes, and Metricbeat is successfully shipping the metrics to the monitoring cluster. I have a separate Kibana instance that is part of the monitoring cluster; a coordinating node is configured on the same host, and the Elasticsearch source Kibana uses for monitoring data is

http://0.0.0.0:9200

However, I am not seeing any data on Kibana's monitoring page. I only see the default page that asks me to enable monitoring.

Is there anything that I am missing?

NOTE: the production cluster has security enabled but the monitoring cluster does not.

A few more details:

Kibana version: 7.7.0
Elasticsearch: 7.7.0
Metricbeat: 7.7.1

License: Basic

I am seeing the page below (screenshot attached), but no data.

The monitoring indices do have data (screenshot attached).


Hey @souravsahoo,

It's hard to tell what the problem might be without seeing your cluster settings and yml files, but it looks like you do have the correct monitoring ES indices with -mb- in the name. I think you might have missed the step where you need to set xpack.monitoring.collection.enabled: true.

Here is an overall setup process: https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-metricbeat.html
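For reference, xpack.monitoring.collection.enabled is a dynamic cluster setting on the production cluster, so besides elasticsearch.yml it can also be applied at runtime, e.g.:

PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}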

@Igor_Zaytsev, Thanks for replying. I have followed the same doc and I don't think I missed any step. Below are excerpts from the YAML files in use.

Production ES settings (data and master nodes)

xpack.monitoring.elasticsearch.collection.enabled: false
xpack.monitoring.collection.enabled: true

The above settings are applied on each of the master and data nodes.

NOTE: X-Pack security is enabled in the production cluster

The separate monitoring cluster has 3 master nodes, 2 data nodes, and 1 coordinating node. The coordinating node and Kibana run on the same machine.

There are no monitoring settings applied on any of the nodes in the monitoring cluster, except on Kibana; below are its settings:

server.host: "0.0.0.0"
elasticsearch.hosts: ["http://0.0.0.0:9200"]
monitoring.ui.enabled: true
xpack.monitoring.kibana.collection.enabled: false

NOTE: X-Pack security is not enabled on the monitoring cluster

The Kibana instance is part of the monitoring cluster.

Below is metricbeat.yml (Metricbeat is running on every node of the production ES cluster):

metricbeat.config.modules:
path: ${path.config}/modules.d/*.yml
reload.enabled: false

setup.template.settings:
index.number_of_shards: 1
index.codec: best_compression
#_source.enabled: false

setup.kibana:
host: "x.x.x.x:5601"

output.elasticsearch:
# Array of hosts to connect to.
hosts: ["x.x.x.x:9200"]

processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
- add_docker_metadata: ~
- add_kubernetes_metadata: ~

elasticsearch-xpack.yml

Let me know if any more info would help.

For reference, I followed the docs linked above.

Hi @Igor_Zaytsev, could you please give me some pointers on this? There might be some misconfiguration that I am unable to catch, but as far as I can tell I have followed the docs correctly. I'd appreciate any help on this.

@Igor_Zaytsev - Could this be a license issue? I have a Basic license.

Can you paste the YAML as a properly formatted Markdown code block? The indentation in the one you have posted is garbled, and this is often just a simple indentation problem.

Also, don't forget to run setup in Metricbeat once your indentation is correct.
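The setup step is just Metricbeat's setup subcommand, run once from one of the Metricbeat hosts, e.g.:

metricbeat setup -e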

Hi @Mario_Castro, sorry about the indentation problem above. I have pasted the YAML files again with better readability and proper indentation.

metricbeat.yml

#==========================  Modules configuration ============================

metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  host: "x.x.x.x:5601"

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["x.x.x.x:9200"]

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

#================================ Logging =====================================
monitoring.enabled: true
monitoring.elasticsearch:
  hosts: ["x.x.x.x:9200"]

elasticsearch-xpack.yml

- module: elasticsearch
  metricsets:
    - ccr
    - cluster_stats
    - enrich
    - index
    - index_recovery
    - index_summary
    - ml_job
    - node_stats
    - shard
  period: 10s
  hosts: ["https://production-cluster-dns:9200"]
  username: "elastic"
  password: "*****"
  xpack.enabled: true

kibana.yml (the Kibana instance that is part of the monitoring cluster)

server.host: "0.0.0.0"
elasticsearch.hosts: ["http://0.0.0.0:9200"]
monitoring.ui.enabled: true
monitoring.kibana.collection.enabled: false

Production ES cluster monitoring settings

xpack.monitoring.elasticsearch.collection.enabled: false
xpack.monitoring.collection.enabled: true
xpack.monitoring.enabled: true
xpack.monitoring.collection.cluster.stats.timeout: "20s"

Hi @souravsahoo,

Thanks for all the information so far. It's been very helpful!

I have two requests:

  1. Can you share any security related settings on the ES monitoring cluster? Are you omitting xpack.security.enabled? Or is it set to false, like xpack.security.enabled: false? I'm wondering if this is related to https://github.com/elastic/kibana/issues/62973

  2. Let's take a look at the ES monitoring data that lives on the monitoring cluster. Can you return the results of this query?

POST .monitoring-es-*/_search
{
  "size": 0,
  "aggs": {
    "clusters": {
      "terms": {
        "field": "cluster_uuid",
        "size": 20
      },
      "aggs": {
        "types": {
          "terms": {
            "field": "type",
            "size": 10
          },
          "aggs": {
            "last_seen": {
              "max": {
                "field": "timestamp"
              }
            }
          }
        }
      }
    }
  }
}

Hi @chrisronline, thanks for looking into this. However, I got it working, and the monitoring UI in Kibana now shows the data. The issue was with enabling the elasticsearch module: I had to explicitly provide the metricbeat.yml path while enabling the module.
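For anyone hitting the same problem, the enable command ended up looking something like this (the config path is specific to my install and may differ on yours):

metricbeat modules enable elasticsearch-xpack -c /etc/metricbeat/metricbeat.yml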

Hey @chrisronline, I am facing the same issue again. Here is the info you asked for last time.

Security-related settings in the monitoring cluster

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: certs/pkey.key
xpack.security.transport.ssl.certificate: certs/chain.pem
xpack.security.transport.ssl.certificate_authorities: [ "certs/Root.pem" ]
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.verification_mode: certificate
xpack.security.http.ssl.key: certs/pkey.key
xpack.security.http.ssl.certificate: certs/chain.pem
xpack.security.http.ssl.certificate_authorities: [ "certs/Root.pem" ]
xpack.monitoring.elasticsearch.collection.enabled: false
xpack.monitoring.collection.enabled: true
xpack.monitoring.enabled: true
xpack.monitoring.collection.cluster.stats.timeout: "20s"

ES monitoring data on the monitoring cluster, query result:

{
  "took" : 417,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 624,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "clusters" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : "Rue_OemsQiiZWA6ARmLZsw",
          "doc_count" : 624,
          "types" : {
            "doc_count_error_upper_bound" : 0,
            "sum_other_doc_count" : 0,
            "buckets" : [
              {
                "key" : "node_stats",
                "doc_count" : 624,
                "last_seen" : {
                  "value" : 1.593726083966E12,
                  "value_as_string" : "2020-07-02T21:41:23.966Z"
                }
              }
            ]
          }
        }
      ]
    }
  }
}

Could you please help?

NOTE: The difference this time is that I have enabled security on the monitoring cluster too.
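For completeness, with TLS and authentication now on the monitoring cluster's HTTP layer, my understanding is the output.elasticsearch section of metricbeat.yml has to use HTTPS and credentials, roughly like this (host, user, and CA path are placeholders, not my exact values):

output.elasticsearch:
  hosts: ["https://x.x.x.x:9200"]                      # monitoring cluster over HTTPS
  username: "remote_monitoring_user"                   # placeholder user with monitoring privileges
  password: "*****"
  ssl.certificate_authorities: ["certs/Root.pem"]      # CA that signed the cluster's HTTP certificate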

Thanks in advance!

Hi @souravsahoo,

It looks like Metricbeat is only pulling a single metricset (node_stats) when it should be pulling the ones in your elasticsearch-xpack.yml file:

metricsets:
    - ccr
    - cluster_stats
    - enrich
    - index
    - index_recovery
    - index_summary
    - ml_job
    - node_stats
    - shard

Can you double-check for any errors in the Metricbeat log file? Then, also double-check the configuration of the elasticsearch-xpack module to ensure all metricsets are included.
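If it helps, the module state and overall config can be sanity-checked from one of the production nodes with something like:

metricbeat modules list
metricbeat test config
metricbeat test output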

Hi @chrisronline, I was able to fix the above issue, and cluster_stats has started flowing to the monitoring cluster. However, I am still unable to see the monitoring data on the monitoring cluster's monitoring page. There are no errors in the Metricbeat log file either.

The configuration of the elasticsearch-xpack module looks fine to me. In the monitoring cluster, all the expected indices are being created and data is being collected. The only issue is the monitoring page, where I do not see any data.

Any help on this?

@chrisronline, one finding: the monitoring UI should identify the cluster_uuid of the production cluster whose data it is holding. However, the cluster_uuid I see in the URL query param is that of the monitoring cluster.
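For reference, the cluster_uuid reported by each cluster can be checked with the root endpoint and compared against the one in the Monitoring URL, e.g.:

GET /

The response includes a cluster_uuid field for that cluster.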

Production cluster's elasticsearch monitoring settings

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: false
xpack.monitoring.collection.enabled: true
xpack.monitoring.collection.cluster.stats.timeout: "20s"

Monitoring cluster's elasticsearch monitoring settings

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: false
xpack.monitoring.collection.enabled: true
xpack.monitoring.collection.cluster.stats.timeout: "20s"

Production cluster's kibana monitoring settings

monitoring.kibana.collection.enabled: false

Monitoring cluster's kibana monitoring settings

monitoring.ui.enabled: true
monitoring.kibana.collection.enabled: false
monitoring.ui.elasticsearch.hosts: ["http://0.0.0.0:9200"]

@chrisronline On further analysis, it looks like the sending cluster is shipping cluster_stats, but the monitoring cluster is not receiving it. Below is the monitoring data in the monitoring cluster; it contains only the node_stats metricset.

{
  "took" : 377,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 10000,
      "relation" : "gte"
    },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "clusters" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : "95QSL0hRSnqNA3RMKl0Y3g",
          "doc_count" : 42946,
          "types" : {
            "doc_count_error_upper_bound" : 0,
            "sum_other_doc_count" : 0,
            "buckets" : [
              {
                "key" : "node_stats",
                "doc_count" : 42946,
                "last_seen" : {
                  "value" : 1.596227081693E12,
                  "value_as_string" : "2020-07-31T20:24:41.693Z"
                }
              }
            ]
          }
        }
      ]
    }
  }
}

It is strange that the data is getting lost somewhere.
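For anyone following along, a more direct check would be a filtered count along these lines (same index pattern and type field as the earlier aggregation):

POST .monitoring-es-*/_search
{
  "size": 0,
  "query": {
    "term": {
      "type": "cluster_stats"
    }
  },
  "track_total_hits": true
}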

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.