Stack Monitoring - Elasticsearch Nodes not displayed as monitored with Metricbeat

Hi,

I'm trying to set up Stack Monitoring and I went through all the steps from here: Collecting Elasticsearch monitoring data with Metricbeat | Elasticsearch Guide [8.2] | Elastic

  1. Enabled the collection of monitoring data. Check the current settings below:
    "xpack" : {
      "monitoring" : {
        "elasticsearch" : {
          "collection" : {
            "enabled" : "false"
          }
        },
        "collection" : {
          "enabled" : "true"
        }
      }
    }
  2. Installed Metricbeat on each node and enabled the elasticsearch-xpack module:
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  scope: node
  hosts: [ "https://elasticsearch01:9200", "https://elasticsearch02:9200", "https://elasticsearch03:9200"]
  username: "user"
  password: "password"
  ssl.certificate_authorities: ["/etc/metricbeat/cert.crt"]
  3. Sent the monitoring data to the same cluster:
output.elasticsearch:
  hosts: [ "https://elasticsearch01:9200", "https://elasticsearch02:9200", "https://elasticsearch03:9200"]
  protocol: "https"
  username: "user"
  password: "password"
  ssl.certificate_authorities:
    - "/etc/metricbeat/cert.crt"

  4. Started Metricbeat.
  5. Disabled the default collection of Elasticsearch monitoring metrics.
  6. Disabled the system module.

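For anyone retracing these steps, the setup can be sanity-checked from the command line. The following is a rough sketch using the endpoints and credentials from the config above (all placeholders); the cluster settings call confirms internal collection is off and Metricbeat-based collection is on, and the `metricbeat test` subcommands check the output and module config:

```shell
# Confirm the monitoring collection settings reported by the cluster
# (internal collection disabled, overall collection enabled).
curl -s -u user:password --cacert /etc/metricbeat/cert.crt \
  "https://elasticsearch01:9200/_cluster/settings?filter_path=*.xpack.monitoring.*"

# Verify that Metricbeat can reach the Elasticsearch output from metricbeat.yml.
metricbeat test output

# Verify that the elasticsearch module configuration loads and its metricsets respond.
metricbeat test modules elasticsearch
```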
However, in Stack Monitoring only some nodes are displayed as monitored; here "some" means all hot and warm nodes.

What am I missing here?
Any ideas/suggestions would be much appreciated.

Other details:

  • Elastic version: 8.2.0
  • Metricbeat version: 8.2.0
  • the cluster has 3 dedicated master nodes. Best practice in our case would be to set scope: cluster in the elasticsearch-xpack module, which requires a single cluster endpoint in hosts: [] that does not direct requests to the dedicated master nodes; basically a load balancer in front of the hot and warm nodes (in my understanding). However, since this is still not entirely clear to us, we decided to go with scope: node for now, even though it puts additional load on the elected master node.
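For reference, a sketch of the scope: cluster variant described above; the load-balancer hostname is a placeholder I am assuming, and the single endpoint would need to route only to the hot and warm nodes:

```yaml
# Hypothetical scope: cluster configuration; "es-lb" is a placeholder
# for a load balancer fronting the non-master nodes only.
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  scope: cluster
  hosts: ["https://es-lb:9200"]
  username: "user"
  password: "password"
  ssl.certificate_authorities: ["/etc/metricbeat/cert.crt"]
```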

If anyone comes across this again:

  • I noticed the following error kept repeating in the Metricbeat logs:
{"type":"mapper_parsing_exception","reason":"failed to parse field [elasticsearch.node.stats.os.cgroup.memory.limit.bytes] of type [byte] in document with id 'redacted'. Preview of field's value: 'max'","caused_by":{"type":"number_format_exception","reason":"For input string: \"max\""}}, dropping event!
  • So I went to .monitoring-es-mb Index Template and changed the type of elasticsearch.node.stats.os.cgroup.memory.limit.bytes from Numeric/Long to Keyword;
  • I rolled over the .monitoring-es-8-mb data stream;
  • Metricbeat was no longer erroring out and Node Information started to be displayed in Stack Monitoring.
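As a rough sketch, the workaround above can also be done via the API instead of the Kibana UI (endpoints and credentials are placeholders from the earlier config; the template body must be preserved, so fetch it first, edit the one field mapping, and re-submit it):

```shell
# Fetch the current template, then re-submit it with the type of
# elasticsearch.node.stats.os.cgroup.memory.limit.bytes changed to keyword
# (editing the JSON body by hand or via the Kibana UI).
curl -s -u user:password --cacert /etc/metricbeat/cert.crt \
  "https://elasticsearch01:9200/_index_template/.monitoring-es-mb"

# Roll over the data stream so new backing indices pick up the changed mapping.
curl -s -u user:password --cacert /etc/metricbeat/cert.crt \
  -X POST "https://elasticsearch01:9200/.monitoring-es-8-mb/_rollover"
```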

Thanks Andrei! We have that issue captured in [Stack Monitoring] Mapping for elasticsearch.node.stats.os.cgroup.memory.limit.bytes is incorrect · Issue #31765 · elastic/beats · GitHub - feel free to comment with any other info you think might be helpful.

Great! Don't have any additional input at the moment. Since I changed the type to keyword, Stack Monitoring seems to be working properly.

However, thanks for the link; I didn't know that was where I should have checked the field type first.

Nice! Thanks for letting me know and glad you found a workaround.

Just note that your template change could get reverted.

The template ships with Elasticsearch and can get overwritten when nodes start up.

A permanent fix will be to upgrade to the next release after the issue is closed.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.