Observing index of an Elastic node with Metricbeat not possible?

I would like to monitor an existing Elasticsearch node (ES A, running in a Docker container) to track performance for one specific index, because the node's CPU intermittently spikes to 200%-500% for a few hours at a time. It would be useful to know which action (creating/updating an index) is causing the load. Following the documentation, I set up the following scenario:
(ES A, existing, 7.17.9, Server 1 = S1) - Metricbeat MB (new, 8.7.1, S1) - ES B (new, 8.7.1, S2) - Kibana (new, 8.7.1, S2).

In Kibana, the Discover view with the metricbeat-* data view shows general data from ES_A, including data about the Docker container, but only from the "system" module in Metricbeat (enabled beside elasticsearch-xpack). If I disable the system module, no entries are written at all. In no case is any data/activity for the indices of ES_A displayed.
Is this expected behaviour, do I need additional components (e.g. Logstash), or is my configuration wrong?

I would be very grateful for your answer.


**** Metricbeat MB (new, 8.7.1, S1, ip = IP_S1) *******
 ./metricbeat modules list => Enabled:
== metricbeat.yml
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression

setup.dashboards.enabled: true

setup.kibana:
  host: "IP_S2:5601"

# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  hosts: ["IP_S2:9200"]
  protocol: "https"
  username: "elastic"
  password: "..."
  ssl.ca_trusted_fingerprint: ".."
  allow_older_versions: true

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

setup.ilm.overwrite: true

======== modules.d/elasticsearch-xpack.yml =======
- module: elasticsearch
  xpack.enabled: true
  metricsets:
    - node
    - node_stats
    - cluster_stats
    - index
    - index_recovery
    - shard
    - index_summary
    - pending_tasks
  period: 5s
  hosts: ["http://IP_S1:9202"]
  enabled: false
======  modules.d/system.yml
- module: system
  period: 10s
  metricsets:
    - cpu
    - load
    - memory
    - network
    - process
    - process_summary
    - socket_summary
- module: system
  period: 1m
  metricsets:
    - filesystem
    - fsstat
  processors:
    - drop_event.when.regexp:
        system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)'
- module: system
  period: 15m
  metricsets:
    - uptime
**** ES B (new, 8.7.1, S2, ip = IP_S2) *******
====== config/elasticsearch.yml ==============
cluster.name: my-cluster
node.name: node-1
network.host: IP_S2
cluster.initial_master_nodes: ["IP_S2"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
xpack.monitoring.collection.enabled: true

**** Kibana (new, 8.7.1, S2, ip = IP_S2) *******
====== config/kibana.yml ==============
server.host: "IP_S2"
elasticsearch.hosts: ['https://IP_S2:9200']
elasticsearch.serviceAccountToken: ...
elasticsearch.ssl.certificateAuthorities: [/usr/local/share/elk/kibana/kibana-8.7.1/data/ca_1683992999967.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://IP_S2:9200'], ca_trusted_fingerprint: ...}]


Since you're using the X-Pack mode for the Elasticsearch module (`xpack.enabled: true`), the data is routed to the hidden `.monitoring-*` indices instead of the standard `metricbeat-*` data stream, which is the likely reason you're not seeing it in Discover.

You can either use the Stack Monitoring UI, if it shows the data you're looking for, or create a new data view for the monitoring data: allow hidden indices to be targeted and specify the `.monitoring-*` pattern.
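To verify that the documents are actually arriving before building the data view, you can query the hidden monitoring indices directly in Kibana Dev Tools. The exact index names (e.g. whether they end in `-mb`) depend on your setup, so check what `_cat/indices` returns:

```
# List the monitoring indices that exist (hidden indices must be targeted explicitly)
GET _cat/indices/.monitoring-*?expand_wildcards=open,hidden&v

# Fetch a sample document to inspect the available fields
GET .monitoring-es-*/_search?expand_wildcards=open,hidden&size=1
```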
Or use the non-X-pack mode for the Elasticsearch module.
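In non-X-Pack mode, a minimal module configuration could look like the sketch below (the metricset selection is an assumption; `index` and `index_summary` are the ones that expose per-index statistics, which is what you want for finding the index causing the load):

```yaml
# modules.d/elasticsearch.yml - sketch, non-X-Pack mode
- module: elasticsearch
  # xpack.enabled defaults to false, so data lands in metricbeat-*
  metricsets:
    - node
    - node_stats
    - index          # per-index statistics
    - index_summary
  period: 10s
  hosts: ["http://IP_S1:9202"]
```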

As a further detail, the setting xpack.monitoring.collection.enabled isn't needed when using external monitoring via Metricbeat or Elastic Agent. That flag enables the internal collection mechanism.
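If you want to check whether internal collection is currently active on a cluster, you can query the cluster settings (including defaults) and filter for that one flag:

```
GET _cluster/settings?include_defaults=true&filter_path=**.xpack.monitoring.collection.enabled
```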