I would like to monitor an existing Elasticsearch node (ES A, running in a Docker container) to investigate the performance of one particular index, because the CPU usage of the node intermittently climbs to 200%-500% for a few hours at a time. It would be interesting to know which action (creating/updating an index) is causing the load. Following the documentation, I set up the following scenario:
ES A (existing, 7.17.9, server S1) -> Metricbeat MB (new, 8.7.1, S1) -> ES B (new, 8.7.1, S2) -> Kibana (new, 8.7.1, S2).
In Kibana's Discover view on the metricbeat-* data view, general data from ES_A is displayed, including data about the Docker container, but only via the "system" module in Metricbeat (enabled alongside elasticsearch-xpack). If I disable that module, nothing is ingested at all. In no case is any data/activity for the indices of ES_A displayed.
Is this how it is supposed to be, or do I need additional components (e.g. Logstash), or is my configuration wrong?
I would be very grateful for your answer.
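For context: during a CPU spike I could also query the hot threads API of ES A directly, which should hint at what is consuming CPU. A minimal sketch, assuming ES A is reachable without authentication on port 9202 as in the module config below:
curl -s "http://IP_S1:9202/_nodes/hot_threads?threads=5"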
configs:
**** Metricbeat MB (new, 8.7.1, S1, ip = IP_S1) *******
./metricbeat modules list => Enabled:
elasticsearch-xpack
system
== metricbeat.yml
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
setup.dashboards.enabled: true
setup.kibana:
  host: "IP_S2:5601"
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  hosts: ["IP_S2:9200"]
  protocol: "https"
  username: "elastic"
  password: "..."
  ssl.ca_trusted_fingerprint: ".."
  allow_older_versions: true
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
metricbeat.modules:
setup.ilm.overwrite: true
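(To rule out connectivity problems between Metricbeat and ES B, Metricbeat's built-in self-tests can be run against this configuration. A sketch, assuming Metricbeat is started from its install directory:)
./metricbeat test config
./metricbeat test output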
======== modules.d/elasticsearch-xpack.yml =======
- module: elasticsearch
  xpack.enabled: true
  metricsets:
    - node
    - node_stats
    - cluster_stats
    - index
    - index_recovery
    - shard
    - index_summary
    - pending_tasks
  period: 5s
  hosts: ["http://IP_S1:9202"]
  enabled: false
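(To verify that Metricbeat can actually reach and scrape ES A with these settings, something like the following can be used. A sketch; node_stats is just one example metricset, and ES A is assumed to run without authentication:)
curl -s "http://IP_S1:9202/"
./metricbeat test modules elasticsearch node_stats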
====== modules.d/system.yml
- module: system
  period: 10s
  metricsets:
    - cpu
    - load
    - memory
    - network
    - process
    - process_summary
    - socket_summary
  process.include_top_n:
- module: system
  period: 1m
  metricsets:
    - filesystem
    - fsstat
  processors:
    - drop_event.when.regexp:
        system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)'
- module: system
  period: 15m
  metricsets:
    - uptime
**** ES B (new, 8.7.1, S2, ip = IP_S2) *******
====== config/elasticsearch.yml ==============
cluster.name: my-cluster
node.name: node-1
network.host: IP_S2
cluster.initial_master_nodes: ["IP_S2"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
xpack.monitoring.collection.enabled: true
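(As far as I understand, with xpack.enabled: true in the Metricbeat module, the Elasticsearch metrics should land in .monitoring-es-* indices on ES B rather than in metricbeat-*. A sketch to check what actually arrives, assuming the elastic superuser; -k skips certificate verification for brevity and the password is elided as above:)
curl -sk -u elastic:... "https://IP_S2:9200/_cat/indices/metricbeat-*,.monitoring-es-*?v"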
**** Kibana (new, 8.7.1, S2, ip = IP_S2) *******
====== config/kibana.yml ==============
server.host: "IP_S2"
elasticsearch.hosts: ['https://IP_S2:9200']
elasticsearch.serviceAccountToken: ...
elasticsearch.ssl.certificateAuthorities: [/usr/local/share/elk/kibana/kibana-8.7.1/data/ca_1683992999967.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://IP_S2:9200'], ca_trusted_fingerprint: ...}]
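(For completeness, the ca_trusted_fingerprint values used above can be derived from ES B's HTTP CA certificate. A sketch, assuming the default auto-generated certificate location in the ES B config directory; colons are typically removed from the output before pasting:)
openssl x509 -fingerprint -sha256 -noout -in /path/to/elasticsearch-8.7.1/config/certs/http_ca.crt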