I am running version 7.8 of everything. I have a cluster of 3 instances, each running Logstash and Elasticsearch. On a 4th instance I run Kibana together with Metricbeat, plus a separate monitoring cluster. These are located in my "ELK" network.
In my main, business network, each instance runs Filebeat, Auditbeat, Packetbeat and Journalbeat. For the time being I have only 3 such instances, i.e. 3*4=12 beats to be monitored by Metricbeat. This single Metricbeat (sitting on the same instance as Kibana) communicates with my main network over the public internet (I use firewall rules to keep the HTTP endpoints of the beats protected), and for each instance it reports back the status of the beats installed there.
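For context, each monitored beat exposes the HTTP stats endpoint that Metricbeat polls. A minimal sketch of that part of a beat's configuration (the port is an example from my setup; `http.host` has to bind to a non-loopback address for remote collection to work):

```yaml
# In each monitored beat's config (e.g. auditbeat.yml); the port is an example
http.enabled: true
http.host: 0.0.0.0   # bind beyond localhost so the remote Metricbeat can reach it
http.port: 5069
```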
The problem is that in the Stack Monitoring page of Kibana I only see a subset of them, 6 to be exact. All 3 Journalbeats are present, but I can only see one of each other type. For example, the Auditbeat from the 1st instance shows, but the other 2 do not. After a few seconds, this 1st-instance Auditbeat disappears and the Auditbeat of the 2nd instance shows up, and so on. I can never get all Auditbeats (or Filebeats, or Packetbeats) to show at once.
Here is an example module that I use:
```yaml
---
- module: beat
  metricsets:
    - stats
    - state
  period: 5s
  hosts: ["x.x.x.x:5069"]
  service:
    name: four-words-in-name
  tags: [..]
  xpack:
    enabled: true
```
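For completeness, this Metricbeat instance ships what it collects to the monitoring cluster through its regular output; a minimal sketch, assuming the monitoring cluster is reachable at a placeholder address:

```yaml
# metricbeat.yml on the Kibana instance (the host below is a placeholder)
output.elasticsearch:
  hosts: ["monitoring-es:9200"]
```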
I have one file per such module, i.e. 12 files for the 12 beats in my business network. Of course, each beat on an instance runs on its own port, and the `service.name` is also unique per beat.
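To illustrate the layout (file names, ports and service names here are examples, not my exact values), two of those module files differ only in host, port and `service.name`:

```yaml
# modules.d/beat-auditbeat-1.yml (example name)
- module: beat
  metricsets: [stats, state]
  period: 5s
  hosts: ["x.x.x.x:5069"]
  service:
    name: instance1-auditbeat   # example value; each beat gets its own
  xpack:
    enabled: true

# modules.d/beat-filebeat-1.yml (example name)
- module: beat
  metricsets: [stats, state]
  period: 5s
  hosts: ["x.x.x.x:5070"]
  service:
    name: instance1-filebeat    # example value
  xpack:
    enabled: true
```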
I have used tcpdump and confirmed that all 12 beats return responses every 5 seconds, matching the 5s period. I have no errors in the Kibana logs, nor in the logs of the monitoring cluster.
What am I doing wrong?