Metricbeat beat-xpack shows only 1 beat instance per type in Kibana monitoring in custom AWS cluster

I'm trying to configure my logging cluster using metricbeat with the following modules:
Enabled:
elasticsearch-xpack
kibana-xpack
logstash-xpack
beat-xpack
system

My metricbeat config looks like this:

# Ansible managed

################### metricbeat Configuration #########################

############################# metricbeat ######################################
http.enabled: true
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s
monitoring.enabled: false
output.elasticsearch:
  hosts:
  - http://data.###########.eu:9200
  password: ####
  username: remote_monitoring_user
path.config: /etc/metricbeat
path.data: /var/lib/metricbeat
path.home: /usr/share/metricbeat
path.log: /var/log/metricbeat
processors:
- add_host_metadata: null
- add_cloud_metadata: null
setup.dashboards.directory: ${path.home}/kibana
setup.dashboards.enabled: true
setup.kibana:
  host: http://client.########.eu:5601
setup.template.settings:
  index.codec: best_compression
  index.number_of_shards: 1
tags: '[''elasticsearch_data_nodes'', ''elasticsearch_master_nodes'', ''elk'', ''es'']'
xpack.monitoring.elasticsearch.collection.enabled: false
xpack.monitoring.enabled: true


###############################################################################
############################# Libbeat Config ##################################
# Base config file used by all other beats for using libbeat features

############################# Output ##########################################

output: {}


############################# Logging #########################################

logging:
  files:
    rotateeverybytes: 10485760
#name: elk7-######-master({ip-address})

In Kibana monitoring, all types monitored by metricbeat show up correctly except for the beats.
The Beats section shows only two beats, one metricbeat and one filebeat, and when I reload it randomly shows other nodes from the cluster.
I have metricbeat and filebeat running on all instances with the above config.
3 master nodes (es running)
3 data ingest nodes (es running)
1 kibana client node (es running)
1 logstash node (no es running)

I duplicated the setup as closely as possible locally on my Mac using Vagrant and Ansible, and there I do not face the same issue with this config.

# Ansible managed

################### metricbeat Configuration #########################

############################# metricbeat ######################################
http.enabled: true
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s
monitoring.enabled: false
output.elasticsearch:
  hosts:
  - http://elk-node-2:9200
  - http://elk-node-3:9200
  - http://elk-node-1:9200
  password: ###
  username: remote_monitoring_user
path.config: /etc/metricbeat
path.data: /var/lib/metricbeat
path.home: /usr/share/metricbeat
path.log: /var/log/metricbeat
processors:
- add_host_metadata: null
- add_cloud_metadata: null
setup.dashboards.directory: ${path.home}/kibana
setup.dashboards.enabled: true
setup.kibana:
  host: http://elk-node-1:5601
setup.template.settings:
  index.codec: best_compression
  index.number_of_shards: 1
tags: '[''elasticsearch_master_nodes'', ''elk'', ''es'', ''kibana'']'
xpack.monitoring.elasticsearch.collection.enabled: false
xpack.monitoring.enabled: true


###############################################################################
############################# Libbeat Config ##################################
# Base config file used by all other beats for using libbeat features

############################# Output ##########################################

output: {}


############################# Logging #########################################

logging:
  files:
    rotateeverybytes: 10485760

So config-wise the only difference is the load-balanced endpoint (AWS) vs. direct node access (locally).

Things I already tried are disabling the beat-xpack module so metricbeat monitors itself, and connecting to http://localhost:9200 instead of going through the load balancer.
None of these makes all beats visible in Kibana monitoring.

My two cents: the beats are somehow overriding each other, which would explain why only one metricbeat and one filebeat instance is visible in the Monitoring app.

I've not been able to find a similar issue, so I hope somebody can help me figure out what I'm doing wrong in my AWS setup.

I've just figured out that the beats are all registering with the same UUID in the .monitoring-beats index.
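One way to confirm this on each host is to compare the UUID persisted in each beat's meta.json, which lives under the beat's path.data (/var/lib/metricbeat in the config above). A minimal sketch; the helper name is mine, and the sed pattern assumes the compact JSON that the beats write:

```shell
# Print the beat UUID stored in a meta.json file.
# Path assumption: meta.json sits under path.data (/var/lib/metricbeat above).
beat_uuid() {
  sed -n 's/.*"uuid": *"\([^"]*\)".*/\1/p' "$1"
}

# Run on every host and compare the output; identical values confirm the clash:
# beat_uuid /var/lib/metricbeat/meta.json
```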

Solved it.
For those running into the same issue:

My problem originates from the way I provision my cluster.
To do this I use Terraform with Packer and Ansible to provision nodes based on custom AMIs that already contain most services and static configuration.
During the image build, Ansible starts the beats, and as described in the "Duplicate beat.uuids on different hosts" topic, this writes the meta.json file containing the UUID.
So in my case the same meta.json was present on all nodes based on my elastic AMI, which means all nodes in my cluster.

I need to remove the meta.json file from the AMI so that the first start of a beat on a newly provisioned instance generates the file with a unique UUID.
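As a sketch of that cleanup step (paths taken from the path.data settings in the configs above; the function name is mine), run after stopping the beats, as the last step before the AMI is baked:

```shell
# Delete the persisted UUID so each beat generates a fresh beat.uuid
# on its first start on a newly provisioned instance.
reset_beat_uuid() {
  rm -f "$1/meta.json"
}

# Last step of the Packer/Ansible image build, after stopping the services:
# reset_beat_uuid /var/lib/metricbeat
# reset_beat_uuid /var/lib/filebeat
```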
