Stack Monitoring not working with Kubernetes containers

Hi all,

I'm trying to get Stack Monitoring working on a cluster that I have deployed for testing in Kubernetes. I was able to get the old self monitoring to work, but when I try to monitor with Metricbeat, everything shows as offline with no data coming in. I can see Metricbeat data arriving through Discover, including data from the Elasticsearch and Kibana modules, so I don't understand why the Stack Monitoring app isn't picking anything up. Has anyone run into this before, or have any ideas? I don't mind using self monitoring, but I believe it is being removed in the future.

Here is what I am seeing in the stack monitoring app:

And here is what I'm seeing in Discover for event.modules:

Perhaps share your metricbeat manifest?

You want the metricbeat.yml file?

metricbeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    # Reload module configs as they change:
    reload.enabled: false

metricbeat.autodiscover:
  providers:
    - type: kubernetes
      scope: cluster
      namespace: elasticsearch
      node: ${NODE_NAME}
      templates:
        - condition:
            contains:
              kubernetes.labels.app: "elasticsearch"
          config:
            - module: elasticsearch
              metricsets:
                - node
                - node_stats
                - index
                - index_recovery
                - index_summary
                - shard
                #- ml_job
              period: 10s
              hosts: ["http://${data.host}:9200"]
              #username: "elastic"
              #password: "changeme"
              #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
              #index_recovery.active_only: true
              xpack.enabled: false
              scope: node
              enabled: true
        - condition:
            contains:
              kubernetes.labels.app: "kibana"
          config:
            - module: kibana
              metricsets:
                - status
                - stats
              period: 10s
              hosts: ["${data.host}:5601"]
              xpack.enabled: false
              enabled: true

metricbeat.modules:
#- module: docker
#  metricsets:
#    - "container"
#    - "cpu"
#    - "diskio"
#    - "healthcheck"
#    - "info"
    #- "image"
#    - "memory"
#    - "network"
#  hosts: ["unix:///var/run/docker.sock"]
#  period: 10s
#  enabled: true
  
#- module: elasticsearch
#  metricsets:
#    - node
#    - node_stats
#    - index
#    - index_recovery
#    - index_summary
#    - shard
#    #- ml_job
#  period: 10s
#  hosts: ["http://elasticsearch-master-headless:9200","http://elasticsearch-data-hot-headless:9200","http://elasticsearch-data-warm-headless:9200","http://elasticsearch-client-headless:9200"]
#  #username: "elastic"
#  #password: "changeme"
#  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
#
#  #index_recovery.active_only: true
#  xpack.enabled: false
#  scope: node


# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "kibana:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================


processors:
  - add_metadata: ~
  - add_kubernetes_metadata: ~
  
output.elasticsearch:
  hosts: "http://es-client:9200"
 # username: '${ELASTICSEARCH_USERNAME:}'
 # password: '${ELASTICSEARCH_PASSWORD:}'
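One note on the config above: for the Stack Monitoring app specifically, the elasticsearch and kibana modules are normally run with xpack.enabled: true, which makes Metricbeat ship the data in the format the monitoring UI reads (with that flag set, the metricsets list is ignored). A minimal sketch, reusing the hosts from the manifest above:

```yaml
# Sketch only: module settings aimed at the Stack Monitoring app
# (hosts and labels as in the autodiscover templates above).
- module: elasticsearch
  xpack.enabled: true   # metricsets are chosen automatically when this is true
  period: 10s
  hosts: ["http://${data.host}:9200"]
  scope: node

- module: kibana
  xpack.enabled: true
  period: 10s
  hosts: ["${data.host}:5601"]
```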

So, a couple of key settings are required; just checking whether you have set these.

These settings are required on the cluster that is being monitored. Also, when you switch over you may see both monitoring sources for a bit until the old data ages out...

For Elasticsearch

xpack.monitoring.collection.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: false
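These are dynamic cluster settings, so they can also be applied without a restart through the cluster settings API (console-style sketch; assumes you can reach the cluster):

```
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true,
    "xpack.monitoring.elasticsearch.collection.enabled": false
  }
}
```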

For Kibana

xpack.monitoring.collection.enabled: true
xpack.monitoring.kibana.collection.enabled: false
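In a Kubernetes deployment these usually end up as environment variables on the containers. A sketch of what that might look like (the env-var name mapping depends on the image version, so double-check against the image docs):

```yaml
# Sketch only (names assumed; verify against your image version's docs).
# The official Elasticsearch image accepts dotted setting names as env vars;
# the official Kibana image maps UPPER_SNAKE_CASE vars to kibana.yml keys.
containers:
  - name: elasticsearch
    env:
      - name: xpack.monitoring.collection.enabled
        value: "true"
      - name: xpack.monitoring.elasticsearch.collection.enabled
        value: "false"
  - name: kibana
    env:
      - name: XPACK_MONITORING_KIBANA_COLLECTION_ENABLED
        value: "false"
```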

Thanks for that. I'll kill the test data that I have in here and try those settings. I don't have those set in the environment variables I'm passing to the containers, so we'll see what happens.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.