Heartbeat Kubernetes deployment manifests and ICMP dashboards

Is there a Heartbeat Kubernetes deployment manifest available? (I only see auditbeat/filebeat/metricbeat YAMLs.) Ideally it would show Heartbeat best practices and/or possible deployments in Kubernetes.

Also, are there any pre-built ICMP dashboards, or suggestions on how to best show visually when one host is pinging poorly compared to others?

We don't currently have one, but we do have an issue tracking that here: https://github.com/elastic/beats/issues/12093#issuecomment-490360108

Hi @akrzos,
Here is the manifest I have used in the past. It was written to go along with a scenario using the first alpha of ECK. Have a look, and ping me with any questions. As far as dashboards go, I have not written any, but take a look at some of the existing dashboards at demo.elastic.co; I would think you will find something useful.

Let me know how it goes; I will also have a look at using this manifest with Heartbeat 7.5.1 outside of an ECK environment.

Update:
See the latest comment in the issue referenced above: https://github.com/elastic/beats/issues/12093#issuecomment-574971602

I ran this on a GKE cluster and am sending data to Elasticsearch Service in Elastic Cloud. It seems to be working fine.

Since you want to create your own visualizations, make sure you create an index pattern, like heartbeat-*; then you can use the Visualizations app. Try out Kibana Lens; it is pretty cool.
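
If you want a quick feel for the data before building anything, an aggregation along these lines compares average round-trip time per pinging host over time. This is an untested sketch: the field names (agent.hostname, monitor.duration.us) are standard Heartbeat fields, and the 5m interval is just an example.

    GET heartbeat-*/_search
    {
      "size": 0,
      "aggs": {
        "per_source_host": {
          "terms": { "field": "agent.hostname" },
          "aggs": {
            "over_time": {
              "date_histogram": { "field": "@timestamp", "fixed_interval": "5m" },
              "aggs": {
                "avg_rtt_us": { "avg": { "field": "monitor.duration.us" } }
              }
            }
          }
        }
      }
    }

In Lens the equivalent would be an average of monitor.duration.us broken down by agent.hostname; a host that pings poorly shows up as the series sitting above the rest.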

cc: @Andrew_Cholakian1

I ended up just writing our own. The goal was to measure network latency between cluster hosts, so we run Heartbeat as a DaemonSet and have all nodes ICMP-pinging the other nodes. I would like to use the autodiscover feature to manage the list of hosts, but I was unable to figure out whether it can even do this for us.

https://gist.github.com/akrzos-bw/eca6fbd119cb19e7d72fa2ca7d2ac0fb#file-heartbeat-deploy-yaml
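
For context, the statically configured part of that manifest boils down to something like this (a trimmed sketch with placeholder IPs; the real list is in the gist):

    heartbeat.monitors:
      - type: icmp
        # Hand-maintained list of cluster node IPs (placeholder values);
        # this is the part we would like autodiscover to manage for us.
        hosts: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
        schedule: "@every 30s"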

Here is a heartbeat.yml section of a manifest that pings all NGINX pods:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: heartbeat-deployment-config
  namespace: kube-system
  labels:
    k8s-app: heartbeat
data:
  heartbeat.yml: |-
    heartbeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                equals:
                  kubernetes.labels.app: "nginx"
              config:
                - type: icmp
                  hosts: ["${data.host}"]
                  schedule: "@every 30s"
    setup.ilm.check_exists: false
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
    cloud.id: ${ELASTIC_CLOUD_ID}
    #===================== Logging ==========================
    # Available log levels are: error, warning, info, debug
    logging.level: error
    #==================== Monitoring ========================
    # Set to true to enable the monitoring reporter.
    monitoring.enabled: true
    output.elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
    setup.kibana:
      host: ${KIBANA_HOST}
---

I realize that is not what you are asking for, but it might point you in the right direction. Ping me with your progress.

Hmm, so when I try this, it seems like each Heartbeat daemon only discovers the pods on the node it is running on.

The spec I tried - https://gist.github.com/akrzos-bw/9c3e27ee268ff7402b7abb7a4827b99b

And the output in one of the data directories:

{"@timestamp":"2020-01-16T18:54:35.288Z","@metadata":{"beat":"heartbeat","type":"_doc","version":"7.5.1"},"kubernetes":{"pod":{"uid":"94ad4d78-3890-11ea-8419-001a4aa8658c","name":"heartbeat-4pr74"},"node":{"name":"....."},"container":{"name":"heartbeat","image":"docker.elastic.co/beats/heartbeat:7.5.1"},"namespace":"akrzos-heartbeat","labels":{"controller-revision-hash":"2763027158","k8s-app":"akrzos-heartbeat","pod-template-generation":"2","app":"heartbeat"}},"ecs":{"version":"1.1.0"},"summary":{"up":1,"down":0},"rtt":{"us":328},"url":{"scheme":"icmp","domain":"...","full":"icmp://..."},"event":{"dataset":"uptime"},"host":{"name":"heartbeat-4pr74"},"agent":{"id":"e148fb08-2b84-487a-b4c5-abc694c5fe45","version":"7.5.1","type":"heartbeat","ephemeral_id":"96c57caf-702e-42ab-9317-a15bc4226c37","hostname":"heartbeat-4pr74"},"requests":1,"monitor":{"name":"","type":"icmp","id":"auto-icmp-0X9902177E642D83BE","check_group":"91a15c53-3891-11ea-b0a7-0a58ac1600de","ip":"....","status":"up","duration":{"us":437}}}

If I enable hostNetwork for the DaemonSet it gets even worse; there seem to be zero discovered hosts to ping:

{"level":"error","timestamp":"2020-01-16T18:55:11.170Z","caller":"kubernetes/util.go:97","message":"kubernetes: Querying for pod failed with error: pods \"$NODE_NAME\" not found"}

@akrzos you might be onto something looking at the networks: maybe the IP address space for the hosts is different from that of the pods. They are in a different subnet.

DaemonSets work that way; the thinking is:

  • there is one monitoring pod per k8s node
  • the local monitoring pod only monitors its node

I am going to ask for help from developers who know more than I do. @jsoriano @exekias: the goal is to use Heartbeat to monitor the k8s nodes via ICMP ping. I do not know how to discover the k8s nodes. Here is an autodiscover condition that works in Heartbeat to discover and ping NGINX pods:

    heartbeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                equals:
                  kubernetes.labels.app: "nginx"
              config:
                - type: icmp
                  hosts: ["${data.host}"]
                  schedule: "@every 30s"

What we would like is to:

  • discover all the nodes
  • ping the nodes

Thanks!

Hey,

That is expected; currently each Beat only discovers resources on the node where it is running.

The upcoming Beats 7.6.0 will include a new scope option for Kubernetes autodiscover that optionally allows discovering resources across the whole cluster. It will support two values:

  • scope: node will behave as current versions do, with each Beat monitoring only resources on the node where it is deployed. This will remain the default.
  • scope: cluster will discover resources in the whole cluster.

You can read more about this new feature in https://github.com/elastic/beats/pull/14738
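
With that option, the provider in your config could be extended along these lines (an untested sketch; it assumes scope sits at the provider level, as described in the PR):

    heartbeat.autodiscover:
      providers:
        - type: kubernetes
          # New in 7.6.0: discover pods across the whole cluster
          # instead of only the local node (scope: node, the default).
          scope: cluster
          templates:
            - condition:
                equals:
                  kubernetes.labels.app: "nginx"
              config:
                - type: icmp
                  hosts: ["${data.host}"]
                  schedule: "@every 30s"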

This seems to be caused by an unreplaced environment variable, but I don't see how you are using $NODE_NAME in your configuration.
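
For reference, the stock Beats manifests populate NODE_NAME via the Kubernetes downward API in the DaemonSet pod spec; if that wiring is missing, references to the variable cannot resolve. A sketch of the usual snippet:

    # In the Heartbeat container spec of the DaemonSet:
    env:
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            # Resolves to the name of the node the pod is scheduled on.
            fieldPath: spec.nodeName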

Beats autodiscover only discovers pods; it cannot discover nodes. Something that could be tried in this case would be to start a dummy pod as a DaemonSet, so there is one on each node, possibly running with hostNetwork: true. Then the pods could be discovered with autodiscover, and their IP would be the same as the node's.
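
An untested sketch of that dummy-pod idea (the node-ping-target name and the pause image tag are placeholders):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-ping-target
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: node-ping-target
      template:
        metadata:
          labels:
            app: node-ping-target
        spec:
          # hostNetwork gives the pod the node's IP, so pinging the
          # discovered pod effectively pings the node itself.
          hostNetwork: true
          containers:
            - name: pause
              image: k8s.gcr.io/pause:3.1

The autodiscover condition shown earlier would then match on kubernetes.labels.app: "node-ping-target" instead of "nginx".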

Hi Jaime,

Thanks for the thorough reply. The scope: cluster feature looks to be exactly what I would like here.

That actually is the "real" node name; I just substituted it with $NODE_NAME since I do not post real host names on discussion forums.

As of today we just run a Heartbeat pod on every node so that every node can ping all the other nodes, letting us see whether any single node shows poor network performance relative to the others. Autodiscover would simply let the ICMP configuration grow without us managing the host list ourselves.

Thanks for all the help, folks. I think I am set, but of course more dashboards based on ICMP would be sweet.

-Alex
