Uptime Monitors not working

Hi Community, I am using the latest version of ES + Kibana + Heartbeat. I want to set up ICMP uptime monitors for some of the hosts in my private network, but I am struggling with the configuration.
Please see my heartbeat.yml and the separate icmp .yml file I want to use:

heartbeat.yml

############################# Heartbeat ######################################

# Define a directory to load monitor definitions from. Definitions take the form
# of individual yaml files.
heartbeat.config.monitors:
  # Directory + glob pattern to search for configuration files
  path: /etc/heartbeat/monitors.d/*.yml
  # If enabled, heartbeat will periodically check the config.monitors path for changes
  reload.enabled: true
  # How often to check for changes
  reload.period: 1s

# Configure monitors inline
heartbeat.monitors:
- type: icmp
  id: pshome
  name: E3DC S10E
  hosts: ["192.168.2.20"]
  schedule: "*/1 * * * * * *"
- type: icmp
  id: pshome
  name: Pi4 Solaranzeige
  hosts: ["192.168.2.192"]
  schedule: "*/1 * * * * * *"
- type: icmp
  id: pshome
  name: MySQL Server
  hosts: ["192.168.2.49"]
  schedule: "*/1 * * * * * *"

My own icmp file used for the monitors:

# /path to my icmp yml file
heartbeat monitors:
- type: icmp
  id: pshome
  name: E3DC-S10E
  enabled: true
  hosts: ["192.168.2.20"]
  schedule: "*/1 * * * * * *"
- type: icmp
  id: pshome
  name: iobroker
  enabled: true
  hosts: ["192.168.2.111"]
  schedule: "*/1 * * * * * *"
- type: icmp
  id: pshome
  name: Pi4 Solaranzeige
  enabled: true
  hosts: ["192.168.2.192"]
  schedule: "*/1 * * * * * *"
- type: icmp
  id: pshome
  name: MySQL Server   
  enabled: true
  hosts: ["192.168.2.49"]
  schedule: "*/1 * * * * * *"
  enabled: true

No host is displayed in the Uptime monitor - any hints? I am pretty new to the Elastic Stack, so please be patient with me.

I just checked the status of the heartbeat service - this error is shown:

heartbeat-elastic.service - Ping remote services for availability and log results to Elasticsearch or send to Logstash.
     Loaded: loaded (/lib/systemd/system/heartbeat-elastic.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Sat 2021-10-16 10:57:21 CEST; 6s ago
       Docs: https://www.elastic.co/beats/heartbeat
    Process: 2286 ExecStart=/usr/share/heartbeat/bin/heartbeat --environment systemd $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS (code=exited, status=1/FAILURE)
   Main PID: 2286 (code=exited, status=1/FAILURE)

Oct 16 10:57:21 ubuntu-template systemd[1]: heartbeat-elastic.service: Scheduled restart job, restart counter is at 5.
Oct 16 10:57:21 ubuntu-template systemd[1]: Stopped Ping remote services for availability and log results to Elasticsearch or send to Logstash..
Oct 16 10:57:21 ubuntu-template systemd[1]: heartbeat-elastic.service: Start request repeated too quickly.
Oct 16 10:57:21 ubuntu-template systemd[1]: heartbeat-elastic.service: Failed with result 'exit-code'.
Oct 16 10:57:21 ubuntu-template systemd[1]: Failed to start Ping remote services for availability and log results to Elasticsearch or send to Logstash..
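
In case someone else runs into the same start failure: the systemd status above only shows that the unit gave up restarting; the actual Heartbeat error is usually visible in the journal, or by running Heartbeat in the foreground (paths below are the ones from the ExecStart line above):

# Show the last log lines of the failing unit
sudo journalctl -u heartbeat-elastic.service -n 50 --no-pager
# Or run Heartbeat in the foreground to see the startup error directly
sudo /usr/share/heartbeat/bin/heartbeat -e --path.config /etc/heartbeat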

Meanwhile I managed to fix the error above - the Uptime monitor now shows 5 hosts with the heartbeat.yml below:

# Define a directory to load monitor definitions from. Definitions take the form
# of individual yaml files.
heartbeat.config.monitors:
  # Directory + glob pattern to search for configuration files
  path: /etc/heartbeat/monitors.d/*.yml
  # If enabled, heartbeat will periodically check the config.monitors path for changes
  reload.enabled: true
  # How often to check for changes
  reload.period: 5s

# Configure monitors inline
heartbeat.monitors:
- type: icmp
  id: pshome
  name: PS HomeOffice
  # Enable/Disable monitor
  enabled: true
  hosts: ["192.168.2.20","192.168.2.111","192.168.2.48","192.168.2.49","192.168.2.10"]
  schedule: "*/15 * * * * * *"

My own icmp file is still not working - any hints? Maybe someone can share a working icmp config file here as an example?

@Peter_Schlafmann are you by chance trying to run Heartbeat as a non-root user? I believe there is an issue with Heartbeat that requires it to be run as root in order for ICMP to work properly. I believe the PR "[Heartbeat] Setuid to regular user / lower capabilities when possible" by andrewvc is intended to solve this issue in 7.16.

Another thing I would look at is testing the configuration with Heartbeat's test command, to make sure that your whole config is correct (it doesn't look like you've posted the full config).
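
For example, something along these lines exercises both the config parsing and the connection to Elasticsearch (the -c path assumes the default Debian/Ubuntu package locations):

sudo heartbeat test config -c /etc/heartbeat/heartbeat.yml
sudo heartbeat test output -c /etc/heartbeat/heartbeat.yml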

Thanks Ben for the response - I did run the test config command; result:
sudo heartbeat test config
[sudo] password for peter:
Config OK
And yes, I am running Heartbeat as a non-root user, but all install and configuration commands were run with sudo.
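
A quick way to double-check which user the Heartbeat service process actually runs as (the unit name is the one from the systemd output above):

# Show the owner of the running heartbeat process
ps -o user=,pid=,cmd= -C heartbeat
# Show whether the unit sets an explicit User= (if not, systemd runs it as root)
systemctl cat heartbeat-elastic.service | grep -i 'User=' || echo "no User= line, so it runs as root"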

Are you saying that using my own icmp file does not work with the current version 7.15 because of the non-root user?

Yes, I believe you need to run Heartbeat as root in order for the ICMP probe to function correctly. (I haven't fully tested this, but based on the PR I linked and the corresponding issues, it seems like the ICMP monitor in 7.15 currently requires Heartbeat to be run as root.)
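
If running the whole service as root is not an option, one thing that might be worth experimenting with (I have not verified this on 7.15, so treat it purely as an assumption) is granting the binary the raw-socket capability that ICMP echo requests normally need:

# Untested idea: allow raw ICMP sockets without running the process as root
sudo setcap cap_net_raw+ep /usr/share/heartbeat/bin/heartbeat
# Confirm the capability was applied
getcap /usr/share/heartbeat/bin/heartbeat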

Thanks Ben. Just to be clear - Heartbeat is working and showing 5 hosts in the Uptime view of Kibana; what is not working is my own icmp file created in the monitors.d path that is referenced in the heartbeat.yml file:

################### Heartbeat Configuration Example #########################

# This file is an example configuration file highlighting only some common options.
# The heartbeat.reference.yml file in the same directory contains all the supported options
# with detailed comments. You can use it for reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/heartbeat/index.html

############################# Heartbeat ######################################

# Define a directory to load monitor definitions from. Definitions take the form
# of individual yaml files.
heartbeat.config.monitors:
  # Directory + glob pattern to search for configuration files
  path: /etc/heartbeat/monitors.d/*.yml
  # If enabled, heartbeat will periodically check the config.monitors path for changes
  reload.enabled: true
  # How often to check for changes
  reload.period: 5s

# Configure monitors inline
heartbeat.monitors:
- type: icmp
  id: pshome
  name: PS HomeOffice
  # Enable/Disable monitor
  enabled: true
  hosts: ["192.168.2.20","192.168.2.111","192.168.2.48","192.168.2.49","192.168.2.10"]
  schedule: "*/15 * * * * * *"
# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.2.43:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Heartbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.2.43:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "Aaron2310"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================

processors:
  - add_observer_metadata:
      # Optional, but recommended geo settings for the location Heartbeat is running in
      #geo:
        # Token describing this location
        #name: us-east-1a
        # Lat, Lon "
        #location: "37.926868, -78.024902"


# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Heartbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Heartbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the heartbeat.
#instrumentation:
    # Set to true to enable instrumentation of heartbeat.
    #enabled: false

    # Environment in which heartbeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
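
A side note on the output.elasticsearch section above: instead of keeping the elastic password in plain text in heartbeat.yml, it can be stored in the Beats keystore and referenced from the config; the key name ES_PWD below is just an example:

# Create the keystore once, then add the password under a key of your choice
sudo heartbeat keystore create
sudo heartbeat keystore add ES_PWD
# and in heartbeat.yml:  password: "${ES_PWD}"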

Okay, that is a bit clearer to me.

One thing I noticed in your original post, where you included what I believe to be your icmp file under monitors.d, is a typo:

You have:

heartbeat monitors:

But it is missing a dot (.):

heartbeat.monitors:

Can you confirm whether this is a typo in your actual file, or just in this post?

Correct, it was a typo in my own icmp file - I corrected it to:

# /path to my icmp yml file
heartbeat.monitors:
- type: icmp
  id: pshome
  name: E3DC-S10E
  enabled: true
  hosts: ["192.168.2.20"]
  schedule: "*/15 * * * * * *"
- type: icmp
  id: pshome
  name: iobroker
  enabled: true
  hosts: ["192.168.2.111"]
  schedule: "*/15 * * * * * *"
- type: icmp
  id: pshome
  name: Pi4 Solaranzeige
  enabled: true
  hosts: ["192.168.2.192"]
  schedule: "*/15 * * * * * *"
- type: icmp
  id: pshome
  name: MySQL Server
  enabled: true
  hosts: ["192.168.2.49"]
  schedule: "*/15 * * * * * *"
  enabled: true
- type: icmp
  id: pshome
  name: InfluxDB Server
  enabled: true
  hosts: ["192.168.2.48"]
  schedule: "*/15 * * * * * *"
  enabled: true
- type: icmp
  id: pshome
  name: Domino Server
  enabled: true
  hosts: ["192.168.2.45"]
  schedule: "*/15 * * * * * *"
  enabled: true

Is the syntax okay?

Actually, looking at the example files provided for Heartbeat's monitors.d folder, I don't think you need the heartbeat.monitors: key at all.

You should be able to just have:

- type: icmp
  id: pshome
  name: E3DC-S10E
  enabled: true
  hosts: ["192.168.2.20"]
  schedule: "*/15 * * * * * *"
- type: icmp
  id: pshome
  name: iobroker
  enabled: true
  hosts: ["192.168.2.111"]
  schedule: "*/15 * * * * * *"
- type: icmp
  id: pshome
  name: Pi4 Solaranzeige
  enabled: true
  hosts: ["192.168.2.192"]
  schedule: "*/15 * * * * * *"
- type: icmp
  id: pshome
  name: MySQL Server
  enabled: true
  hosts: ["192.168.2.49"]
  schedule: "*/15 * * * * * *"
  enabled: true
- type: icmp
  id: pshome
  name: InfluxDB Server
  enabled: true
  hosts: ["192.168.2.48"]
  schedule: "*/15 * * * * * *"
  enabled: true
- type: icmp
  id: pshome
  name: Domino Server
  enabled: true
  hosts: ["192.168.2.45"]
  schedule: "*/15 * * * * * *"
  enabled: true

Here is the example file that heartbeat provides for icmp:

# These files contain a list of monitor configurations identical
# to the heartbeat.monitors section in heartbeat.yml
# The .example extension on this file must be removed for it to
# be loaded.

- type: icmp # monitor type `icmp` (requires root) uses ICMP Echo Request to ping
  # ID used to uniquely identify this monitor in elasticsearch even if the config changes
  id: my-icmp-monitor

  # Human readable display name for this service in Uptime UI and elsewhere
  name: My ICMP Monitor

  # Name of corresponding APM service, if Elastic APM is in use for the monitored service.
  #service.name: my-apm-service-name

  # Enable/Disable monitor
  #enabled: true

  # Configure task schedule using cron-like syntax
  schedule: '@every 5s' # every 5 seconds from start of beat

  # List of hosts to ping
  hosts: ["localhost"]

  # Configure IP protocol types to ping on if hostnames are configured.
  # Ping all resolvable IPs if `mode` is `all`, or only one IP if `mode` is `any`.
  ipv4: true
  ipv6: true
  mode: any

  # Total running time per ping test.
  timeout: 16s

  # Waiting duration until another ICMP Echo Request is emitted.
  wait: 1s

  # The tags of the monitors are included in their own field with each
  # transaction published. Tags make it easy to group servers by different
  # logical properties.
  #tags: ["service-X", "web-tier"]

  # Optional fields that you can specify to add additional information to the
  # monitor output. Fields can be scalar values, arrays, dictionaries, or any nested
  # combination of these.
  #fields:
  #  env: staging

  # If this option is set to true, the custom fields are stored as top-level
  # fields in the output document instead of being grouped under a fields
  # sub-dictionary. Default is false.
  #fields_under_root: false

I just managed to get it working! Here is part of my working icmp file:
The id: field must be unique!

# /path to my icmp yml file
- type: icmp
  id: pshome1
  name: E3DC-S10E
  enabled: true
  hosts: ["192.168.2.20"]
  schedule: "*/15 * * * * * *"
- type: icmp
  id: pshome2
  name: iobroker-Server
  enabled: true
  hosts: ["192.168.2.111"]
  schedule: "*/15 * * * * * *"
- type: icmp
  id: pshome3
  name: Pi4 Solaranzeige
  enabled: true
  hosts: ["192.168.2.192"]
  schedule: "*/15 * * * * * *"

And here is the relevant part of my heartbeat.yml file:

# Define a directory to load monitor definitions from. Definitions take the form
# of individual yaml files.
heartbeat.config.monitors:
  # Directory + glob pattern to search for configuration files
  path: /etc/heartbeat/monitors.d/*.yml
  # If enabled, heartbeat will periodically check the config.monitors path for changes
  reload.enabled: true
  # How often to check for changes
  reload.period: 5s

# Configure monitors inline
heartbeat.monitors:
- type: icmp
  id: pshome
  name: PS HomeOffice
  #Enable/Disable monitor
  enabled: false
  #hosts: ["192.168.2.20","192.168.2.111","192.168.2.48","192.168.2.49","192.168.2.10"]
  schedule: "*/15 * * * * * *"

Thanks again for your support - I hope my input can help others.
