No Uptime Monitors found

Hello!

After installing heartbeat-elastic, I'm not able to see any endpoints under "Monitors" on the Uptime page. All it shows is "No Uptime Monitors found", even though data appears in "Pings over time" and in the main dashboard under the heartbeat-* index pattern (screenshot attached).

I've also tried disabling auto-create-index, then deleting all indices and index templates related to heartbeat and running setup again. I rechecked my heartbeat.yml file and even repeated all the installation and configuration steps from scratch, but it still doesn't show any monitors. Could you please help me out with this issue?

Thank you in advance! My heartbeat.yml file is also attached. Beats version: 7.10.0

################### Heartbeat Configuration Example #########################

# This file is an example configuration file highlighting only some common options.
# The heartbeat.reference.yml file in the same directory contains all the supported options
# with detailed comments. You can use it for reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/heartbeat/index.html

############################# Heartbeat ######################################

# Define a directory to load monitor definitions from. Definitions take the form
# of individual yaml files.
heartbeat.config.monitors:
  # Directory + glob pattern to search for configuration files
  path: ${path.config}/monitors.d/*.yml
  # If enabled, heartbeat will periodically check the config.monitors path for changes
  reload.enabled: false
  # How often to check for changes
  reload.period: 5s

# Configure monitors inline
heartbeat.monitors:
- type: http
  name: http_monitor
  enabled: true
  urls: ["http://localhost:9200"]
  schedule: '@every 5s'

- type: http
  name: http_monitor1
  enabled: true
  urls: ["http://localhost:5601"]
  schedule: '@every 5s'

  # Total test connection and data exchange timeout
  #timeout: 16s
  # Name of corresponding APM service, if Elastic APM is in use for the monitored service.
  #service_name: my-apm-service-name

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Heartbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["<logstash:port>"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================

processors:
  - add_observer_metadata:
      # Optional, but recommended geo settings for the location Heartbeat is running in
      #geo:
        # Token describing this location
        #name: us-east-1a
        # Lat, Lon
        #location: "37.926868, -78.024902"


# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Heartbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Heartbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the heartbeat.
#instrumentation:
    # Set to true to enable instrumentation of heartbeat.
    #enabled: false

    # Environment in which heartbeat is running (e.g. staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows enabling 6.7 migration aliases
#migration.6_to_7.enabled: true

What version of Kibana are you running? We fixed a bug causing this issue a while back.

If it's possible, could you try updating to the latest version of the stack and see if the problem persists?

Hello Andrew!

Thank you for your message! My Kibana version is 7.9.1.
I would like to update the stack, but these are production systems, so that won't be possible at the moment.
Is there another workaround?

I've tried the following steps, and none of them worked:

  1. Deleted all indices, index patterns, and templates; ran heartbeat setup; restarted the heartbeat-elastic service. It runs well – no uptime monitors found.
  2. Removed heartbeat-elastic with all dependencies from the machine, reinstalled heartbeat-elastic v7.10.0, deleted all indices, index patterns, and templates, ran heartbeat setup, and restarted the heartbeat-elastic service. It runs well – no uptime monitors found.
  3. Removed heartbeat-elastic v7.10.0 with all dependencies from the machine, installed the latest version of heartbeat-elastic (v7.13.3), deleted all indices, index patterns, and templates, ran heartbeat setup, and restarted the heartbeat-elastic service. It runs well – no uptime monitors found.
  4. Repeated the above steps with the ILM overwrite configuration. It runs well – no uptime monitors found.
  5. Repeated the above steps with the Elasticsearch output only. It runs well – no uptime monitors found.
  6. Repeated the above steps with the Logstash output only. It runs well – no uptime monitors found.
  7. Repeated the above steps, but also rewrote the "heartbeat.monitors" section of "heartbeat.yml" from scratch. It runs well – no uptime monitors found.
  8. Repeated the above steps, but deleted the "heartbeat.monitors" section from "heartbeat.yml" and instead added a new yml file in the "/etc/heartbeat/monitors.d" directory. It runs well – no uptime monitors found.
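For reference, the monitors.d file I used in step 8 looked roughly like this (same endpoints as the inline monitors in my heartbeat.yml; the filename is arbitrary):

```yaml
# /etc/heartbeat/monitors.d/http-monitors.yml (filename illustrative)
# Files in monitors.d contain a bare list of monitor definitions,
# without the top-level `heartbeat.monitors:` key.
- type: http
  name: http_monitor
  enabled: true
  urls: ["http://localhost:9200"]
  schedule: '@every 5s'

- type: http
  name: http_monitor1
  enabled: true
  urls: ["http://localhost:5601"]
  schedule: '@every 5s'
```

Since reload.enabled is false in my config, I restarted the service after adding the file so it would be picked up.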

When you did the reset steps, did you make sure to stop heartbeat before deleting all the indices? Those steps can work if your mapping gets corrupted, but it's vitally important that heartbeat be stopped before deleting the ES indices etc.

I'll just add, the reason is that a running heartbeat will trigger ES to auto-create indices with incorrect mappings, which is why all instances must be stopped first.
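Concretely, a safe reset order looks something like this (a sketch, assuming the deb/rpm package's heartbeat-elastic service and an unsecured Elasticsearch on localhost:9200; adjust hosts and auth for your setup):

```shell
# 1. Stop every running heartbeat instance first, so nothing can
#    auto-create heartbeat-* indices mid-reset.
sudo systemctl stop heartbeat-elastic

# 2. Only then delete the data indices and the legacy index template.
curl -X DELETE "http://localhost:9200/heartbeat-*"
curl -X DELETE "http://localhost:9200/_template/heartbeat-*"

# 3. Recreate the index template (with correct mappings) and ILM policy.
sudo heartbeat setup --index-management

# 4. Restart heartbeat only after setup has finished.
sudo systemctl start heartbeat-elastic
```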

Yes, of course. I stopped heartbeat every time before deleting the indices. Moreover, the mappings and indices were fine while heartbeat was running, but it still doesn't show uptime for any endpoints.

I do think this is most likely something that would be fixed by upgrading. We generally don't backport fixes to past minor versions. It might be worth setting up a test cluster running the latest stack version to validate that, and then using that result to persuade your company to upgrade.

Alright, but I also want to make sure: is there any workaround for these specific versions, other than upgrading and everything mentioned above?

The bug I'm thinking of was a query issue on the Kibana side with no workaround. I'm having trouble digging up the ticket at the moment, but I'd recommend upgrading. If that doesn't fix the issue, we're happy to debug further.