Heartbeat 7.5.2 works, but no details under 'Monitor status'

Hello.

I'm trying to set up Heartbeat 7.5.2 and was able to do so; I can see the status of the HTTP/TCP endpoints in Kibana (under 'Uptime'). However, it doesn't show the name/status/URL, even though I've explicitly added 'name' and 'id' in tcp.yml.

Ex:

  - type: tcp # monitor type tcp; connect via TCP and optionally verify the endpoint
    schedule: '@every 5s' # every 5 seconds from start of beat
    hosts: ["myad.domain.com"]
    name: 'ad-service'
    id: 'ad-service'
    ports: [636]
    ipv4: true
    mode: any

I expect 'ad-service' to be listed in the Uptime console, but it just shows the pie chart/histogram as 1/1.
What is missing so that the name/id get listed under 'Monitor status' in the same console?

Note: I'm not using any processors/tags.

Can you check whether there are any documents stored in ES? You should be receiving some data.

Yes, I can see data flowing in and I'm able to search it in the index. Moreover, as you can see, Uptime reports the monitor count as "3"; it just doesn't list them under "Monitor status".

Do you have any query parameter in the URL? And what version of Kibana are you using?

Kibana is the same version as Heartbeat: both are 7.5.2.

I'm not applying any query. As you can see in the Uptime UI, the monitors are reported only as counts, not listed by name (generally they should be, so I'm assuming something is missing in the Heartbeat config).

Yes, this is really weird. It might be useful if you can post your masked config file.

<</etc/heartbeat/heartbeat.yml>>

heartbeat.config.monitors:
  path: ${path.config}/monitors.d/*.yml
  reload.enabled: true
  reload.period: 5s

setup.template.enabled: false
setup.template.name: "heartbeat"
setup.template.pattern: "heartbeat-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
setup.template.append_fields:
- name: test.name
  type: keyword
- name: test.hostname
  type: long

setup.kibana:
  host: "https://kibana.domain.com:5601"
  protocol: "https"
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/heartbeat/ssl/ca.pem"]
  ssl.certificate: "/etc/heartbeat/ssl/heartbeat.crt"
  ssl.key: "/etc/heartbeat/ssl/heartbeat.key"

output.elasticsearch:
  enabled: true
  hosts: ["elasticsearch.domain:9200"]
  index: "heartbeat-%{[beat.version]}-%{+yyyy.MM.dd}"
  protocol: "https"
  username: "heartbeat_writer"
  password: "xxxxxx"
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/heartbeat/ssl/ca.pem"]
  ssl.certificate: "/etc/heartbeat/ssl/heartbeat.crt"
  ssl.key: "/etc/heartbeat/ssl/heartbeat.key"

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/heartbeat
  name: heartbeat
  keepfiles: 7
  permissions: 0644
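As an aside, Heartbeat (like the other Beats) ships `test` subcommands that can sanity-check a setup like the one above before restarting the service; a quick sketch, assuming the default config path from this thread:

```shell
# Verify that heartbeat.yml parses and the settings are valid
heartbeat test config

# Verify that Heartbeat can actually reach the configured Elasticsearch output
heartbeat test output
```

If `test output` fails, the missing monitors would be an ingestion problem rather than a field problem.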

By the way, here is a demo config you can try: https://github.com/elastic/uptime-contrib/blob/master/testing/configs/demo.yml

That's the same as what I have, except that I'm using monitors.d instead of heartbeat.yml (to avoid frequent service restarts).

Moreover, I don't have any processors, in particular no geo/cloud metadata, which I don't need. Also, by default I can search the common fields (e.g. monitor.status, monitor.name) in the index; they just aren't listed as shown in the screenshot.

Hello @sivaaws, can you go to Dev Tools, run a query like the one below, and send me the result back? It will help me determine what kind of docs are coming into ES and whether any fields are missing. Also try increasing the monitor schedule to, say, 60s and see if that resolves the issue.

GET heartbeat-*/_search
{
  "size": 2,
  "query": {
    "match_all": {}
  }
}
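Once the hits come back, the things worth checking in each hit's `_source` are the fields the Uptime UI keys on, chiefly `monitor.id`, `monitor.name`, `monitor.status`, and `url.full`. A small sketch of that check (the sample document below is hypothetical, shaped like a Heartbeat TCP check result, not an actual response from this cluster):

```python
# Hypothetical _source of one Heartbeat document (illustrative only).
sample_source = {
    "monitor": {
        "id": "ad-service",
        "name": "ad-service",
        "type": "tcp",
        "status": "up",
    },
    "url": {"full": "tcp://myad.domain.com:636"},
}

def missing_uptime_fields(source):
    """Return the dotted field paths, among those the Uptime UI relies on,
    that are absent from an ES hit's _source."""
    required = ["monitor.id", "monitor.name", "monitor.status", "url.full"]
    missing = []
    for path in required:
        node = source
        for key in path.split("."):
            if isinstance(node, dict) and key in node:
                node = node[key]
            else:
                missing.append(path)
                break
    return missing

print(missing_uptime_fields(sample_source))  # -> []
```

If the fields are present in `_source` but Uptime still lists nothing, checking the mapping (e.g. `GET heartbeat-*/_mapping/field/monitor.name`) may also be worthwhile: with `setup.template.enabled: false` and a custom index name, the `monitor.*` fields could have been dynamically mapped rather than mapped by the Heartbeat template.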