I have set up heartbeat-7.5.1-1 on a CentOS 7 VM with 6 vCPUs and 4 GB of memory.
This is the heartbeat.yml config I am using:
heartbeat.config.monitors:
  path: ${path.config}/monitors.d/**/*.yml
  reload.enabled: true
  reload.period: 1m

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression

name: name-01
max_procs: 18

output.logstash:
  hosts: ["host-01"]
  worker: 6
  pipelining: 2
  ssl:
    enabled: true

processors:
  - add_observer_metadata:
      geo:
        name: location-01

logging.selectors: ["*"]

monitoring.enabled: true
monitoring.cluster_uuid:
monitoring.elasticsearch:
  hosts: ["host1:9200", "host2:9200", "host3:9200"]
  protocol: "https"
  ssl.verification_mode: none

http.enabled: true
http.port: 5066
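One setting I have not touched is the internal memory queue. As far as I understand the docs, it defaults to roughly the following, and I am wondering whether tuning it would help (the values below are the defaults as I understand them, not something I have verified):

queue.mem:
  events: 4096           # default, I believe: max events buffered in memory
  flush.min_events: 2048 # default, I believe: minimum batch size before flushing to the output
  flush.timeout: 1s      # default, I believe: max wait before a smaller batch is flushed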
I am monitoring 140 HTTP endpoints using simple monitor configs similar to the following:
- check.response:
    body: pong
    status: 200
  enabled: true
  ipv4: true
  ipv6: false
  mode: any
  name: name-01
  schedule: '@every 1m'
  ssl:
    supported_protocols:
      - TLSv1.0
      - TLSv1.1
      - TLSv1.2
  timeout: 30s
  type: http
  urls:
    - url-01
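One idea I have been considering is staggering the schedules so that all 140 checks do not fire in the same second. If I read the docs correctly, the schedule field also accepts a cron-like expression with a seconds field, so something like the following should spread the load (the offset, name, and URL here are hypothetical):

- type: http
  name: name-02
  schedule: '15 * * * * * *' # my guess at the cron syntax: run at second 15 of every minute
  urls:
    - url-02

I have not verified whether that actually changes the memory profile, though.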
I am having trouble with Heartbeat not being able to keep up and not pushing all events into Logstash. The heartbeat process consumes all available memory, eventually exhausts swap, and crashes.
Is this a matter of not having enough resources for that many monitors? Back-of-the-envelope, 140 monitors on a 1m schedule should only be around 2-3 events per second (assuming one event per check), which does not seem like much. Or is it a misconfiguration on my part, and if so, what should I add to heartbeat.yml to get better performance?