To get my Kibana HTTPS heartbeat working, I had to add the standard ssl block, the same one I use in all my .yml files and that lets everything (mostly) talk over HTTPS successfully.
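The block itself didn't survive the post, but based on the CA path mentioned further down, the shared ssl section presumably looks something like this (the exact keys and filename here are an assumption, not the author's verbatim config):

```yaml
# Hypothetical reconstruction of the shared ssl block; the CA filename
# follows the ${path.config} reference discussed later in the post.
ssl:
  enabled: true
  certificate_authorities: ["${path.config}/elasticsearch-ca.pem"]
  verification_mode: "none"
```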
But that ended up causing Heartbeat to stop parsing that set of configs, instead spewing logs like:
2021-11-22T16:17:05.278-0800 ERROR [reload] cfgfile/list.go:69 Unable to hash given config: missing field accessing '0.ssl' (source:'/blahblah/elastic-configs/heartbeat/heartbeat-monitors/kibana.http.yml')
That error message was not informative beyond "something is wrong with the ssl section"... Some googling found a few other people with similar "unable to hash" errors, so I tried a few of the solutions.
The one that worked was replacing ${path.config}/Elasticsearch-ca.pem with the full hardcoded path to that file.
That works as a temporary measure. But since that path isn't the same across all machines (and I use common config files that get synced, to maintain my sanity), it isn't a good long-term solution.
The common Beat variables need to be properly parsed when they appear in the module-specific .ymls.
Could you please share all of your configuration files? I have tried to reproduce your issue with a minimal configuration, but for me it is working as expected.
- type: http
  id: my-kibana-http-monitor
  name: My Kibana HTTP Monitor
  schedule: '@every 10m' # every 10 minutes from start of beat
  hosts: ["https://kibana.hostname:5601"]
  ipv4: true
  mode: any
  ssl.enabled: true
  ssl:
    certificate_authorities: ["/foo/elastic-configs/heartbeat/elasticsearch-ca.pem"]
    verification_mode: "none"
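One detail worth checking (an observation, not a confirmed cause of the error): the monitor sets both the dotted `ssl.enabled` key and a nested `ssl:` map. Beats configs treat dot notation and nesting as the same tree, so the two can be merged into a single block to rule out a collision between them:

```yaml
# Equivalent, merged form of the ssl settings above (a sketch, not a confirmed fix)
ssl:
  enabled: true
  certificate_authorities: ["/foo/elastic-configs/heartbeat/elasticsearch-ca.pem"]
  verification_mode: "none"
```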
And the only change required to break/fix the errors (which of course totally block that monitor from working, though naturally the server ping monitor that doesn't require any kind of ssl keeps working throughout) is flopping the `certificate_authorities` entry between the `${path.config}` variable and the hardcoded path.
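Per the description earlier in the post, the two variants being flopped between would be (only the CA path changes; the broken/working labels reflect the behavior reported above):

```yaml
# Broken: the path variable is apparently not resolved in the reloaded monitor config
certificate_authorities: ["${path.config}/elasticsearch-ca.pem"]

# Working: full hardcoded path to the same file
certificate_authorities: ["/foo/elastic-configs/heartbeat/elasticsearch-ca.pem"]
```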