I'm trying to enable dashboards in Kibana and, in the Filebeat log, I am getting:
ERROR instance/beat.go:743 Exiting: Error importing Kibana dashboards: \
fail to create the Kibana loader: \
Error creating Kibana client: Error creating Kibana client: \
fail to get the Kibana version: \
HTTP GET request to /api/status fails: \
fail to execute the HTTP GET request: \
Get http://elk-host:5601/api/status: dial tcp 10.0.1.174:5601: \
connect: connection refused.
I run an ELK stack (sebp/elk) on one host and Filebeat on one or more remote nodes. I have a certificate/key pair set up between Filebeat and Logstash; communication works perfectly and I get logs. Here's the Filebeat side of that:
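In rough terms, the Filebeat-to-Logstash side is just the output section of filebeat.yml pointing at the CA certificate. This is a sketch, not my exact config; the hostname and certificate path are placeholders:

```yaml
# Filebeat -> Logstash output over TLS (sketch; paths are placeholders).
output.logstash:
  hosts: ["elk-host:5044"]
  # CA that signed the Logstash server certificate:
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-beats.crt"]
```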
But, I don't know how to configure Kibana reciprocally to make use of this certificate. I've studied kibana.yml, but none of the certificate settings appear relevant to what I'm trying to do.
"connect: connection refused" suggests to me that it's not a certificate error but a more general networking error: the Beat is unable to connect to elk-host:5601 at all. Are you running in containers? Are you sure that elk-host:5601 is accessible from within the container the Beat is running in?
Many thanks for getting back to me. This got lost in the holiday shuffle.
Yes, I'm running in containers (Filebeat in its own; the rest of the ELK stack together in another). I assumed Filebeat could reach elk-host:5601 from its container because it already sends log entries to Logstash at elk-host:5044 without any trouble. Maybe there's something about Filebeat's use of the Kibana API that I don't understand, and the hostname isn't resolved by Docker DNS there, whereas the configured output.logstash somehow does get that treatment?
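For reference, the dashboard import goes through the setup.kibana section of filebeat.yml, which is resolved independently of output.logstash, so the two can behave differently. A minimal sketch (the hostname is a placeholder for whatever your container network resolves):

```yaml
# Kibana endpoint used only for setup tasks (dashboards, index pattern);
# resolved separately from the Logstash output.
setup.kibana:
  host: "http://elk-host:5601"

# Ask Filebeat to load its bundled dashboards at startup.
setup.dashboards.enabled: true
```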
Well, back from the holidays, this is what I now think. I hope this comment helps someone else.
Especially since our multiple Filebeat containers will potentially (very likely) run on different hosts, we don't want to use filebeat.yml to install dashboards in Kibana. Instead, we'll do it from the larger ELK container. Right now, for instance, we're using Kibana's saved objects API to preinstall our index pattern ("filebeat-*"), and we'll find a similar solution for any dashboard we choose to deploy:
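A minimal sketch of that saved objects call, assuming Kibana is reachable at elk-host:5601; the object id and time field here are illustrative, not necessarily what we use:

```shell
# Create the filebeat-* index pattern via Kibana's saved objects API.
# The kbn-xsrf header is required by Kibana on any write request.
curl -X POST "http://elk-host:5601/api/saved_objects/index-pattern/filebeat-*" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"attributes": {"title": "filebeat-*", "timeFieldName": "@timestamp"}}'
```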
This works, and we don't need keys, certificates, etc. In terms of automatic installation, though, I don't yet know whether we'll have to jury-rig it via the ENTRYPOINT in the Dockerfile or find some other way to kick it off once Kibana's API is up.
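One sketch of "kicking it off once Kibana's API is up" is a small poll loop in the entrypoint script; the URL and sleep interval are assumptions:

```shell
#!/bin/sh
# Block until Kibana's status endpoint answers, then do the import.
KIBANA_URL="${KIBANA_URL:-http://localhost:5601}"
until curl -s -o /dev/null --fail "$KIBANA_URL/api/status"; do
  echo "waiting for Kibana at $KIBANA_URL ..."
  sleep 5
done
# ...now safe to POST the index pattern and any dashboards...
```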