Certificates between Filebeat and Kibana for Filebeat to do setup.dashboards.enabled: true

I'm trying to enable dashboards in Kibana and, in the Filebeat log, I am getting:

ERROR instance/beat.go:743 Exiting: Error importing Kibana dashboards: \
    fail to create the Kibana loader: \
    Error creating Kibana client: Error creating Kibana client: \
    fail to get the Kibana version: \
    HTTP GET request to /api/status fails: \
    fail to execute the HTTP GET request: \
    Get http://elk-host:5601/api/status: dial tcp 10.0.1.174:5601: \
    connect: connection refused.

I run an ELK stack (sebp/elk) on one host and Filebeat on one or more remote nodes. I have a certificate/key pair set up between Filebeat and Logstash; communication works perfectly and I get logs. Here's the Filebeat side of that:

output.logstash:
  hosts: [ "elk-host:5044" ]
  ssl.enabled: true
  ssl.certificate: "/etc/pki/tls/certs/logstash-beats.crt"
  ssl.key:         "/etc/pki/tls/private/logstash-beats.key"

And the Logstash side:

input
{
  beats
  {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key         => "/etc/pki/tls/private/logstash-beats.key"
  }
}

Now I'm trying to use Filebeat's configuration to enable dashboards in Kibana. With just this

setup.dashboards.enabled: true

I get the error reported above. At the end of filebeat.yml I want to add this to sort out the problem:

setup.kibana.protocol: "https"
setup.kibana.host: "elk-host:5601"
setup.kibana.ssl.enabled: true
setup.kibana.ssl.certificate: "/etc/pki/tls/certs/kibana-beats.crt"
setup.kibana.ssl.key:         "/etc/pki/tls/private/kibana-beats.key"

But I don't know how to configure Kibana reciprocally to make use of this certificate. I've studied kibana.yml, but it isn't obvious to me which, if any, of its certificate settings apply to what I'm trying to do.
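The closest candidates I can see are the server.ssl.* settings. Here's a sketch of what I imagine trying (guesswork on my part; kibana-beats.crt/kibana-beats.key would be a pair I'd generate analogously to the Logstash one):

# Sketch only -- kibana-beats.crt/.key are assumed, generated like logstash-beats:
server.ssl.enabled: true
server.ssl.certificate: "/etc/pki/tls/certs/kibana-beats.crt"
server.ssl.key:         "/etc/pki/tls/private/kibana-beats.key"

If Kibana served HTTPS with that certificate, I'd presumably also need setup.kibana.ssl.certificate_authorities on the Filebeat side pointing at whatever signed kibana-beats.crt (or at the certificate itself, if self-signed), rather than just the certificate/key pair shown above.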

How is this wiring done?

"connect: connection refused" suggests to me that this isn't a certificate error but a more general networking error: the beat is unable to connect to elk-host:5601 at all. Are you running in containers? Are you sure that elk-host:5601 is accessible from within the container the beat is running in?
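For example, from the host running the Filebeat container, something along these lines (assuming the container is named filebeat and has curl available):

# Check whether Kibana's API is reachable from inside the beat's container:
docker exec -it filebeat curl -v http://elk-host:5601/api/status

If that also comes back with "connection refused", the problem is reachability of port 5601 from that container, not certificates.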

Many thanks for getting back to me. This got lost in the holiday shuffle.

Yes, I'm running in containers (Filebeat in its own container; the rest of the ELK stack together in another). I assumed that Filebeat, from its container, can connect to elk-host:5601, because it already sends log entries to Logstash via elk-host:5044 perfectly. Maybe there's something about Filebeat's use of the Kibana API that I don't understand: perhaps the hostname isn't resolved by Docker DNS for setup.kibana, whereas Filebeat's configured output.logstash somehow does get that treatment?
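One thing I will check is whether our ELK container publishes port 5601 at all. I don't have the exact run command in front of me, but if only 5044 were published, that would explain Logstash working while the Kibana API is refused. Roughly (a sketch, not our actual command):

# Sketch of running sebp/elk with the Beats, Kibana, and Elasticsearch ports published:
docker run -d --name elk \
  -p 5044:5044 \
  -p 5601:5601 \
  -p 9200:9200 \
  sebp/elk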

Well, back from the holidays, this is what I now think. I hope this comment helps someone else.

Especially since our multiple Filebeat containers will likely be running on different hosts, we don't want to use filebeat.yml to install dashboards in Kibana. Instead, we'll do it from the ELK container itself. Right now, for instance, we're using Kibana's saved objects API to preinstall our index pattern ("filebeat-*"), and we'll find a similar solution for any dashboards we choose to deploy:

preinstall-index-pattern.sh:

#!/bin/sh
# Preinstall index pattern "filebeat-*" for Kibana's use:
curl -X POST \
  "http://localhost:5601/api/saved_objects/index-pattern/filebeat-pattern" \
  --header 'kbn-xsrf: true' \
  --header 'Content-Type: application/json' \
  --data '
    {
      "attributes" :
      {
        "title"         : "filebeat-*",
        "timeFieldName" : "@timestamp",
        "notExpandable" : true
      }
    }'

This works, and we don't need keys and certificates, etc. In terms of automatic installation, I don't yet know whether we'll have to jury-rig it via ENTRYPOINT in the Dockerfile or find some other way to trigger it once Kibana's API is up.
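If we do go the ENTRYPOINT route, I'm imagining a small wait loop along these lines (just a sketch; the installed path of the script is made up):

#!/bin/sh
# Poll Kibana's status endpoint until it answers successfully, then preinstall:
until curl -s -f -o /dev/null "http://localhost:5601/api/status"; do
  echo "Waiting for Kibana..."
  sleep 5
done
/usr/local/bin/preinstall-index-pattern.sh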

