TLS for Filebeat Kafka Output

(Ane Fassa) #1

Has anyone been successful configuring the Filebeat Kafka output to use TLS and client/server certificates to connect to Kafka? I am able to use SSL to connect to the same Kafka cluster from Logstash and other clients, but when trying to connect from Filebeat I keep getting this error:

ERR Kafka connect fails with: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)

Here is a snippet of my yml config file:

    ### Kafka output
    output:
      kafka:
        hosts: ["vm1:9093","vm2:9093","vm3:9093"]
        tls:
          certificate_authorities: ["/fs/opt/filebeat/config/testcachain.cer"]
          certificate: "/fs/opt/filebeat/config/mycert.cert.pem"
          certificate_key: "/fs/opt/filebeat/config/mycert.key.pem"
        topic: my_topic

Filebeat works if I connect to the same Kafka cluster on the cleartext port.

Note that the Kafka server cert is signed by an intermediate and a root CA, and I have added both certs into testcachain.cer.

Any help would be much appreciated.

(Steffen Siering) #2

Which Kafka version are you testing with?

How did you create the certificates? Did you try with a self-signed certificate?

Are you using client authentication?

Anything in the Kafka logs?

(Ane Fassa) #3

My Kafka cluster runs on Kafka 0.10

I requested the certificates for both servers and the client from our internal Certificate Authority. I have not tried using a self-signed certificate.

At this point my Kafka cluster is configured to accept any valid client certificate trusted by its trusted CAs. I have successfully used the same client certificate that I am using with Filebeat to authenticate the console-consumer client and the Logstash client against the Kafka cluster.

I have looked through the Kafka logs and don't see any messages at all.

Also, on the Filebeat side I can see the connection attempt in netstat:

	tcp 0 162 filebeat-client-vm:57750 vm-kafka-002:9093 ESTABLISHED

but the connection goes away shortly after.

(Steffen Siering) #4

Hm, I haven't tried SSL-based client authentication yet. So far, using Kafka with an SSL server certificate (with root + intermediate CAs) has worked great for me.

Is there anything else in the Beats logs regarding the Kafka output? The "client has run out of available brokers to talk to" error normally appears after some network problem has occurred. There may be earlier log entries that include a reason.

Unfortunately I can not tell if the problem is due to the server certificate or due to client authentication. Can you run a test without client authentication and see if Beats can connect properly to Kafka?

(Steffen Siering) #5

I just tried client authentication without any problems.

In my setup I've got a root CA (named: ca) and an intermediate CA (named: sign-ca) signing the actual certificates used by client and server.

For Beats I'm using PEM format, and for Kafka a JKS-based key store.

The trustchain for Beats is built by executing (note the order):

	$ cat certs/sign-ca.cert.pem certs/ca.cert.pem > certs/trustchain.cert.pem

The trustchain for Kafka is built by running:

	# add ca certificate to trustchain.jks
	$ keytool -importcert -file certs/ca.cert.pem -alias ca -noprompt -storepass "${PASS}" -keystore certs/trustchain.jks
	# add sign-ca certificate to certs/trustchain.jks
	$ keytool -importcert -file certs/sign-ca.cert.pem -alias 'sign-ca' -noprompt -storepass "${PASS}" -keystore certs/trustchain.jks

The trustchain.(cert.pem/jks) files are required for validation. That's why these files only contain the public certificates of the root CA and the intermediate CA.

Having server and client certificates (both signed by sign-ca) plus their private keys, next I configure Kafka to enable the SSL server certificate (no client authentication yet).

Kafka server conf:
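The broker settings for this step amount to enabling an SSL listener backed by the keystore built below. A minimal sketch for server.properties (the port and the passwords are assumptions; the keystore path matches the keytool commands that follow):

```properties
# Enable an SSL listener alongside the plaintext one (port is an assumption)
listeners=PLAINTEXT://:9092,SSL://:9093
# Server certificate + private key, imported from PEM files via keytool below
ssl.keystore.location=private/localhost.jks
ssl.keystore.password=password
ssl.key.password=password
```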


The file localhost.jks contains the server certificate + private key only, imported from PEM files (generated with openssl) using keytool:

	# create pkcs12 store
	$ openssl pkcs12 -export -out private/localhost.pkcs12 -in certs/localhost.cert.pem -inkey private/localhost.key.pem -passout "pass:password"
	# import server key into jks keystore
	$ echo "password" | keytool -importkeystore -srckeystore private/localhost.pkcs12 -destkeystore private/localhost.jks -srcstoretype pkcs12 -storepass "password"

After starting Kafka, I check that SSL validation works via openssl:

	$ openssl s_client -connect localhost:9093 -tls1_2 -CAfile certs/trustchain.cert.pem

If openssl quits immediately, something went wrong.

Next, enable client authentication and add trustchain.jks for client certificate validation in Kafka:
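A minimal sketch of the additional broker settings for this step (the truststore path matches the keytool commands above; the password is an assumption):

```properties
# Require clients to present a certificate signed by one of the trusted CAs
ssl.client.auth=required
ssl.truststore.location=certs/trustchain.jks
ssl.truststore.password=password
```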



And check once again with openssl:

	$ openssl s_client -connect localhost:9093 -tls1_2 -CAfile certs/trustchain.cert.pem -cert certs/client.localhost.cert.pem -key private/client.localhost.key.pem
	$ echo $?

Note: the client's private key is unencrypted, hence no password. This PR adds support for encrypted private keys.

If the server certificate check succeeds but client authentication fails, openssl will quit immediately with exit code 1, but no error will be written. Unfortunately, Kafka didn't log any error for me either. In this case something is wrong with the certificates or your setup.

Next, let's configure Filebeat:

    output:
      kafka:
        hosts: ["localhost:9093"]
        tls:
          certificate: ../ssl/certs/client.localhost.cert.pem
          certificate_key: ../ssl/private/client.localhost.key.pem
          certificate_authorities:
            - ../ssl/certs/trustchain.cert.pem

Using this setup, Filebeat and Kafka do indeed perform mutual SSL-based authentication.

Setting up Filebeat without a client certificate, the Kafka output fails with:

2016/08/28 15:11:12.529914 log.go:12: WARN Failed to connect to broker localhost:9093: local error: tls: no renegotiation
2016/08/28 15:11:12.529939 log.go:16: WARN kafka message: client/metadata got error from broker while fetching metadata:%!(EXTRA *net.OpError=local error: tls: no renegotiation)
2016/08/28 15:11:12.529947 log.go:16: WARN kafka message: client/metadata no available broker to send metadata request to


(Tsury) #6

[I opened a new topic]

(Steffen Siering) #7

Hi @Tsury

Please open another topic instead of hijacking this one. Then we can iterate and fix it without creating a confusing dialog between users who are potentially running into slightly different problems.



(Tsury) #8

Done, thanks.

(Ane Fassa) #9


Thanks a bunch. I am now able to connect from filebeat over SSL with certificate based client authentication.

Turns out my setup had a couple of issues:
1- I hadn't realized that my client cert had the same root CA but a different intermediate CA than my server cert. The intermediate CA for the client cert had not been added to the server's truststore. In addition, when exporting the PKCS12 to PEM I had failed to include the whole chain, so client authentication was failing because the server was not able to verify the chain.
Since the client Java keystore (JKS) used by the console-consumer had the whole chain, and the client cert's root CA was in the server's truststore, I had been able to connect with the console-consumer presenting client.jks all along, even though the intermediate client CA was not in the server's truststore.

2- My private key had to be unencrypted.

For the first issue above, implementing either one of the following allowed Filebeat to connect to Kafka over SSL with SSL-based client authentication:
1- Add the intermediate client CA to the server's truststore
2- Export a new PEM client cert with the whole certificate chain

I will try to capture some of the steps I went through in case it helps others get this working.

Exporting the unencrypted private key from PKCS12 into PEM:

	$ openssl pkcs12 -in clientcert.p12 -nocerts -nodes -out client-unenc.key.pem

Exporting the public key and whole certificate chain from PKCS12 into PEM:

	$ openssl pkcs12 -in clientcert.p12 -nokeys -out clientcert-WChain.cert.pem

Configuring TLS for the Kafka output in Filebeat:

    output:
      kafka:
        hosts: ["vm1:9093","vm2:9093","vm3:9093"]
        tls:
          certificate: /fs/opt/filebeat/config/clientcert-WChain.cert.pem
          certificate_key: /fs/opt/filebeat/config/client-unenc.key.pem
          certificate_authorities: ["/fs/opt/filebeat/config/testcachain.cer"]

Checking the actual exit code for openssl, as suggested, helped as well, since the openssl output was confusing. When client authentication was failing I was getting a not-very-useful error:

	140524904019784:error:1408E0F4:SSL routines:SSL3_GET_MESSAGE:unexpected message:s3_both.c:491:

But everything else in the openssl output looked fine, including the last line:

	Verify return code: 0 (ok)

So checking $? (0 for success, 1 for failure) gives a more straightforward indication of success or failure.

Something else to note is that openssl seems to disregard the chain in the client certificate and instead only uses the CA certificates in CAfile to verify or build the chain. So in the case where I did not add the client intermediate CA to the server's truststore, in order to have openssl connect successfully with client authentication I had to add the client intermediate CA cert to the CAfile provided to openssl.

So I built a trust PEM that looks like this:

	$ cat server-intermediateCA.cert.pem server_client-RootCA.cert.pem client-intermediateCA.cert.pem > openssl-trust.cert.pem

And the openssl command as:

	$ echo QUIT | openssl s_client -connect vm1:9093 -key /fs/opt/filebeat/config/client-unenc.key.pem -cert /fs/opt/filebeat/config/clientcert-WChain.cert.pem -CAfile /fs/opt/filebeat/config/openssl-trust.cert.pem
	$ echo $?

Thanks again for helping me to get SSL server and client authentication working for filebeat Kafka output.


(Steffen Siering) #10

Great that you got this working.

By the way, in the upcoming beta1 release the tls section has been renamed to ssl, and some other SSL config options changed slightly. That release also adds support for encrypted key files (configure key_passphrase).

(Ane Fassa) #11

One more question... Does Filebeat support the use of SANs in the server certificate? I am trying to use the same server certificate for all my Kafka servers by listing each broker's DNS name as a SAN.

But with this setup, so far I can only get Filebeat to connect to the server whose DNS name is in the CN.

(Steffen Siering) #12

TLS is based on Go's tls package as provided. While I haven't tried SANs for domains, I would assume they are supported. At least the loaded certificate type used in the code contains a DNSNames array with the list of known domains.

Alternatively, I'd use one Kafka signing certificate (used to configure Beats and to sign the Kafka server certs) and create one certificate per Kafka broker on startup (e.g. like Let's Encrypt using the ACME protocol). Sure, this means running another service, but adding new Kafka nodes becomes more straightforward. Plus (while not yet supported in Beats or LS) one can implement certificate revocation via CRL or OCSP if some certificates are found to be compromised. That said, just creating new certificates and reconfiguring all services might do the trick too, with less complexity required (though services will be unavailable during the switch, and you must not miss any service).
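To rule Filebeat in or out, it can help to check the SAN handling with openssl alone first. A minimal sketch that generates a self-signed certificate carrying multiple DNS SANs and prints them back (the hostnames and file names are placeholders, not the certificates from this thread):

```shell
# Minimal openssl config carrying a SAN list (hostnames are placeholders)
cat > san.cnf <<'EOF'
[req]
distinguished_name = dn
x509_extensions = v3_req
prompt = no
[dn]
CN = vm1
[v3_req]
subjectAltName = DNS:vm1, DNS:vm2, DNS:vm3
EOF

# Generate a self-signed certificate using those extensions
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout san.key.pem -out san.cert.pem -config san.cnf

# Print the SANs actually embedded in the certificate
openssl x509 -in san.cert.pem -noout -text | grep -A1 "Subject Alternative Name"
```

If all the DNS entries show up here but Filebeat still only connects to the CN host, the problem is on the client side rather than in the certificate itself.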

(system) #13

This topic was automatically closed after 21 days. New replies are no longer allowed.