Error Return code 21: SSL encryption between filebeat 5.2 and kafka 1.0 (self-signed)

So something elsewhere requires that you have the ONSL listener. So, three questions...

  1. Did kafka start? I think it did.
  2. What did filebeat do once kafka came up?
  3. Can you list both listeners with INSL first? That may break other stuff, but it could get us to a better understanding of what the problem is.

Hi,

  1. Did kafka start? I think it did. - Yes
  2. What did filebeat do once kafka came up? - tls: first record does not look like a TLS handshake
  3. Can you list both listeners with INSL first? - Tried it; the config and the error it produced are below.

Config

# advertised listener part
#advertised.listeners=INSL://<IP>:9093
advertised.listeners=INSL://<IP>:9092,INSL://<IP>:9093
inter.broker.listener.name=INSL
listener.security.protocol.map=INSL:SSL,ONSL:PLAINTEXT

Error

[2018-02-09 16:06:05,668] FATAL  (kafka.Kafka$)
java.lang.IllegalArgumentException: requirement failed: Each listener must have a different name, listeners: INSL://172.16.0.110:9092,INSL://172.16.0.110:9093

and the kafka process stopped.

Try

advertised.listeners=INSL://<IP>:9093,ONSL://<IP>:9092

Tried that; kafka didn't throw any error, but filebeat shows the same error: tls: first record does not look like a TLS handshake.
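For what it's worth, one way to check whether the broker port is actually speaking TLS (a sketch, assuming the same broker IP and port 9093 used elsewhere in this thread) is to probe it with openssl s_client; a PLAINTEXT listener returns no certificate, and on the Go/filebeat side that typically surfaces as exactly this "first record does not look like a TLS handshake" error.

# Probe the broker port directly; an SSL listener should return a certificate chain.
# If the listener is PLAINTEXT, the TLS handshake fails and no certificate is printed.
openssl s_client -connect <kafka IP>:9093 </dev/null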

Also, when I generated the kafka cert, my openssl config was:

cat > cert_info_kafka << EOF
[req]
default_bits = 2048
prompt = no
default_md = sha512
req_extensions = req_ext
distinguished_name = dn

[ dn ]
C=US
ST=xx
L=xx
O=xxx
OU=xx
emailAddress=xx@xx.com
CN = kafka

[ req_ext ]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names

[ alt_names ]
IP.1 = <kafka IP>

[ usr_cert ]
# Extensions for server certificates.
basicConstraints = CA:FALSE
nsCertType = client, server
nsComment = "OpenSSL Kafka Server / Client Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, digitalSignature, keyEncipherment, keyAgreement, nonRepudiation
extendedKeyUsage = serverAuth, clientAuth

EOF

But shouldn't CA be true instead? I'm wondering whether I made a mistake during cert generation. Any thoughts? I was hoping to use SANs so I could list all the kafka IPs while generating the cert and use that one cert across the cluster.
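As an aside, a quick way to confirm what actually ended up in a generated certificate (sketched here with a <cert file> placeholder rather than a real path from this thread) is to dump it and look for the SAN extension:

# Dump the certificate and look for a Subject Alternative Name section.
# A certificate generated without SANs will show no such section at all.
openssl x509 -noout -text -in <cert file> | grep -A1 "Subject Alternative Name"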

I know nothing about generating certificates (our corporate CA handles that for me), but in post 6 of this thread Steffens provided links to the documentation. Before you worry about SANs, just generate a name-matched cert for your kafka server, exactly following the documented steps. Get the most basic use case working and worry about the extras afterwards.

Thanks so much, Badger and everyone. I will check the details further and update the thread with my findings.

Hi Badger, Steffen, Andrew

I made some progress over the weekend; apparently I messed something up while generating my self-signed CA. I am able to send metrics from filebeat to kafka with ssl enabled on both the kafka and filebeat ends when ssl.verification_mode: none is set, but when I comment out that setting I see a different issue.

Working config

Filebeat

Kafka output

output.kafka:
  hosts: ["<kafka IP>:9093"]
  topic: '%{[type]}'
  ssl.certificate_authorities: ["/etc/kafka-fb.pem"]
  ssl.verification_mode: none
  compression: gzip

Kafka config

listeners=ONSL://<kafka IP>:9092,INSL://<kafka IP>:9093


# advertised listener part
advertised.listeners=ONSL://<kafka IP>:9092,INSL://<kafka IP>:9093
inter.broker.listener.name=INSL
listener.security.protocol.map=INSL:SSL,ONSL:PLAINTEXT


# ssl encryption config from docs
#ssl.keystore.location=/opt/obuildfactory/jdk-1.8.0-openjdk-x86_64/jre/bin/kafka_certs/server.keystore.jks
#ssl.keystore.password=<password>
ssl.key.password=<password>
ssl.truststore.location=/opt/obuildfactory/jdk-1.8.0-openjdk-x86_64/jre/bin/kafka_certs/client.truststore.jks
ssl.truststore.password=<password>
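
One thing that stands out in the broker config above, though it may just be an artifact of the paste: the ssl.keystore.* lines are commented out, and an SSL listener normally needs a keystore configured, since that is where the certificate the broker presents to clients lives. A sketch, simply reusing the commented-out paths from above:

# The broker presents the certificate from this keystore on its SSL (INSL) listener.
ssl.keystore.location=/opt/obuildfactory/jdk-1.8.0-openjdk-x86_64/jre/bin/kafka_certs/server.keystore.jks
ssl.keystore.password=<password>
ssl.key.password=<password>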

From the docs, the ssl.verification_mode: none setting is not advisable, and when I don't use it in the filebeat config the logs show:

2018-02-12T17:40:31Z WARN kafka message: Initializing new client
2018-02-12T17:40:31Z WARN client/metadata fetching metadata for all topics from broker 172.16.0.110:9093

2018-02-12T17:40:32Z WARN Failed to connect to broker <kafka IP>:9093: x509: cannot validate certificate for 172.16.0.110 because it doesn't contain any IP SANs

2018-02-12T17:40:32Z WARN kafka message: client/metadata got error from broker while fetching metadata:%!(EXTRA x509.HostnameError=x509: cannot validate certificate for <kafka IP> because it doesn't contain any IP SANs)
2018-02-12T17:40:32Z WARN kafka message: client/metadata no available broker to send metadata request to
2018-02-12T17:40:32Z WARN client/brokers resurrecting 1 dead seed brokers
2018-02-12T17:40:32Z WARN client/metadata retrying after 250ms... (3 attempts remaining)

2018-02-12T17:40:32Z WARN client/metadata fetching metadata for all topics from broker <kafka IP>:9093

I use IPs because we don't have DNS resolution in our pre-prod environments or in the dev setup where I am trying things out, so I was using IP SANs while generating the certificate. Is there anything that can be done to get it working without ssl.verification_mode: none set? Please advise.

Regards,
Gangadhar

Go through the certificate generation process again, but this time include the IP SANs.

I did that the previous time; however, when I check the pem file contents there are no SAN details. Let me re-try generating the CA. Thanks!

Hi Badger,

I re-generated the kafka CA and the required kafka keystore and truststore, and converted the cert to a pem file to use in filebeat. Still no luck.

Below is the process I used to generate the CA. Please let me know if I missed something.

CA config file
cat > updated_cert_info_kafka << EOF
[req]
default_bits = 2048
prompt = no
default_md = sha512
req_extensions = req_ext
distinguished_name = dn

[ dn ]
C=US
ST=Texas
L=Dallas
O=<val>
OU=<val>
emailAddress=<val>
CN = kafka

[ req_ext ]
basicConstraints = CA:TRUE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment, keyCertSign, cRLSign
subjectAltName = @alt_names

[ alt_names ]
IP.1 = <kafka IP>

EOF
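
One detail worth flagging, purely as a possibility and not something confirmed in this thread: when openssl req is run with -x509 to produce a self-signed certificate, the extensions it adds come from the section named by x509_extensions in [req]; req_extensions only applies to certificate signing requests. With only req_extensions set, the self-signed cert can come out as a plain v1 certificate with no extensions at all, which is what the certificate dump further down shows. A minimal sketch of the adjustment:

[req]
default_bits = 2048
prompt = no
default_md = sha512
distinguished_name = dn
req_extensions = req_ext
# openssl req -x509 takes its extensions from x509_extensions, not req_extensions,
# so point it at the same section to get the SAN into the self-signed cert.
x509_extensions = req_ext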

From the kafka docs:

keytool -keystore server.keystore.jks -alias localhost -validity 365 -genkey -keyalg RSA

keytool -list -v -keystore server.keystore.jks

openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 -config updated_cert_info_kafka

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert

keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file

openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:{ca-password}

keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
openssl x509 -in ca-cert -out mycert.pem -outform PEM
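
It may also be worth noting, again as an assumption rather than a confirmed diagnosis: openssl x509 -req does not copy extensions from the CSR by default, so even if the request carried a subjectAltName it would be dropped at the signing step unless an extensions file is passed. A sketch of how the IP SAN is commonly carried through; the san.cnf file, the v3_kafka section name, and the keytool -ext flag are illustrative here, not taken from the thread:

# Generate the broker key pair with the IP SAN included in the keystore's certificate.
keytool -keystore server.keystore.jks -alias localhost -validity 365 -genkey -keyalg RSA -ext SAN=IP:<kafka IP>

# Put the same SAN into a small extensions file for the signing step.
cat > san.cnf << EOF
[ v3_kafka ]
subjectAltName = IP:<kafka IP>
EOF

# Sign the CSR and explicitly attach the SAN; without -extfile, openssl x509 -req
# drops any extensions that were requested in the CSR.
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:{ca-password} -extfile san.cnf -extensions v3_kafka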

These are the certificate details:

openssl x509 -text -noout -in

Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number:
            <serial number>
    Signature Algorithm: sha512WithRSAEncryption
        Issuer: C=US, ST=Texas, L=Dallas, O=<val>, OU=<val>/emailAddress=<val>, CN=kafka
        Validity
            Not Before: Feb  9 23:21:38 2018 GMT
            Not After : Feb  9 23:21:38 2019 GMT
        Subject: C=US, ST=Texas, L=Dallas, O=<val>, OU=<val>/emailAddress=<val>, CN=kafka
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    <key>
                Exponent: 65537 (0x10001)
    Signature Algorithm: sha512WithRSAEncryption
        <key>

Filebeat error

2018-02-12T19:31:24Z WARN Failed to connect to broker 172.16.0.110:9093: x509: cannot validate certificate for 172.16.0.110 because it doesn't contain any IP SANs

2018-02-12T19:31:24Z WARN kafka message: client/metadata got error from broker while fetching metadata:%!(EXTRA x509.HostnameError=x509: cannot validate certificate for 172.16.0.110 because it doesn't contain any IP SANs)

Kindly advise
