Kibana connecting to IPs not FQDN

Hey there,

Setting up some security on a test cluster here and it is driving me a bit nuts.
I can see in my Kibana log that it is doing this:

Line 1359: {"type":"log","@timestamp":"2019-10-18T10:01:31Z","tags":["error","elasticsearch","admin"],"pid":7431,"message":"Request error, retrying\nGET https://10.0.10.70:9200/.kibana/doc/config%3A6.8.0 => Hostname/IP does not match certificate's altnames: IP: 10.0.10.70 is not in the cert's list: "}

In my kibana.yml I have the hosts specified with FQDNs, and the same in the Elasticsearch hosts yml file, so why is it doing this? Of course the IP is not in the certificate I'm using, but the hostname is; it's a *.domain.com certificate, so the hosts should be fine. The strange thing is that when Kibana starts up I can access the hosts just fine; it's only after a few minutes that they stop working and I can see this in the log file.

My kibana.yml

server.port: 5601
server.host: FQDN
server.name: FQDN
elasticsearch.hosts:
  - https://host1FQDN:9200
  - https://host2FQDN:9200
elasticsearch.sniffInterval: 60000
elasticsearch.sniffOnConnectionFault: true
elasticsearch.sniffOnStart: false
elasticsearch.preserveHost: true
kibana.index: ".kibana"
elasticsearch.username: "kibana"
elasticsearch.password: "password"
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/certificate
server.ssl.key: /etc/kibana/key
elasticsearch.ssl.certificate: /etc/kibana/certificate
elasticsearch.ssl.key: /etc/kibana/key
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/CA_bundle.crt" ]
elasticsearch.ssl.verificationMode: full

Hope someone has an idea

Best regards
Thomas

If you access the ES host(the one in the error message, so 10.0.10.70) with the FQDN using cURL, do you get the same error?
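Something along these lines, reusing the CA bundle path and FQDN placeholders from your kibana.yml (adjust to your real hostname):

curl -u kibana:password --cacert /etc/kibana/CA_bundle.crt "https://host1FQDN:9200/_xpack?pretty" --verbose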

If I try to connect with

curl -u kibana:password "https://10.0.10.70:9200/_xpack?pretty" --verbose

Then I get

curl: (51) SSL: certificate subject name (*.domain.com) does not match target host name '10.0.10.70'

I have the CA certificate stored in /etc/ssl/certs on my Ubuntu server.
So yeah, same kind of error, so maybe the problem originates from the Elastic nodes. I have some other problems with them as well, so I have another post with my elasticsearch.yml here.
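As a cross-check, curl's --resolve option can pin the FQDN to that IP, which should show whether the certificate validates when the node is addressed by name (host1FQDN standing in for an actual name covered by the cert):

curl -u kibana:password --resolve host1FQDN:9200:10.0.10.70 "https://host1FQDN:9200/_xpack?pretty"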

I think the issue originates there, as I can see from "_nodes/_all/settings?pretty"
that it says

"transport_address" : "10.0.10.70:9300",
"host" : "10.0.10.70",

Those should be the FQDN. Now I just need to figure out why, and get it changed.

Best regards
Thomas

What does the elasticsearch.yml look like on the 10.0.10.70 node?

It can take a little bit of magic to get Elasticsearch to publish FQDNs rather than IPs, so it will be easier to give you advice if we can see your starting point.

Hi Tim,

I also have some other issues with Elastic, so I have posted my yml file under Elastic here

Hope you can make some sense out of all of it.

Best regards
Thomas

network.host: [_site_, _local_]

You're telling ES to bind to site-local IP address(es) of the host on which it is running. It will therefore advertise itself as running on an IP address and not a FQDN.

When you enable node sniffing, Kibana discovers nodes via the addresses they advertise. Since your nodes advertise an IP, Kibana connects via that IP and expects the certificate to match it.

If you're using _site_ addresses and TLS certs that use DNS names, then you typically want to set network.publish_host to the FQDN that matches your cert.
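A minimal per-node sketch (the hostname is just an example; each node needs a name that is covered by your *.domain.com cert and resolves to that node's own IP):

network.host: [_site_]
network.publish_host: "es-node1.domain.com"    # example name; must resolve to this node's IP

Kibana's sniffer will then learn the FQDN instead of the IP, so the hostname check against the wildcard cert can pass.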

Thanks for the explanation. I just tried changing the settings on my Elastic hosts:

network.host: [_site_]
network.publish_host: "esdata.domain.com"

But after restarting my two Elastic data hosts, I can't connect to them with the same user as I could before!?

Best regards
Thomas

Check the Elasticsearch logs; my guess is that your nodes can't reach one another if they use that publish_host.

Yes, that looks to be what is happening, but I'm confused about how to make them able to connect to each other again, because "discovery.zen.ping.unicast.hosts" is set to my two hosts' FQDNs - this has always worked fine.
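Maybe I need a distinct publish_host per node, something like this? (esdata01/esdata02 are made-up names; each would need a DNS record pointing at that node's own IP, and both are covered by the *.domain.com cert.)

Node 1:
network.publish_host: "esdata01.domain.com"    # example; must resolve to node 1's IP

Node 2:
network.publish_host: "esdata02.domain.com"    # example; must resolve to node 2's IP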
