Certificates and keys for Kibana and Logstash with X-Pack

Hello there,

I'm setting up ELK security with X-Pack. I generated the CA and certs as suggested by the docs:

bin/elasticsearch-certutil ca
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

Shipped them to all the Elasticsearch nodes and everything worked fine.

On Kibana, per the document:
https://www.elastic.co/guide/en/kibana/6.3/configuring-tls.html

It says:

" Generate a server certificate for Kibana.

You must either set the certificate’s subjectAltName to the hostname, fully-qualified domain name (FQDN), or IP address of the Kibana server, or set the CN to the Kibana server’s hostname or FQDN. Using the server’s IP address as the CN does not work."

My question is: how do I generate this server certificate for Kibana? Do I use the same tool as on Elasticsearch, elasticsearch-certutil? Could you please let me know how to use this tool to generate the Kibana server certificate?

Also, for the Logstash pipeline output to Elasticsearch, what should we put in for "cacert =>"?

I keep getting the following errors in logstash.log... something to do with the setting for "cacert" in the pipeline. Please help.

[2018-09-28T20:07:08,459][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>".monitoring-logstash", :plugin=>"#LogStash::OutputDelegator:0x699a3513", :error=>"signed fields invalid", :thread=>"#<Thread:0x75e04fbd run>"}

[2018-09-28T20:07:08,461][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#LogStash::OutputDelegator:0x4e79c8d8", :error=>"signed fields invalid", :thread=>"#<Thread:0x2d9111c8@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:48 run>"}

Thanks a lot in advance

Li

My question is, how to generate this server certificate for Kibana?

This is slightly different. Users will access Kibana via their browser, so the certificate that Kibana will use for https needs to be one that the browsers can trust. This usually means that you generate a CSR (certificate signing request) and have it signed by a trusted (public or corporate) CA. You can use elasticsearch-certutil to create a CSR; see elasticsearch-certutil | Elasticsearch Guide [8.11] | Elastic

For example:

bin/elasticsearch-certutil csr --dns kibana.example.com 

Now, under certain circumstances (i.e. if the number of users accessing Kibana is small, you can control the trust anchors in the users' browsers or OSes, testing reasons, etc.) you might want to use self-signed certificates or certificates signed by a CA that the browsers do not trust by default. Keep in mind that this will cause the browser to show a warning.

You can use elasticsearch-certutil to create a server certificate for Kibana, but Kibana doesn't yet support the PKCS#12 format, so you'd need to create a PEM-encoded key and certificate (by specifying the --pem parameter). An example invocation would be:

bin/elasticsearch-certutil cert --pem --ca path/to/your.p12 --dns kibana.example.com
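Once you have the PEM files (certutil writes an instance.crt and instance.key by default, but the names depend on how you run it), the corresponding kibana.yml settings would look roughly like the following. This is a minimal sketch; the paths are placeholders:

server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/certs/kibana.crt
server.ssl.key: /etc/kibana/certs/kibana.key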

Also, for logstash pipeline output to elasticsearch, what should we put in for "cacert =>"?

You need to set the CA cert file that you have created with certutil. However, the Logstash Elasticsearch output plugin doesn't support the PKCS#12 format, so you would need to export the CA certificate in PEM format like so:

openssl pkcs12 -in ca.p12 -clcerts -nokeys -chain -out ca.pem

and use that as the value of cacert.
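For illustration, a minimal sketch of the output block would be something like the following (hostname, credentials and paths are placeholders):

output {
  elasticsearch {
    hosts => ["https://your-es-host:9200"]
    ssl => true
    cacert => "/etc/logstash/certs/ca.pem"
    user => "logstash_writer"
    password => "changeme"
  }
}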


Thank you, I will try and see how it goes... this is very helpful.

I regenerated the keys and certs... also converted to .pem as suggested above...

It seems Kibana works fine, but on Logstash I used the ca.pem file for cacert. Now I get the following error in logstash.log, and Logstash is not pulling in any data from any of the Beats:

[2018-10-03T04:44:13,747][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Manticore::UnknownException: Host name 'elastichostname' does not match the certificate subject provided by the peer (CN=instance)>.

//////////////

Here is the logstash pipeline conf:

output {
  elasticsearch {
    user => "logstash_ingest"
    password => "changeme"
    ssl => true
    ssl_certificate_verification => true
    cacert => "/etc/logstash/keys/elastic-stack-ca.pem"
    action => "index"
    hosts => ["elasticnodehostname"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

////////////////////////

Here is logstash yml:

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: ["elasticnodehostname:9200"]
xpack.monitoring.elasticsearch.ssl.ca: "/etc/logstash/keys/elastic-stack-ca.pem"
xpack.monitoring.elasticsearch.sniffing: false
xpack.monitoring.collection.interval: 60s
/////////////////

Please let me know what I missed or did wrong....

Thanks a lot

Li

Hi

The issue is that Logstash's Elasticsearch output plugin performs validation of the certificate that Elasticsearch uses for TLS, as instructed by

ssl_certificate_verification => true

and it fails because of a check called hostname validation: a check that either the CN or one of the SANs included in an X.509 certificate matches the hostname of the host that uses that certificate for TLS.

In more concrete terms, you have created your certificate with a SAN of XXXXX (presumably passing --dns XXXXX in the certutil command) but your Elasticsearch host uses a hostname of YYYYY (check what you use in the hosts => [] param in your Logstash config). You need to make sure these two match by changing one of them.
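If you want to double-check which names a certificate actually contains, one way (assuming openssl is available; the path and the keystore password are whatever you used with certutil) is to extract the certificate from the PKCS#12 file and print it:

openssl pkcs12 -in elastic-certificates.p12 -nokeys -clcerts | openssl x509 -noout -text | grep -A1 -E 'Subject:|Subject Alternative Name'

The Subject (CN) and the Subject Alternative Name entries are what hostname validation compares against the hostname you connect to.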

Hi Ikakavas,

Thank you for your patience...
I'm really stuck on the Logstash SSL setup.

I generated the certs and keys for Kibana and Logstash (they are on the same host) like this:

bin/elasticsearch-certutil cert --pem -ca path/to/your.p12 --dns
Then I used openssl to get the ca.pem and used this pem for the value of "cacert".

I'm still getting the same error as below:

Error registering plugin {:pipeline_id=>".monitoring-logstash", :plugin=>"#LogStash::OutputDelegator:0x6a818f22", :error=>"Host name '' does not match the certificate subject provided by the peer (CN=instance)

Should we use the Logstash/Kibana hostname to generate it with "bin/elasticsearch-certutil cert --pem -ca path/to/your.p12 --dns "?

What should we put for the value of --dns, the Logstash hostname or the Elasticsearch hostname?

Thanks a lot

Li

Kibana is a server and requires a certificate to use for https when clients connect to it. This is what we were discussing above, but why did you create a certificate and key for Logstash? The only reason you would need this is to do TLS client authentication of Logstash to Elasticsearch, but your Logstash Elasticsearch output plugin configuration shows you don't do that.

This is exactly the same error you encountered before and I explained above what that means:

When you use this command to generate a key and certificate for Kibana, you need to use the hostname or FQDN of Kibana. This is however irrelevant to your Logstash problem; see again my answer with regard to the Kibana certificate, and if there are any questions on that we can discuss them in a separate answer.

Let's take a step back from the above and focus on your Logstash issues. Logstash attempts to communicate with Elasticsearch over https on port 9200. Elasticsearch is configured for TLS on the http layer (you never showed your config, but I assume so from the errors) with:

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12 
xpack.security.http.ssl.truststore.path: certs/elastic-certificates.p12 

This elastic-certificates.p12 contains the cert and the key that Elasticsearch uses for TLS on the http layer. Since you didn't provide a DNS name when you ran the

bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

command, the certificate was created with a default subject of CN=instance.
For TLS, that means that when a client connects over https, Elasticsearch says "Hi, I'm CN=instance, this is my certificate"

Cue Logstash now. The same applies to monitoring and the Elasticsearch output plugin since your config is similar, but let's look at the output plugin as an example. You have it configured with

ssl => true
ssl_certificate_verification => true
cacert => "/etc/logstash/keys/elastic-stack-ca.pem"
action => "index"
hosts => ["elasticnodehostname"]

This tells the plugin to connect to https://elasticnodehostname:9200 and use /etc/logstash/keys/elastic-stack-ca.pem to verify Elasticsearch's certificate. What happens is that the plugin connects to https://elasticnodehostname:9200 and Elasticsearch replies with "Hi, I'm CN=instance, this is my certificate". The plugin can verify the certificate's authenticity, as it is signed by the /etc/logstash/keys/elastic-stack-ca.pem CA certificate, but hostname verification fails: the plugin connects to elasticnodehostname and Elasticsearch presents a certificate that says it is CN=instance.
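If you want to see this exchange for yourself, you could (assuming openssl is available on the Logstash host) ask Elasticsearch for the certificate it presents and print its subject:

echo | openssl s_client -connect elasticnodehostname:9200 2>/dev/null | openssl x509 -noout -subject

In your case this should print something like subject=CN=instance, i.e. no mention of elasticnodehostname.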

Hope the above helps with understanding what the issue is.

To solve it, you need to make sure that the certificate included in the certs/elastic-certificates.p12 that Elasticsearch uses has a correct DNS SAN in it, so that it matches its hostname/FQDN. So, for example, if your Elasticsearch is reached at https://my.elasticsearch.com:9200, recreate elastic-certificates.p12 with

bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --dns my.elasticsearch.com
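As a side note, certutil accepts comma-separated values for --dns (and there is also an --ip option), so if a node is reached by more than one name, or by IP address, you can include them all in a single certificate. The names and address below are just placeholders:

bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --dns my.elasticsearch.com,esnode1.internal --ip 10.1.2.3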

Thank you very very much, indeed.
I followed your suggestion, and now things are a lot better.

Here is what I did:

On Elasticsearch node 1:

  1. bin/elasticsearch-certutil ca
  2. bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --dns <elasticnode1.com>
  3. bin/elasticsearch-certutil cert --pem -ca /path/to/elastic-stack-ca.p12 --dns
  4. openssl pkcs12 -in elastic-stack-ca.p12 -clcerts -nokeys -chain -out elastic-stack-ca.pem
  5. Copied the certs/pem/crts to the Kibana and Logstash node (they are co-located on the same server).
  6. Modified kibana.yml; it started fine.
  7. Modified logstash.yml and pipeline.conf.

Here I have an issue. I have 2 Elasticsearch nodes, node1 and node2, and I generated all the certs/crts on node1. Now, in logstash.yml, if I put both nodes in xpack.monitoring.elasticsearch.url, Logstash complains about node2, saying it cannot connect to node2. If I only put node1 in xpack.monitoring.elasticsearch.url, it works fine...
I tried setting xpack.monitoring.elasticsearch.ssl.verification_mode to none; still the same.
In the pipeline.conf, for the value of "hosts" I can only use node1... I cannot put node2 (it didn't work for node2 either).

Here are the errors I got:

[2018-10-04T19:24:52,958][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://logstash_ingest:xxxxxx@elasticnode1.com:9200/"}
[2018-10-04T19:24:53,015][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-10-04T19:25:03,047][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://logstash_ingest:xxxxxx@elasticnode2.com:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://logstash_ingest:xxxxxx@elasticnode2.com:9200/][Manticore::ConnectTimeout] Read timed out"}
...
[2018-10-04T20:21:50,389][WARN ][logstash.outputs.elasticsearch] Error while performing resurrection {:error_message=>"Host name 'elasticnode2.com' does not match the certificate subject provided by the peer (CN=instance)", :class=>"Manticore::UnknownException", :backtrace=>

Here is the logstash.yml:

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.elasticsearch.url: ["https://elasticnode1.com:9200", "https://elasticnode2.com:9200"]
#xpack.monitoring.elasticsearch.url: ["https://elasticnode1.com:9200"]
xpack.monitoring.elasticsearch.ssl.ca: "/etc/logstash/keys/elastic-stack-ca.pem"
xpack.monitoring.elasticsearch.ssl.verification_mode: none
xpack.monitoring.elasticsearch.sniffing: false
xpack.monitoring.collection.interval: 60s
#xpack.monitoring.collection.pipeline.details.enabled: true

Here is the pipeline.conf:

output {
  elasticsearch {
    user => "logstash_system"
    password => "changeme"
    ssl => true
    ssl_certificate_verification => true
    cacert => "/etc/logstash/keys/elastic-stack-ca.pem"
    action => "index"
    hosts => ["elasticnode1.com","elasticnode2.com"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

Here is the elasticsearch.yml (both nodes are the same):

discovery.zen.ping.unicast.hosts: ["elasticnode1.com", "elasticnode2.com"]
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
xpack.monitoring.collection.interval: 60s
xpack.monitoring.collection.cluster.stats.timeout: 60s
xpack.monitoring.history.duration: 90d
xpack.watcher.history.cleaner_service.enabled: true
xpack.http.proxy.host: 'ourproxyhostname.com'
xpack.http.proxy.port: 3128
xpack.watcher.enabled: true
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: /etc/elasticsearch/keys/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: /etc/elasticsearch/keys/elastic-certificates.p12
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/keys/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/keys/elastic-certificates.p12
xpack:
  security:
    authc:
      realms:
        active_directory:
          type: active_directory
          order: 0
          domain_name: xxx.yyy.com
          files.role_mapping: /etc/elasticsearch/role_mapping.yml
          bind_dn: CN=admin,CN=Users,DC=xxx,DC=yyy,DC=com
          bind_password: password

The problem is that if elasticnode1 goes down, we will lose the connection between Logstash and the Elasticsearch cluster (all the Beats come in via Logstash). Could you please take a look and help?

Again, thank you very much for your help, indeed

Li

This, as we discussed above, creates a PKCS#12 store that contains the certificate that Elasticsearch will use for TLS. Since each Elasticsearch node has a different hostname, you need to do this on each node with the correct --dns parameter each time. Copy only the elastic-stack-ca.p12 to the second node and run

bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --dns elasticnode2.com 

on the second node. Then use that for setting

xpack.security.http.ssl.keystore.path: /etc/elasticsearch/keys/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: /etc/elasticsearch/keys/elastic-certificates.p12

instead of the elastic-certificates.p12 you had copied over from the first node.

Do you do this to create the Kibana certificate? If so, as I have mentioned already above:

  1. You need to add something after --dns, and that is the hostname of Kibana, i.e. how your users will access it.
  2. I'm just reiterating that this will be a certificate signed by a local CA and your users' browsers won't trust it (every user will get a warning and they'll need to add a security exception just to reach Kibana). If this is an issue for you, I have already explained above how you can generate a CSR, and then you can use that to get a certificate from a trusted Certificate Authority.

Thank you very much for your help. This indeed helped me a lot.
I think there will be more people facing similar questions to the ones I had; this post will help a lot...

Thanks a lot

Li


Hello again,

I set up SSL/TLS on Logstash/Elasticsearch and Kibana as indicated above.
Everything looks fine, all are up and running, and I can see that the Beats (file/metric/etc.) are sending data, via Logstash, to the Elasticsearch nodes, and it shows up in Kibana (Discover).
However, I still see the following errors in logstash-plain.log, as below. It complains about all the Elasticsearch nodes, but the Beats pipeline seems to be working fine.

[2018-10-09T11:21:02,810][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Host name 'elastiocnode1 IP' does not match the certificate subject provided by the peer (CN=instance)", :class=>"Manticore::UnknownException", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:37:in `block in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:79:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:74:in `perform_request'" ...
[2018-10-09T11:21:32,810][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Host name 'elastiocnode2 IP' does not match the certificate subject provided by the peer (CN=instance)", :class=>"Manticore::UnknownException", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:37:in `block in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:79:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output- ...

...............

Here is the logstash.yml

============
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.elasticsearch.url: ["https://elasticnode1:9200", "https://elasticnode2.hls.dxc.com:9200" ]
xpack.monitoring.elasticsearch.ssl.truststore.path: "/etc/logstash/elastic-certificates.p12"
xpack.monitoring.elasticsearch.ssl.truststore.password: password
xpack.monitoring.elasticsearch.ssl.keystore.path: "/etc/logstash/elastic-certificates.p12"
xpack.monitoring.elasticsearch.ssl.keystore.password: password
xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
xpack.monitoring.elasticsearch.sniffing: true
xpack.monitoring.collection.interval: 60s
xpack.monitoring.collection.pipeline.details.enabled: true

Here is the Elasticsearch config on both Elasticsearch nodes (each node has its own elastic-certificates.p12 corresponding to its own hostname):

xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
xpack.monitoring.collection.interval: 60s
xpack.monitoring.collection.cluster.stats.timeout: 60s
xpack.monitoring.history.duration: 90d
xpack.watcher.history.cleaner_service.enabled: true
xpack.http.proxy.host: 'proxy host'
xpack.http.proxy.port: 3128
xpack.watcher.enabled: true
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.verification_mode: certificate
xpack.security.http.ssl.keystore.path: /etc/elasticsearch/keys/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: /etc/elasticsearch/keys/elastic-certificates.p12
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/keys/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/keys/elastic-certificates.p1

===================================

Here is the Beats pipeline config (beat-pipeline.conf):

=================
input {
  beats {
    port => 5044
    client_inactivity_timeout => 120
    #ssl => false
  }
}
output {
  elasticsearch {
    user => "logstash_ingest"
    password => "password"
    ssl => true
    ssl_certificate_verification => true
    cacert => "/etc/logstash/elastic-stack-ca.pem"
    action => "index"
    hosts => ["elasticnode1", "elasticnode2"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Please take a look and see if there is anything missing or incorrect; help is needed here, indeed.

Thanks a lot

Li

Turned xpack.monitoring.elasticsearch.sniffing to false and the errors are gone (presumably because, with sniffing enabled, Logstash was connecting to the nodes by the IP addresses it discovered, and those IPs are not in the certificates).

Thank you
