ELK stack and certificates

Hello,

I'm new to ELK, and I recently upgraded our ELK stack from 6.0.0 to 6.6.1. Everything went fairly smoothly, but for some reason all of the certificates stopped working. I'm running everything in Docker containers, and it all worked before the upgrade. I'm trying to figure out where to begin unravelling the issues.

Filebeat 6.6.1 is running on the web app servers.
It sends logs to a Logstash 6.6.1 container on a separate VM.
Then I have an Elasticsearch 6.6.1 container on another VM,
with a Kibana 6.6.1 container on yet another VM.

I have tried following the documentation, but since I never set this up originally, I'm sort of lost on where to begin.

The logs are not getting consumed by Logstash, and Logstash isn't talking to Elasticsearch.

Logstash complains about the certificate, even though the certificate hasn't changed.

Errors with filebeat
2019-02-20T22:49:15.211117667Z 2019-02-20T22:49:15.210Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":220,"time":{"ms":11}},"total":{"ticks":410,"time":{"ms":14},"value":410},"user":{"ticks":190,"time":{"ms":3}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":10},"info":{"ephemeral_id":"7f1cc96a-6d9f-4662-a955-4b81486b95a2","uptime":{"ms":60029}},"memstats":{"gc_next":17835520,"memory_alloc":11226752,"memory_total":25692880}},"filebeat":{"harvester":{"open_files":4,"running":4}},"libbeat":{"config":{"module":{"running":0}},"output":{"read":{"errors":1},"write":{"bytes":159}},"pipeline":{"clients":4,"events":{"active":4119,"retry":2048}}},"registrar":{"states":{"current":4}},"system":{"load":{"1":0.34,"15":1.27,"5":0.6,"norm":{"1":0.0213,"15":0.0794,"5":0.0375}}}}}}
2019-02-20T22:49:45.209583732Z 2019-02-20T22:49:45.209Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":230,"time":{"ms":5}},"total":{"ticks":430,"time":{"ms":8},"value":430},"user":{"ticks":200,"time":{"ms":3}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":10},"info":{"ephemeral_id":"7f1cc96a-6d9f-4662-a955-4b81486b95a2","uptime":{"ms":90028}},"memstats":{"gc_next":17835520,"memory_alloc":11527936,"memory_total":25994064}},"filebeat":{"events":{"active":1,"added":1},"harvester":{"open_files":4,"running":4}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":4,"events":{"active":4120,"total":1}}},"registrar":{"states":{"current":4}},"system":{"load":{"1":0.34,"15":1.24,"5":0.57,"norm":{"1":0.0213,"15":0.0775,"5":0.0356}}}}}}
2019-02-20T22:49:56.904999862Z 2019-02-20T22:49:56.904Z ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://logstash:443)): read tcp 172.18.0.2:43368->10.10.0.22:443: read: connection reset by peer
2019-02-20T22:49:56.905041162Z 2019-02-20T22:49:56.904Z INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://logstash:443)) with 6 reconnect attempt(s)
2019-02-20T22:50:15.210925904Z 2019-02-20T22:50:15.210Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":240,"time":{"ms":9}},"total":{"ticks":440,"time":{"ms":12},"value":440},"user":{"ticks":200,"time":{"ms":3}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":10},"info":{"ephemeral_id":"7f1cc96a-6d9f-4662-a955-4b81486b95a2","uptime":{"ms":120025}},"memstats":{"gc_next":17835520,"memory_alloc":11847104,"memory_total":26313232}},"filebeat":{"harvester":{"open_files":4,"running":4}},"libbeat":{"config":{"module":{"running":0}},"output":{"read":{"errors":1},"write":{"bytes":159}},"pipeline":{"clients":4,"events":{"active":4120,"retry":2048}}},"registrar":{"states":{"current":4}},"system":{"load":{"1":0.29,"15":1.2,"5":0.54,"norm":{"1":0.0181,"15":0.075,"5":0.0338}}}}}}

Errors with logstash
2019-02-20T22:49:56.913114932Z [2019-02-20T22:49:56,912][WARN ][org.logstash.beats.Server] Exception caught in channel initializer
2019-02-20T22:49:56.913162932Z java.lang.IllegalArgumentException: File does not contain valid private key: /usr/share/logstash/config/certs/service.key

2019-02-20T22:49:56.913260931Z Caused by: java.security.spec.InvalidKeySpecException: Neither RSA, DSA nor EC worked
2019-02-20T22:49:56.913264631Z at io.netty.handler.ssl.SslContext.getPrivateKeyFromByteBuffer(SslContext.java:1046) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-20T22:49:56.913268531Z at io.netty.handler.ssl.SslContext.toPrivateKey(SslContext.java:1015) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-20T22:49:56.913272330Z at io.netty.handler.ssl.SslContextBuilder.keyManager(SslContextBuilder.java:268) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-20T22:49:56.913276130Z ... 20 more
2019-02-20T22:49:56.913279730Z Caused by: java.security.spec.InvalidKeySpecException: java.security.InvalidKeyException: IOException : algid parse error, not a sequence
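
For what it's worth, that "algid parse error, not a sequence" from Netty is the classic symptom of handing the beats input a PKCS#1 key (a PEM file starting with "BEGIN RSA PRIVATE KEY") when it expects PKCS#8. If that turns out to be the case here, a minimal conversion sketch (assuming the key at the path from the error is unencrypted; the .pkcs8 output name is just an example) would be:

# Convert a PKCS#1 PEM key to unencrypted PKCS#8 for the Logstash beats input
openssl pkcs8 -topk8 -nocrypt \
  -in /usr/share/logstash/config/certs/service.key \
  -out /usr/share/logstash/config/certs/service.pkcs8.key

and then point the beats input's ssl_key at the converted file.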

Any help would be greatly appreciated.

Do I need to attack it like this:

Figure out how to get Filebeat to Logstash working with SSL,
then Logstash to Elasticsearch with SSL,
then Kibana to Elasticsearch?

So now I have Filebeat connecting to Logstash.

I'm still struggling with getting Logstash to connect to Elasticsearch.

In Elasticsearch I'm seeing this:
2019-02-21T15:39:22.488125445Z [2019-02-21T15:39:22,487][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [05w2lDl] http client did not trust this server's certificate, closing connection [id: 0xc1159c29, L:0.0.0.0/0.0.0.0:9200 ! R:/10.10.0.22:49470]

On Logstash I'm seeing this:
2019-02-21T15:49:26.052155453Z [2019-02-21T15:49:26,051][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@elasticsearch:443/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@elasticsearch:443/][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"}
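
That PKIX "unable to find valid certification path" message means the JVM on the Logstash side does not trust the certificate Elasticsearch is presenting. A quick way to check which CA file actually validates the endpoint (a diagnostic sketch, assuming openssl is available somewhere that can reach the Elasticsearch container) is:

openssl s_client -connect elasticsearch:443 \
  -CAfile /usr/share/logstash/config/certs/ca.crt </dev/null

If the output ends with "Verify return code: 0 (ok)", that CA file is the one to hand to the cacert option in the config below.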

logstash.conf
output {
  elasticsearch {
    hosts => [ "https://elasticsearch:443" ]
    ssl => true
    cacert => '/usr/share/logstash/config/certs/service.crt'
    user => 'elastic'
    password => ''
  }
}

elasticsearch.yml
xpack.security.http.ssl.enabled : true
xpack.security.transport.ssl.verification_mode : certificate
xpack.security.transport.ssl.enabled : true
xpack.ssl.key : /usr/share/elasticsearch/config/certs/service.key
xpack.ssl.certificate : /usr/share/elasticsearch/config/certs/service.crt
xpack.ssl.certificate_authorities : /usr/share/elasticsearch/config/certs/ca.crt

Should I instead do something with the keystore and truststore settings from the documentation?

Any thoughts??

Based on filenames, it seems that your Logstash cacert is pointing to the certificate of the Elasticsearch server, not the CA for that certificate.
It looks like you probably want

cacert => '/usr/share/logstash/config/certs/ca.crt'
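
If you want to double-check that ca.crt really is the issuer of service.crt, a quick sanity check (a sketch, assuming both files are present at these paths on the Logstash host) is:

openssl verify -CAfile /usr/share/logstash/config/certs/ca.crt \
  /usr/share/logstash/config/certs/service.crt

which should print "service.crt: OK" if the chain lines up.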

Also, you are using the xpack.ssl.* settings to configure TLS on the Elasticsearch side. The documentation no longer recommends using those, and they are being removed in Elasticsearch 7.0. Instead, you should configure each ssl.* context explicitly.

I would encourage you to take the time now to switch to using the following:

xpack.security.http.ssl.enabled : true
xpack.security.http.ssl.certificate : /usr/share/elasticsearch/config/certs/service.crt
xpack.security.http.ssl.key : /usr/share/elasticsearch/config/certs/service.key

xpack.security.transport.ssl.enabled : true
xpack.security.transport.ssl.verification_mode : certificate
xpack.security.transport.ssl.key : /usr/share/elasticsearch/config/certs/service.key
xpack.security.transport.ssl.certificate : /usr/share/elasticsearch/config/certs/service.crt
xpack.security.transport.ssl.certificate_authorities : /usr/share/elasticsearch/config/certs/ca.crt

Hi Tim!

Thanks for the feedback. I've copied all of the certificates to each VM so that they are all located locally; that's why they point to their own directories.

I made the ssl.* changes to elasticsearch.yml and restarted both the Logstash and Elasticsearch containers.

I'm still getting errors in Logstash, but no SSL errors in Elasticsearch.

2019-02-22T14:31:47.218898945Z [2019-02-22T14:31:47,218][INFO ][org.logstash.beats.BeatsHandler] [local: 0.0.0.0:5400, remote: 10.10.0.10:41286] Handling exception: javax.net.ssl.SSLHandshakeException: error:10000412:SSL routines:OPENSSL_internal:SSLV3_ALERT_BAD_CERTIFICATE
2019-02-22T14:31:47.219388240Z [2019-02-22T14:31:47,218][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
2019-02-22T14:31:47.219406240Z io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: error:10000412:SSL routines:OPENSSL_internal:SSLV3_ALERT_BAD_CERTIFICATE
2019-02-22T14:31:47.219427140Z  at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219448640Z  at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219453140Z  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219457239Z  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219461339Z  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219465239Z  at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219469239Z  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219529939Z  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219539939Z  at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219544539Z  at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219548739Z  at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219554439Z  at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219558739Z  at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219562739Z  at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219566738Z  at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219570938Z  at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219575138Z  at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
2019-02-22T14:31:47.219579138Z Caused by: javax.net.ssl.SSLHandshakeException: error:10000412:SSL routines:OPENSSL_internal:SSLV3_ALERT_BAD_CERTIFICATE
2019-02-22T14:31:47.219582938Z  at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.shutdownWithError(ReferenceCountedOpenSslEngine.java:897) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219586938Z  at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.sslReadErrorResult(ReferenceCountedOpenSslEngine.java:1147) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219596638Z  at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1101) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219601138Z  at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1169) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219605338Z  at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1212) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219609738Z  at io.netty.handler.ssl.SslHandler$SslEngineType$1.unwrap(SslHandler.java:216) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219613738Z  at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1297) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219617738Z  at io.netty.handler.ssl.SslHandler.decodeNonJdkCompatible(SslHandler.java:1211) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219621838Z  at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1245) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219626338Z  at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219665538Z  at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
2019-02-22T14:31:47.219671638Z  ... 16 more

All of the logs are being consumed, however. I still don't like this error, and I'm not sure what it's about.
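
In case it helps narrow that down: a bad_certificate alert at the beats input generally means one side of the Filebeat-to-Logstash handshake rejected the other's certificate, for example because Filebeat doesn't have the CA configured, or because the beats input is set to ssl_verify_mode => "force_peer" and is asking for a client certificate that Filebeat isn't presenting (or that Logstash doesn't trust). A sketch of the Filebeat output TLS block to compare against (the /etc/filebeat/certs paths here are hypothetical):

output.logstash:
  hosts: ["logstash:443"]
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]
  # Only needed if the Logstash beats input requires client certificates:
  ssl.certificate: "/etc/filebeat/certs/service.crt"
  ssl.key: "/etc/filebeat/certs/service.key"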

Chris
