Elasticsearch SSL installation issue

I tried to increase the security of Elasticsearch but am facing the issue below. I am unable to find a solution for it; please help me. The logs are below.

Caused by: java.security.cert.CertificateParsingException: signed fields invalid
> at sun.security.x509.X509CertImpl.parse(X509CertImpl.java:1830) ~[?:?]
> at sun.security.x509.X509CertImpl.<init>(X509CertImpl.java:188) ~[?:?]
> at sun.security.provider.X509Factory.parseX509orPKCS7Cert(X509Factory.java:476) ~[?:?]
> at sun.security.provider.X509Factory.engineGenerateCertificates(X509Factory.java:361) ~[?:?]
> at java.security.cert.CertificateFactory.generateCertificates(CertificateFactory.java:478) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.CertParsingUtils.readCertificates(CertParsingUtils.java:94) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.CertParsingUtils.readCertificates(CertParsingUtils.java:86) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.PEMKeyConfig.getCertificateChain(PEMKeyConfig.java:71) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.PEMKeyConfig.createTrustManager(PEMKeyConfig.java:103) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.TrustConfig$CombiningTrustConfig.lambda$createTrustManager$0(TrustConfig.java:122) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.SSLService.lambda$loadSSLConfigurations$2(SSLService.java:426) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.SSLService.loadSSLConfigurations(SSLService.java:423) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.SSLService.<init>(SSLService.java:119) ~[?:?]
> at org.elasticsearch.xpack.core.XPackPlugin.<init>(XPackPlugin.java:146) ~[?:?]
> at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
> at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:?]
> at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]

Caused by: java.security.cert.CertificateParsingException: signed fields invalid
> at sun.security.x509.X509CertImpl.parse(X509CertImpl.java:1830) ~[?:?]
> at sun.security.x509.X509CertImpl.<init>(X509CertImpl.java:188) ~[?:?]
> at sun.security.provider.X509Factory.parseX509orPKCS7Cert(X509Factory.java:476) ~[?:?]
> at sun.security.provider.X509Factory.engineGenerateCertificates(X509Factory.java:361) ~[?:?]
> at java.security.cert.CertificateFactory.generateCertificates(CertificateFactory.java:478) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.CertParsingUtils.readCertificates(CertParsingUtils.java:94) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.CertParsingUtils.readCertificates(CertParsingUtils.java:86) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.PEMKeyConfig.getCertificateChain(PEMKeyConfig.java:71) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.PEMKeyConfig.createTrustManager(PEMKeyConfig.java:103) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.TrustConfig$CombiningTrustConfig.lambda$createTrustManager$0(TrustConfig.java:122) ~[?:?]
> at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.SSLService.createSslContext(SSLService.java:382) ~[?:?]
> at java.util.HashMap.computeIfAbsent(HashMap.java:1133) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.SSLService.lambda$loadSSLConfigurations$2(SSLService.java:426) ~[?:?]
> at java.util.HashMap.forEach(HashMap.java:1333) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.SSLService.loadSSLConfigurations(SSLService.java:423) ~[?:?]
> ... 15 more

Caused by: java.security.cert.CertificateParsingException: signed fields invalid
> at sun.security.x509.X509CertImpl.parse(X509CertImpl.java:1830) ~[?:?]
> at sun.security.x509.X509CertImpl.<init>(X509CertImpl.java:188) ~[?:?]
> at sun.security.provider.X509Factory.parseX509orPKCS7Cert(X509Factory.java:476) ~[?:?]
> at sun.security.provider.X509Factory.engineGenerateCertificates(X509Factory.java:361) ~[?:?]
> at java.security.cert.CertificateFactory.generateCertificates(CertificateFactory.java:478) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.CertParsingUtils.readCertificates(CertParsingUtils.java:94) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.CertParsingUtils.readCertificates(CertParsingUtils.java:86) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.PEMKeyConfig.getCertificateChain(PEMKeyConfig.java:71) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.PEMKeyConfig.createTrustManager(PEMKeyConfig.java:103) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.TrustConfig$CombiningTrustConfig.lambda$createTrustManager$0(TrustConfig.java:122) ~[?:?]
> at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271) ~[?:?]
> at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:?]
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
> at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.TrustConfig$CombiningTrustConfig.createTrustManager(TrustConfig.java:123) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.SSLService.createSslContext(SSLService.java:382) ~[?:?]
> at java.util.HashMap.computeIfAbsent(HashMap.java:1133) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.SSLService.lambda$loadSSLConfigurations$2(SSLService.java:426) ~[?:?]
> at java.util.HashMap.forEach(HashMap.java:1333) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.SSLService.loadSSLConfigurations(SSLService.java:423) ~[?:?]
> at org.elasticsearch.xpack.core.ssl.SSLService.<init>(SSLService.java:119) ~[?:?]
> at org.elasticsearch.xpack.core.XPackPlugin.<init>(XPackPlugin.java:146) ~[?:?]
> at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
> at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:?]

Please share your full configuration from elasticsearch.yml.

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: health-data
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 19200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["35.286.69.50", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
#---------------------------------- CORS ---------------------------------------
http.cors.enabled : true
http.cors.allow-origin : "*"
http.cors.allow-methods : OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type, Content-Length

#
#-----------------------------------X_PACK--------------------------------------
xpack.security.enabled: true
xpack.security.http.ssl.key: /etc/elasticsearch/ssl/health_ai.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/ssl/health_ai.pfx
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.http.ssl.enabled: true
xpack.security.audit.enabled: true

You can't use a PKCS#12 truststore for xpack.security.http.ssl.certificate; it only accepts a PEM-encoded certificate.

See our docs here and here
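If your .pfx really is a valid PKCS#12 bundle, one way forward is to extract PEM files from it locally and point the PEM settings at those. This is only a sketch; the output file names below are assumptions, and openssl will prompt for the bundle's password:

# extract the private key (unencrypted), the leaf certificate, and the CA chain from the bundle
openssl pkcs12 -in /etc/elasticsearch/ssl/health_ai.pfx -nocerts -nodes -out /etc/elasticsearch/ssl/health_ai_key.pem
openssl pkcs12 -in /etc/elasticsearch/ssl/health_ai.pfx -clcerts -nokeys -out /etc/elasticsearch/ssl/health_ai_cert.pem
openssl pkcs12 -in /etc/elasticsearch/ssl/health_ai.pfx -cacerts -nokeys -out /etc/elasticsearch/ssl/health_ai_ca.pem

You would then reference the resulting files from xpack.security.http.ssl.key, xpack.security.http.ssl.certificate, and xpack.security.http.ssl.certificate_authorities respectively.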

I have changed the yml file as well.

The settings are now:
xpack.security.http.ssl.keystore.path: /etc/elasticsearch/ssl/certkey.pfx
xpack.security.http.ssl.truststore.path: /etc/elasticsearch/ssl/health_ai.pfx

The error logs are below:

Caused by: java.io.IOException: toDerInputStream rejects tag type 45
        at sun.security.util.DerValue.toDerInputStream(DerValue.java:873) ~[?:?]
        at sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:1981) ~[?:?]
        at sun.security.util.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:222) ~[?:?]
        at java.security.KeyStore.load(KeyStore.java:1472) ~[?:?]
        at org.elasticsearch.xpack.core.ssl.TrustConfig.getStore(TrustConfig.java:89) ~[?:?]
        at org.elasticsearch.xpack.core.ssl.StoreKeyConfig.createKeyManager(StoreKeyConfig.java:72) ~[?:?]
        at org.elasticsearch.xpack.core.ssl.SSLService.createSslContext(SSLService.java:383) ~[?:?]
        at java.util.HashMap.computeIfAbsent(HashMap.java:1133) ~[?:?]
        at org.elasticsearch.xpack.core.ssl.SSLService.lambda$loadSSLConfigurations$2(SSLService.java:426) ~[?:?]
        at java.util.HashMap.forEach(HashMap.java:1333) ~[?:?]
        at org.elasticsearch.xpack.core.ssl.SSLService.loadSSLConfigurations(SSLService.java:423) ~[?:?]
        at org.elasticsearch.xpack.core.ssl.SSLService.<init>(SSLService.java:119) ~[?:?]
        at org.elasticsearch.xpack.core.XPackPlugin.<init>(XPackPlugin.java:146) ~[?:?]
        at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
        at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:?]
        at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
        at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500) ~[?:?]
        at java.lang.reflect.Constructor.newInstance(Constructor.java:481) ~[?:?]
        at org.elasticsearch.plugins.PluginsService.loadPlugin(PluginsService.java:605) ~[elasticsearch-7.3.0.jar:7.3.0]

How did you generate these health_ai.pfx and certkey_ai.pfx files? It looks like certkey_ai.pfx is not actually a PKCS#12 keystore but a PEM certificate.
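One quick way to check locally what the file actually is (just a suggestion): a real PKCS#12 keystore is binary DER, while a PEM file starts with a -----BEGIN line. Incidentally, the "toDerInputStream rejects tag type 45" error above fits a PEM file, since 45 is the ASCII code for '-', the first character of the -----BEGIN header.

# a PEM file will print something like "-----BEGIN CERTIFICATE-----"
head -1 /etc/elasticsearch/ssl/certkey.pfx
# succeeds (after prompting for the password) only if the file is a real PKCS#12 keystore
openssl pkcs12 -info -noout -in /etc/elasticsearch/ssl/certkey.pfx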

I used https://decoder.link/, specifically its SSL converter and RSA key converter tools.

Please take the time to explain, in detail, the steps you took. This will make it much more likely that someone from the community will engage with you and help you solve your issues.

I don't know how this online tool works or what you gave it as input to get that output, so I can't really help much apart from telling you the obvious thing: the .pfx file you got is not correct.

As a side note, it's not good practice to upload your private keys to an online service and to give it the keys' passwords too.

Maybe start over by telling us what keys and what certificates you have and in which format, and we can help you transform them into something you can use with Elasticsearch.
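For reference, if you ever do need a PKCS#12 bundle, it can be built locally with openssl instead of an online service, so the private key never leaves your machine. A sketch, assuming the PEM files are named as elsewhere in this thread; the output name health_ai.p12 is an assumption, and openssl will prompt for an export password:

# combine a PEM private key, leaf certificate, and CA bundle into a single PKCS#12 file
openssl pkcs12 -export \
  -inkey /etc/elasticsearch/ssl/health_ai.key \
  -in /etc/elasticsearch/ssl/health_ai.crt \
  -certfile /etc/elasticsearch/ssl/health_ai.ca-bundle \
  -out /etc/elasticsearch/ssl/health_ai.p12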

I have the certificate in PEM format, and I converted it using this online service.
I visited this link: https://decoder.link/converter.


I uploaded the certificate, key, and bundle files and got a single PKCS#12 file in return.
I also have an RSA key, which I converted at https://decoder.link/rsa_converter and got a PKCS# key in return.

This is the full process I followed to get this file. I apologize for not providing the full details.

Which certificate is that? What do you want to use it for?

What are these certificate, key, and bundle files? What format are they in? What do you want to use the certificate and the key for?

This is a Comodo SSL certificate; I want to use it to enable SSL in Elasticsearch for secure communication.
The file types are: ca-bundle, key, and certificate.

These are the certificate and key used here:
xpack.security.http.ssl.key: /etc/elasticsearch/ssl/health_ai.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/ssl/health_ai.pfx

The pfx file is made from the certificate, ca-bundle, and key.

You probably don't need any of this, just set:

xpack.security.http.ssl.key: /etc/elasticsearch/ssl/certkey.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/ssl/health_ai.crt
xpack.security.http.ssl.certificate_authorities: ["/etc/elasticsearch/ssl/health_ai.ca-bundle"]

and you'll be fine for the http layer. As I understand, the bundle file is just the CA certificate in PEM format.
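Once Elasticsearch is restarted with those settings, the HTTPS layer can be sanity-checked from the command line. A sketch only: the hostname placeholder must be replaced with a name the Comodo certificate was actually issued for, and the password is whatever you set for the built-in elastic user (for example via bin/elasticsearch-setup-passwords):

# verify the chain against the CA bundle and request the cluster root endpoint over HTTPS
curl --cacert /etc/elasticsearch/ssl/health_ai.ca-bundle -u elastic "https://your-certificate-hostname:19200"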

Then you are missing keys and certificates for your transport layer; you can't just enable transport TLS like you did

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate

without telling Elasticsearch what certificates and keys to use. I'd very much urge you to read the docs I shared in a previous post, which contain information on how to encrypt communications on the transport layer.
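For reference, a minimal sketch of what the docs describe for the transport layer; the file names are elasticsearch-certutil's defaults, and placing them in a certs directory under the Elasticsearch config path is an assumption:

# generate a CA and a node certificate with the bundled tool
bin/elasticsearch-certutil ca                              # produces elastic-stack-ca.p12
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12  # produces elastic-certificates.p12

# then, in elasticsearch.yml on every node:
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

If you set a password on the PKCS#12 file, it also needs to be added to the Elasticsearch keystore, as described in the docs.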

I believe the documentation clearly states that you can use your own CA certificate, but you still need to use the certutil provided by Elasticsearch to create the certificates for HTTP SSL.
In my cluster, I've created the stack, transport, and HTTP SSL certs using that same utility and everything is working fine.

This is incorrect.

Thanks ikakavas, you helped me a lot in solving this; the issue is now resolved. The only change I made after your recommendation was to use the chained certificate:

cat health_ai.crt health_ai.ca-bundle >> cert_chain.crt
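For anyone following along: the order in the concatenated file matters; the server certificate must come first, followed by the CA bundle, which is exactly what the command above produces. The http layer then presumably points at the combined file (the key setting stays as before):

# assumed follow-up in elasticsearch.yml
xpack.security.http.ssl.certificate: /etc/elasticsearch/ssl/cert_chain.crt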

I got a new warning in the Elasticsearch logs when I start Kibana.

[2019-08-14T04:56:30,856][WARN ][o.e.h.AbstractHttpServerTransport] [betterhealth-data] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=0.0.0.0/0.0.0.0:19200, remoteAddress=/127.0.0.1:52842}
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 48454144202f20485454502f312e310d0a417574686f72697a6174696f6e3a2042617369632061326c69595735684f6c644359585668526d785a555846785347643354564d3361564e350d0a486f73743a20302e302e302e303a31393230300d0a436f6e74656e742d4c656e6774683a20300d0a436f6e6e656374696f6e3a206b6565702d616c6976650d0a0d0a
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-codec-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) ~[netty-codec-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:582) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:536) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) [netty-common-4.1.36.Final.jar:4.1.36.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.36.Final.jar:4.1.36.Final]
        at java.lang.Thread.run(Thread.java:835) [?:?]

Please consider going through our documentation. It will help you understand what you are configuring and why, and will also help you address, or avoid entirely, some of these issues.

If you start from our tutorial in Encrypting communications | Elasticsearch Guide [7.3] | Elastic, and move on to Setting up TLS on a cluster | Elasticsearch Guide [7.3] | Elastic, you will see that step 4 says:

Configure Kibana to encrypt communications between the browser and the Kibana server and to connect to Elasticsearch via HTTPS. See Configuring security in Kibana.

which in step 5 points to further instructions, which in turn have a step 2 that says:

Configure Kibana to connect to Elasticsearch via HTTPS.

In short, the error you're getting is because Kibana attempts to communicate with Elasticsearch over plain HTTP, but you have already configured Elasticsearch to only accept HTTP over TLS connections. So you need to configure Kibana to connect to Elasticsearch over HTTPS; the instructions and details are in our documentation.
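For illustration, a minimal kibana.yml sketch of what those steps describe. The port and CA bundle come from this thread; the hostname placeholder must match a name on the certificate (or the verification mode relaxed, see the commented line), copying the ca-bundle to a path Kibana can read is an assumption, and the password is whatever was set for the built-in kibana user:

elasticsearch.hosts: ["https://your-certificate-hostname:19200"]
elasticsearch.username: "kibana"
elasticsearch.password: "<kibana-user-password>"
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/health_ai.ca-bundle"]
# use this only if the hostname in elasticsearch.hosts does not match the certificate:
#elasticsearch.ssl.verificationMode: certificate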
