Remote Cluster Connection Problem With X-Pack

Hi All,

Without X-Pack, I had a seamless remote cluster connection in Kibana.

But after X-Pack was enabled, the remote cluster connections broke. How can I use both together without problems?

Thanks.

Can you elaborate a little more here? Do you mean you are enabling Security? If so, what aspect? What errors are you seeing? What version are you on?

I have two Elasticsearch clusters, Cluster-A and Cluster-B. Each cluster has 2 master nodes and 3 data nodes. Additionally, there is only one Kibana. My goal is to monitor both clusters in that one Kibana.

Elasticsearch version: 7.8.0
License: Basic

I have enabled X-Pack security on each cluster. That means I have to pass a username/password when I reach a cluster, like this:

curl -u "elastic/password" http://IP-Cluster-A:9200
curl -u "elastic/password" http://IP-Cluster-B:9200

The /etc/elasticsearch/elasticsearch.yml file for Cluster-A is configured like this:

# ---------------------------------- Cluster -----------------------------------
cluster.name: Cluster-A
# ------------------------------------ Node ------------------------------------
node.name: ${HOSTNAME}
node.master: true
node.data: false
node.ingest: false
# ----------------------------------- Paths ------------------------------------
path.data: /var/lib/elasticsearch/data
path.logs: /var/lib/elasticsearch/logs
path.repo: /var/lib/elasticsearch/backup
# ---------------------------------- Network -----------------------------------
http.port: 9200
transport.tcp.port: 9300
network.host: IP-Cluster-A
# --------------------------------- Discovery ----------------------------------
discovery.seed_hosts: ["IP-Cluster-A-MasterNode1", "IP-Cluster-A-MasterNode2"]
cluster.initial_master_nodes: ["IP-Cluster-A-MasterNode1", "IP-Cluster-A-MasterNode2"]
# ---------------------------------- X-Pack ------------------------------------
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12
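
For context, an elastic-certificates.p12 file like the one referenced above is usually produced with elasticsearch-certutil. A minimal sketch, assuming the tool's default file names and that both clusters share the same CA so their transport connections can trust each other:

bin/elasticsearch-certutil ca                                  # creates elastic-stack-ca.p12 (the CA)
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12      # creates elastic-certificates.p12
# Copy elastic-certificates.p12 to /etc/elasticsearch/ on every node of both clusters,
# then store the keystore/truststore passwords (if any were set) in the Elasticsearch keystore:
bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password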

Also, the central Kibana config file /etc/kibana/kibana.yml is configured like this:

server.port: 5601
server.host: IP-Centralize-Kibana
elasticsearch.hosts: ["http://IP-Cluster-A-MasterNode1:9200", "http://IP-Cluster-A-MasterNode2:9200"]
xpack.security.enabled: true
elasticsearch.username: "elastic"
elasticsearch.password: "password"

Question-1: It is not appropriate to keep elasticsearch.username and elasticsearch.password in plain text in the Kibana config file, so we need to hide them with the Kibana keystore, like this: "bin/kibana-keystore add elasticsearch.username" and "bin/kibana-keystore add elasticsearch.password". In this case the username and password are the same for both clusters. But what if the username/password were set differently for each cluster? How would we then set elasticsearch.username and elasticsearch.password in the central Kibana config file? Do the username and password have to be the same for each cluster?
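
For reference, a minimal sketch of the keystore commands referred to above, assuming Kibana was installed from a package so the binaries live under /usr/share/kibana:

# Create the keystore once, then store the credentials so they can be removed from kibana.yml
/usr/share/kibana/bin/kibana-keystore create
/usr/share/kibana/bin/kibana-keystore add elasticsearch.username
/usr/share/kibana/bin/kibana-keystore add elasticsearch.password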

Question-2: How do I set up cross-cluster search, with X-Pack security enabled, so that both clusters can be searched from the central Kibana?

Question-3: When defining the remote cluster, I get an error in Kibana as in the screenshot. How can I fix this error?
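
For context, a remote cluster is normally registered on the cluster that Kibana talks to, pointing at the remote cluster's transport port (9300 here), either in Kibana under Management > Remote Clusters or via the cluster settings API. A minimal curl sketch, where the alias cluster_b and the node address are only illustrative:

curl -u "elastic:password" -H 'Content-Type: application/json' \
  -X PUT "http://IP-Cluster-A-MasterNode1:9200/_cluster/settings" -d '
{
  "persistent": {
    "cluster.remote.cluster_b.seeds": ["IP-Cluster-B-MasterNode1:9300"]
  }
}'

A cross-cluster query can then target the remote data with the alias prefix, e.g. cluster_b:index-name.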

A Basic license does not cover that, sorry to say. Check out Subscriptions | Elastic Stack Products & Support | Elastic for more info.

Stack Monitoring in Kibana is not important to me. My goal is to search multiple clusters from a single Kibana with the security plugin enabled. Do I get stuck on the license in this regard?

Do you mean cross cluster search?

Yes @warkolm, exactly, I definitely mean cross-cluster search. As I said, my goal is to search all the indices, data, and documents that belong to the different clusters from one Kibana.
Therefore I need to add my two clusters under "Remote Clusters", but there is an issue.
The problem is that one of the clusters connects, but the second one could not connect.
I shared a screenshot about it, please see above. I described all the configurations in detail.

Any help please?

You will need to check your Kibana and Elasticsearch logs to see what is and isn't happening.
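
For reference, given the paths in the configs above, the logs would typically be checked like this (the Elasticsearch log file name defaults to the cluster name; the journalctl commands assume systemd package installs):

tail -f /var/lib/elasticsearch/logs/Cluster-A.log      # Elasticsearch log (path.logs from elasticsearch.yml)
journalctl -u elasticsearch -f                         # Elasticsearch service output
journalctl -u kibana -f                                # Kibana service output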

OK, I checked the Kibana and Elasticsearch logs.
The Elasticsearch nodes have the following errors in the logs.

[2020-07-20T08:55:10,218][WARN ][o.e.x.c.s.t.n.SecurityNetty4Transport] [ip-xxx.xx.xx:9300] client did not trust this server's certificate, closing connection Netty4TcpChannel{localAddress=/xxx.xx.xx.x:9300, remoteAddress=/xxx.xx.xx.x:53612}

io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Insufficient buffer remaining for AEAD cipher fragment (2). Needs to be more than tag size (16)
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
        at java.lang.Thread.run(Thread.java:832) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Insufficient buffer remaining for AEAD cipher fragment (2). Needs to be more than tag size (16)
        at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:325) ~[?:?]
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:268) ~[?:?]

How can I solve this error?
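
For context, when a node logs "client did not trust this server's certificate", a common first check is to compare the certificate chain that the remote transport port actually presents with the certificates contained in the local elastic-certificates.p12. A minimal sketch, reusing the placeholder addresses above:

openssl s_client -connect IP-Cluster-B-MasterNode1:9300 -showcerts </dev/null   # chain presented by the remote node
openssl pkcs12 -in /etc/elasticsearch/elastic-certificates.p12 -nokeys          # certificates in the local keystore/truststore

If the two clusters were set up with certificates signed by different CAs, their transport connections would not trust each other, which would match the message above.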
