Remote Cluster Connection Problem With X-Pack

Hi All,

Without X-Pack, my remote cluster connections in Kibana worked seamlessly.

But after X-Pack was enabled, the remote cluster connections broke. How can I use both together without problems?


Can you elaborate a little more here? Do you mean you are enabling Security? If so, what aspect? What errors are you seeing? What version are you on?

I have two different Elasticsearch clusters, Cluster-A and Cluster-B. Each cluster has 2 master nodes and 3 data nodes. Additionally, there is only one Kibana. My goal is to monitor both clusters from one Kibana.

Elasticsearch version: 7.8.0
Licence: Basic

I have enabled X-Pack security on each cluster. That is, I have to pass a username/password when I reach a cluster, like this:

curl -u "elastic:password" http://IP-Cluster-A:9200
curl -u "elastic:password" http://IP-Cluster-B:9200

The /etc/elasticsearch/elasticsearch.yml file was set up for Cluster-A like this:

# ---------------------------------- Cluster -----------------------------------
cluster.name: Cluster-A
# ------------------------------------ Node ------------------------------------
node.name: ${HOSTNAME}
node.master: true
node.data: false
node.ingest: false
# ----------------------------------- Paths ------------------------------------
path.data: /var/lib/elasticsearch/data
path.logs: /var/lib/elasticsearch/logs
path.repo: /var/lib/elasticsearch/backup
# ---------------------------------- Network -----------------------------------
network.host: IP-Cluster-A
http.port: 9200
transport.tcp.port: 9300
# --------------------------------- Discovery ----------------------------------
discovery.seed_hosts: ["IP-Cluster-A-MasterNode1", "IP-Cluster-A-MasterNode2"]
cluster.initial_master_nodes: ["IP-Cluster-A-MasterNode1", "IP-Cluster-A-MasterNode2"]
# ---------------------------------- X-Pack ------------------------------------
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12

Also, the Centralize-Kibana config file /etc/kibana/kibana.yml was set up like this:

server.port: 5601
server.host: IP-Centralize-Kibana
elasticsearch.hosts: ["http://IP-Cluster-A-MasterNode1:9200", "http://IP-Cluster-A-MasterNode2:9200"]
elasticsearch.username: "elastic"
elasticsearch.password: "password"

Question-1: It is not appropriate to set elasticsearch.username and elasticsearch.password in the Kibana config file. That is why we need to hide them with kibana-keystore, like this: "bin/kibana-keystore add elasticsearch.username" and "bin/kibana-keystore add elasticsearch.password". In this case the username and password are the same for both clusters. But what if the username/password were set differently for each cluster? How would we then set elasticsearch.username and elasticsearch.password in the Centralize-Kibana config file? Do the username and password have to be the same for each cluster in this case?
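For reference, the keystore steps mentioned above could look like the sketch below. Note that the Kibana keystore holds a single elasticsearch.username/elasticsearch.password pair for Kibana's own connection to its local cluster; it is not a place to store per-remote-cluster credentials (with cross-cluster search, the remote clusters are reached through the local cluster, not with separate Kibana credentials):

```shell
# Create the Kibana keystore if it does not exist yet, then store the
# credentials Kibana uses for its own Elasticsearch connection.
# Each "add" prompts interactively for the value.
bin/kibana-keystore create
bin/kibana-keystore add elasticsearch.username
bin/kibana-keystore add elasticsearch.password
```

After this, the plain-text elasticsearch.username/elasticsearch.password lines can be removed from kibana.yml.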

Question-2: How do I set up cross-cluster search, with X-Pack security enabled, to monitor both clusters in Centralize-Kibana?
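For context, remote clusters for cross-cluster search are registered on the local cluster via the cluster settings API, with seeds pointing at the remote transport port (9300), not the HTTP port. A minimal sketch, assuming the host names and credentials from this thread (the alias "cluster_b" is an example, not a required name):

```shell
# Register Cluster-B as a remote cluster on Cluster-A.
# Seeds must use the transport port (9300 here), not the HTTP port.
curl -u "elastic:password" -X PUT "http://IP-Cluster-A:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster_b": {
          "seeds": ["IP-Cluster-B-MasterNode1:9300"]
        }
      }
    }
  }
}'

# Verify the remote connection status.
curl -u "elastic:password" "http://IP-Cluster-A:9200/_remote/info?pretty"
```

With security enabled on both sides, this only works if the two clusters trust each other's transport TLS certificates; otherwise the connection is rejected during the handshake.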

Question-3: When defining the remote cluster, I get an error in Kibana as shown in the picture. How can I fix this error?

A Basic license does not cover that, sorry to say. Check out for more info.

Stack Monitoring in Kibana is not important to me. My goal is to search across multiple clusters, with the security plugin enabled, from a single Kibana. Am I stuck on the license in this regard?

Do you mean cross cluster search?

Yes @warkolm, exactly; I am definitely pointing at cross-cluster search. As I told you, my goal is searching all indices, data, and documents belonging to the different clusters in one Kibana.
Therefore I need to add my two clusters under "Remote Clusters", but there is an issue.
The problem is that one of the clusters connects but the second one couldn't connect.
I shared a screenshot about it, please see above. I described all the configurations in depth.

Any help please?

You will need to check your Kibana and Elasticsearch logs to see what is and isn't happening.

Ok. I checked the Kibana and Elasticsearch logs.
The Elasticsearch nodes have the following errors in the logs.

[2020-07-20T08:55:10,218][WARN ][o.e.x.c.s.t.n.SecurityNetty4Transport] [ip-xxx.xx.xx:9300] client did not trust this server's certificate, closing connection Netty4TcpChannel{localAddress=/xxx.xx.xx.x:9300, remoteAddress=/xxx.xx.xx.x:53612}

io.netty.handler.codec.DecoderException: Insufficient buffer remaining for AEAD cipher fragment (2). Needs to be more than tag size (16)
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(...) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(...) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
        ... (remaining Netty transport and executor frames omitted)
Caused by: Insufficient buffer remaining for AEAD cipher fragment (2). Needs to be more than tag size (16)
        ... (JDK TLS frames omitted)

How can I solve this error?
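The "client did not trust this server's certificate" warning during a remote cluster handshake typically means the transport certificates of the two clusters were not issued by a CA the other side trusts. One common approach, sketched here under the assumption that both clusters can share a single CA (all paths and the empty passwords are examples, not values from this thread), is to generate both clusters' transport certificates from the same CA with elasticsearch-certutil:

```shell
# Create one CA, then sign both clusters' transport certificates with it,
# so each cluster trusts certificates presented by the other (example paths).
bin/elasticsearch-certutil ca --out /tmp/elastic-stack-ca.p12 --pass ""

# Run once per cluster; copy the resulting file to every node of that
# cluster and point xpack.security.transport.ssl.keystore.path and
# xpack.security.transport.ssl.truststore.path at it.
bin/elasticsearch-certutil cert --ca /tmp/elastic-stack-ca.p12 --ca-pass "" \
  --out /etc/elasticsearch/elastic-certificates.p12 --pass ""
```

The nodes need a restart after the certificate files are replaced.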

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.