[6.5] Cross-cluster search and Kibana integration

Hi everyone,
I'm trying to set up an environment according to this architecture:

2 Elasticsearch clusters --> Kibana presentation

According to what I've found in the reference guide, implementing this architecture requires:

  • 2 nodes (or groups of nodes), each with its own cluster.name value in the yml config file (for instance, cluster.name: cluster1 and cluster.name: cluster2)

  • 1 node that performs the cross-cluster search; this node (let's call it "kibana_connector") will be the Kibana-Elasticsearch interface
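
If I've understood the reference guide correctly, once the remotes are configured on kibana_connector, searches can go through it and address indices on both clusters with the cluster:index prefix. A sketch of what I expect to be able to run (the index pattern logs-* is just a placeholder):

GET /cluster1:logs-*,cluster2:logs-*/_search
{
  "query": { "match_all": {} }
}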

This is part of kibana_connector's yml file:

#cluster.name: my-application
cluster:
    remote:
        cluster1: 
            seeds: 127.0.0.1:9300
        cluster2: 
            seeds: 10.21.103.158:9300

cluster.remote.initial_connect_timeout: 300s
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: kibana_connector
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
http.port: 9222
#

With this configuration, starting kibana_connector after the other two instances have started, I get the following error:

[2018-11-19T18:10:05,313][WARN ][o.e.d.z.UnicastZenPing   ] [kibana_connector] [1] failed send ping to {10.21.103.158:9300}{aYCZW14ISAWvBPNl9D1DuA}{10.21.103.158}{10.21.103.158:9300}
java.lang.IllegalStateException: handshake failed, mismatched cluster name [Cluster [cluster2]] - {10.21.103.158:9300}{aYCZW14ISAWvBPNl9D1DuA}{10.21.103.158}{10.21.103.158:9300}

Where is my mistake?

thank you
s

Please make the effort to format your post to be as readable as possible.

In particular, when posting configuration files, or code blocks, please use the </> button to pre-format the content. Without this, the forums will present your post as if it were in markdown format, which will obscure your content and make it harder to help you.

Hi,
I edited the post above.

Waiting for someone's advice.
thank you in advance

s

I don't think there is anything wrong here.

For some reason (see below) your kibana_connector node thinks it might be supposed to form a cluster with your cluster2 node, so it sends a ping to that node, and then finds out that it's not the same cluster, so it prints out a warning.
There's not necessarily anything wrong with that - it's just a warning, nothing bad has happened.

There are a few possible reasons why that could be happening.
Either:

  1. You are running both nodes on the same host with default port numbers.
    By default, a node will ping its own host, in the same port range that it's configured for; out of the box that's localhost on ports 9300-9400.
    So if you start 2 nodes on that same host, one will bind to port 9300 and the other to 9301, and they will ping each other.
    Now, your log shows 10.21.103.158, so it's not exactly that setup, but since you've only shown part of the yml file, the cause might be the same with a couple of config options changed.

  2. You've explicitly configured zen discovery ping nodes (discovery.zen.ping.unicast.hosts) to point to that node (10.21.103.158:9300). If so, that's probably a mistake.

  3. You're using a different discovery mechanism (like EC2) and that discovery mechanism thinks the 10.21.103.158 node is a candidate for your cluster.

Without seeing more details on your setup, it's hard to know which of those might apply, but they're the likely causes.
If it is one of those, then it's not a massive issue, but you should probably fix it because there's no point pinging nodes that aren't part of your cluster.
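
If it does turn out to be cause 1, one way to stop the cross-cluster pings is to give kibana_connector its own single-node cluster and point its unicast hosts only at itself. A rough sketch, not your exact config (the cluster name and transport port below are just examples):

# elasticsearch.yml for kibana_connector (sketch)
cluster.name: ccs_connector                      # its own cluster, separate from cluster1/cluster2
node.name: kibana_connector
http.port: 9222
transport.tcp.port: 9301                         # pin the transport port so discovery is predictable
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9301"]   # only ping itself, never cluster1/cluster2 nodes

# the remote clusters stay configured for cross-cluster search as before
cluster.remote.cluster1.seeds: ["127.0.0.1:9300"]
cluster.remote.cluster2.seeds: ["10.21.103.158:9300"]

With that, the node forms its own one-node cluster and no longer tries to handshake with the cluster1/cluster2 nodes, so the warning should go away.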

Thank you for your reply, Tim.
Here are the nodes' configurations:

kibana_connector (running in localhost)

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster:
    remote:
        cluster1:
            seeds: 127.0.0.1:9300
        cluster2: 
            seeds: 10.21.103.158:9300

cluster.remote.initial_connect_timeout: 300s
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: kibana_connector
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
http.port: 9222
#
# For more information, consult the network module documentation.
#

cluster1 (localhost too)

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: cluster1
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: ${HOSTNAME}
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#

cluster2 (different host)

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.name: cluster2
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
node.name: ${HOSTNAME}
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.21.103.158
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#

Here are the warnings (cluster2's binding seems to go well now, but I cannot find any reference to the second cluster in kibana_connector's log, even when run in verbose mode):

[2018-11-20T11:39:49,139][INFO ][o.e.t.TransportService   ] [kibana_connector] publish_address {127.0.0.1:9301}, bound_addresses {[::1]:9301}, {127.0.0.1:9301}
[2018-11-20T11:39:50,584][WARN ][o.e.b.BootstrapChecks    ] [kibana_connector] max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2018-11-20T11:39:50,587][WARN ][o.e.b.BootstrapChecks    ] [kibana_connector] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2018-11-20T11:39:50,920][WARN ][o.e.d.z.UnicastZenPing   ] [kibana_connector] [1] failed send ping to {[::1]:9300}{nwHyE7sNQDKfn4aM3frkmA}{0:0:0:0:0:0:0:1}{[::1]:9300}
java.lang.IllegalStateException: handshake failed, mismatched cluster name [Cluster [cluster1]] - {[::1]:9300}{nwHyE7sNQDKfn4aM3frkmA}{0:0:0:0:0:0:0:1}{[::1]:9300}
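
Maybe the remote cluster info API is the right way to check this? Against kibana_connector's HTTP port (9222 above), something like:

curl -s 'http://localhost:9222/_remote/info?pretty'

should list each configured remote together with a connected flag and the number of connected nodes.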

Thank you again
s

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.