Limitation of client node (Tribe node)

Hello All,
This is my configuration:
curl -XGET 172.24.69.19:9200/_cat/nodes

172.24.69.14 16 36 0 0.00 0.01 0.05 - - es-client-02
172.24.69.18 95 97 20 1.64 1.28 1.25 d - es-data-03
172.24.69.20 15 36 0 0.05 0.05 0.05 m * es-master-02
172.24.69.19 10 36 0 0.02 0.02 0.05 m - es-master-01
172.24.69.16 98 97 20 1.11 0.93 1.13 d - es-data-01
172.24.69.17 98 98 10 0.60 0.61 0.68 d - es-data-02
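(For readers unfamiliar with this headerless output: in Elasticsearch 5.x the default _cat/nodes columns are ip, heap.percent, ram.percent, cpu, the three load averages, node.role, master, and name. A small sketch, using a hypothetical helper not from this thread, that decodes one line:)

```python
# Decode one line of the default (headerless) 5.x _cat/nodes output.
# Column order assumed: ip, heap.percent, ram.percent, cpu,
# load_1m, load_5m, load_15m, node.role, master, name.
def parse_cat_nodes_line(line):
    parts = line.split()
    role_letters = parts[7]           # e.g. "d", "m", "-" (none)
    roles = {
        "d": "data",
        "m": "master-eligible",
        "i": "ingest",
    }
    node_roles = [roles[c] for c in role_letters if c in roles]
    return {
        "ip": parts[0],
        # a "-" in the role column means no data/master/ingest role,
        # i.e. a coordinating-only (client) node
        "roles": node_roles or ["coordinating-only"],
        # "*" in the master column marks the currently elected master
        "elected_master": parts[8] == "*",
        "name": parts[9],
    }

print(parse_cat_nodes_line("172.24.69.14 16 36 0 0.00 0.01 0.05 - - es-client-02"))
```

Running it on the first line above shows es-client-02 is a coordinating-only node, and the "*" on es-master-02 marks it as the elected master.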

I added another tribe node to load-balance the data coming from my Logstashes,
and another tribe node for Kibana to connect to.
I think there is a limitation on these nodes; as you can see, the cluster only works with one of them at a time.
Is this true, or is there a mistake in my configuration?
Best
Best

What version are you on?

A tribe node is not the same as a coordinating only node (aka client node). A coordinating node is part of a single cluster and allows both reads and writes, while a tribe node acts as a read-only bridge when searching multiple clusters.
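(For reference, in 5.x a coordinating-only node is simply a node with all three roles switched off; a minimal elasticsearch.yml sketch, reusing the cluster name and IPs from this thread, not an exact copy of the poster's config:)

```yaml
cluster.name: logserver
node.name: es-client-01
# all roles off => coordinating-only (formerly "client") node
node.master: false
node.data: false
node.ingest: false
network.host: 172.24.69.13
discovery.zen.ping.unicast.hosts: ["172.24.69.19", "172.24.69.20"]
```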

The version is elasticsearch-5.4.0.rpm.
In my cluster I have 2 Logstash servers with the following configuration:

input {
  redis {
    host => "172.24.69.9"
    data_type => "list"
    port => "6379"
    key => "sadra"
  }

  redis {
    host => "172.24.69.10"
    data_type => "list"
    port => "6379"
    key => "sadra"
  }
}

output {
  elasticsearch {
    hosts => [ "172.24.69.13:9200", "172.24.69.14:9200" ]
  }
}

I get logs from two brokers and send them to the Elasticsearch client nodes with the following IPs:
172.24.69.13
172.24.69.14

All the Elasticsearch nodes are on version 5.4.0.

You don't want tribe nodes. You don't even want cross cluster.

Just set up some coordinating nodes like you have - they used to be called client nodes, as @Christian_Dahlqvist mentioned :slight_smile:

I don't understand!
Would you please tell me the correct configuration?
I set these configurations:

data nodes:
node.name: es-data-*
node.master: false
node.data: true
node.ingest: false
discovery.zen.ping.unicast.hosts: ["172.24.69.16","172.24.69.17","172.24.69.18","172.24.69.19","172.24.69.20","172.24.69.13","172.24.69.14","172.24.69.21"]
network.host: 172.24.69.17
cluster.name: logserver

master nodes:
cluster.name: logserver
node.name: es-master-*
node.master: true
node.data: false
node.ingest: false
network.host: 172.24.69.20
discovery.zen.ping.unicast.hosts: ["172.24.69.21","172.24.69.16","172.24.69.17","172.24.69.18","172.24.69.19","172.24.69.20","172.24.69.13","172.24.69.14"]

client nodes:
node.name: es-client-0*
node.master: false
node.data: false
node.ingest: false
cluster.name: logserver
discovery.zen.ping.unicast.hosts: ["172.24.69.21","172.24.69.16","172.24.69.17","172.24.69.18","172.24.69.19","172.24.69.20","172.24.69.13","172.24.69.14"]

I have 2 master nodes, 3 data nodes, and 3 client nodes.
My issue is that my cluster only sees one client node, and when I shut that node down, another client node joins the cluster.
Please tell me, is this configuration correct, and what is my main problem?

As master election is based on consensus, you should always aim to have 3 master-eligible nodes in the cluster and set discovery.zen.minimum_master_nodes to 2, as per these recommendations.
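(The recommendation translates to a setting like the following on every master-eligible node; this sketch assumes a third master-eligible node is added, since the quorum formula is (master_eligible_nodes / 2) + 1:)

```yaml
# On each of the 3 master-eligible nodes:
node.master: true
node.data: false
# quorum of 3 master-eligible nodes: (3 / 2) + 1 = 2
discovery.zen.minimum_master_nodes: 2
```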

Would you please tell me what the problem is with the coordinating nodes? Why does my cluster only see one of them?

What is the output from the cat nodes API, from the master node as well as from the coordinating nodes?

[root@es-data-01 ~]# curl -XGET http://172.24.69.16:9200/_cat/nodes
172.24.69.16 97 98 19 0.64 0.86 1.06 d - es-data-01
172.24.69.13 11 36 0 0.00 0.01 0.05 - - es-client-01
172.24.69.19 19 36 0 0.01 0.02 0.05 m - es-master-01
172.24.69.20 26 37 1 0.08 0.07 0.06 m * es-master-02
172.24.69.18 98 98 19 1.19 1.04 1.15 d - es-data-03

Are the remaining nodes running? Is there anything in the logs on these nodes that explain this? Are you able to telnet to port 9300 on all the other hosts from these hosts?

Yes, I can telnet to all of them, and if I shut down the es-client-01 node, another coordinating node joins the cluster. Is there any configuration that limits coordinating nodes?

[root@es-client-01 ~]# telnet 172.24.69.14
Trying 172.24.69.14...
telnet: connect to address 172.24.69.14: Connection refused
[root@es-client-01 ~]# telnet 172.24.69.14 9300
Trying 172.24.69.14...
Connected to 172.24.69.14.
Escape character is '^]'.

Connection closed by foreign host.

[root@es-client-01 ~]# telnet 172.24.69.21 9300
Trying 172.24.69.21...
Connected to 172.24.69.21.
Escape character is '^]'.

Connection closed by foreign host.

[root@es-client-01 ~]# shutdown

When I shut down this node, another one joins the cluster:

[root@es-data-01 ~]# curl -XGET http://172.24.69.16:9200/_cat/nodes
172.24.69.16 99 98 15 0.75 0.91 0.99 d - es-data-01
172.24.69.14 19 36 0 0.02 0.02 0.05 - - es-client-02
172.24.69.19 9 36 0 0.00 0.02 0.05 m - es-master-01
172.24.69.20 28 37 1 0.08 0.04 0.05 m * es-master-02
172.24.69.18 98 98 17 0.60 0.90 1.07 d - es-data-03

Hello, would you please help me and continue this topic?
Thanks

How did you install Elasticsearch on the nodes? Could you be having an issue similar to this thread?

I downloaded the rpm file from elastic.co and installed it with the following command:
yum localinstall elas.rpm


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.