Cluster Setup: 3-Node Cluster Problem

Hi all, I have been working on getting an Elastic Stack cluster running for the past few weeks. I was told to set up the Elasticsearch cluster as follows: I have three nodes, and each node runs Elasticsearch, Kibana, and Logstash. I am told that I can connect a node running E, L, and K with two other nodes. The purpose of this would be redundancy: if anything goes awry on one node, it will switch to the next node over. I have spent the last week trying to get this cluster working, so I am wondering if this is even possible. I have gotten the cluster running with a single node via the Elasticsearch cluster setup, but I can't get any other nodes to join that cluster. I could really use some help. If what I described is not possible, I would really love to know.

Yes, Elasticsearch normally operates as a cluster of multiple nodes for resilience and/or capacity. When you say you can't get the nodes to join up, can you share a bit more detail? What do the log messages say?

A common problem with clustering is described by this note in the docs, which also describes the solution.


To get the nodes to join into one cluster, you must configure every node to have the same cluster name.
Set discovery.seed_hosts and cluster.initial_master_nodes as seen here. Once done, delete the data folder of every Elasticsearch node and restart them. They should all join the same cluster.
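As a sketch of what that looks like, here is a minimal elasticsearch.yml; the cluster name, node names, and addresses are placeholders, and everything except node.name and network.host would be identical on all three nodes:

```yaml
# elasticsearch.yml -- placeholder values, adjust for your environment
cluster.name: my-cluster            # must be identical on every node
node.name: node-1                   # unique per node
network.host: 10.0.0.1              # this node's own address
# Seed hosts use the transport port (9300), not the HTTP port (9200)
discovery.seed_hosts: ["10.0.0.1:9300", "10.0.0.2:9300", "10.0.0.3:9300"]
# Node *names*, not addresses -- only needed for the very first bootstrap
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
```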

This was what I did when I was experimenting with ELK.


I am constantly getting "master node not found", and I am worried that I can't connect the nodes together via Elasticsearch.

Ooh, I have not done that last part yet. I'll try that!

You need to set node.name, node.role, and cluster.name.
Also, when you start a node, check which address it is publishing and which it is binding to.

Ping all three servers from each other and check that they are reachable. Check whether SELinux and firewall settings are allowing the traffic.
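A quick sketch of that reachability check using nc (netcat), assuming it is installed; the hostnames below are placeholders for your three nodes:

```shell
# Check whether each node's HTTP (9200) and transport (9300) ports answer.
check_port() {
  # -z: scan without sending data, -w 2: two-second timeout
  if nc -z -w 2 "$1" "$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed or filtered"
  fi
}

for host in node1.example node2.example node3.example; do
  check_port "$host" 9200   # HTTP port
  check_port "$host" 9300   # transport port -- node-to-node clustering uses this one
done
```

If 9300 shows closed while Elasticsearch is running, the transport layer is not reachable and nodes cannot join each other.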

Also, can you paste your cluster configuration file if the above steps do not work?

I do not think that "master node not found" is a message that Elasticsearch would emit. Please share logs and error messages and so on exactly as they come from Elasticsearch, including surrounding context. It may not look relevant, but it's better to have more detail than less.

[2019-07-02T11:29:42,716][WARN ][o.e.c.c.ClusterFormationFailureHelper] [choice-node1] master not discovered yet: have discovered ; discovery will continue using [10.101.0.140:9200, 10.101.0.141:9200] from hosts providers and [{choice-node1}{ukRS6dHuTiWJVjVvpSIKBQ}{eqCAFzP9QPGkDd8lfDgZXA}{10.101.0.140}{10.101.0.140:9300}{ml.machine_memory=16820174848, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 52, last-accepted version 96 in term 52

It says it is not able to communicate with the servers mentioned in hosts. Check if they are able to communicate with each other using ping and telnet on ports 9200 and 9300.

Is this the only message on your logs? I think there will also be some error messages, including stack traces, explaining the issue, although you might have to wait a few minutes for them to appear.

Okay, I checked both of those ports: 9300 isn't even listed, but 9200 has multiple working connections.

I will keep an eye out for any more errors that come through; so far it has just been the same error over and over again.

I think your problem is that you're using port 9200 in discovery.seed_hosts rather than 9300, but I am surprised that this is not yielding more error messages.

It just gave me this one.

Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (48,54,54,50)
at org.elasticsearch.transport.TcpTransport.readHeaderBuffer(TcpTransport.java:841) ~[elasticsearch-7.1.1.jar:7.1.1]

It's always a good idea to share the whole stack trace. It's normally quite hard to help from just a line or two.

In this case, that error is consistent with what I said above.
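As an aside, the four bytes in that message are printed in hexadecimal, and decoding them (a quick sketch) shows that plain HTTP traffic was arriving where a transport message was expected, which matches the port mix-up:

```shell
# The error reported header bytes (48,54,54,50) in hex.
# Decoding them as ASCII characters:
printf '\x48\x54\x54\x50\n'
# prints: HTTP
```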


Would you suggest I continue down this path, with a cluster of nodes each running all three services, or would you suggest doing it the way the Elastic website says I should set this up? That is, each node runs one service and I make the cluster out of that.

@thev0yager
As David said, if you post your config file (elasticsearch.yml) and more log entries, someone will be able to help you.
This seems like a simple config problem.

cluster.name: choice-cluster
#node.name: corp-elk02
node.master: true
node.data: true
network.host: 10.101.0.141
http.port: 9300
discovery.seed_hosts: ["10.101.0.140:9300","10.101.0.141:9300"]
cluster.initial_master_nodes: ["10.101.0.140:9300","10.101.0.141:9300"]
xpack.security.enabled: false

My other node has an identical yml file except that the node name is different.

This is inconsistent with the messages you shared above. The message said this:

... discovery will continue using [10.101.0.140:9200, 10.101.0.141:9200] from hosts providers...

however your config now says this:

discovery.seed_hosts: ["10.101.0.140:9300","10.101.0.141:9300"]

Note the different port numbers.

If you've updated your config you should now be getting different messages. Did you update your config? What is Elasticsearch saying now? Can you share all the logs emitted for the first couple of minutes after startup, from all of your nodes?

Also your original post was about three nodes, but you seem to be talking about two nodes now. How many nodes are there?
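For reference, a corrected sketch of the config above. This is hedged: the node names are assumptions (only corp-elk02 appears, commented out, in the original), and it assumes you want the default HTTP port back, since http.port: 9300 collides with the transport port and explains the HTTP-bytes error earlier in the thread:

```yaml
cluster.name: choice-cluster
node.name: corp-elk02                 # uncommented; unique per node (corp-elk01 on the other node)
node.master: true
node.data: true
network.host: 10.101.0.141            # each node uses its own address here
http.port: 9200                       # 9200 is HTTP; 9300 is the transport port
discovery.seed_hosts: ["10.101.0.140:9300", "10.101.0.141:9300"]
cluster.initial_master_nodes: ["corp-elk01", "corp-elk02"]   # node names, not host:port
xpack.security.enabled: false
```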