Installed and configured Elasticsearch (multi-node) cluster on CentOS


(Anantha Rao Naidu) #1

Greetings,

I have two queries here,

Query1:

I have installed ELK with two nodes (node1 and node2). Both are configured in elasticsearch.yml with the same cluster name and with node names node1 and node2 respectively, but node2 is not replicating with node1.

My elasticsearch.yml config:

Primary node1:
cluster.name: "xxxLogs"
node.name: "node1"
node.master: true
node.data: true
network: "Host IP"

Secondary node2:
cluster.name: "xxxLogs"
node.name: "node2"
node.master: true (if I comment this line out it works; with it set to either true or false, the Kibana page does not load for me)
node.data: true
network: "Host IP"

Are there any additional steps needed for node2 to be auto-discovered into the same cluster?

Could you please help me configure the cluster so that node2 is discovered into the same cluster and replicates with node1?

Query2:

Please suggest: if I configure index patterns, is it possible to keep two different sets of data on the two nodes in the same cluster and see both in the Kibana dashboard, or is there another way?

Thanks in advance.


(Magnus Bäck) #2
network: "Host IP"

I'm not sure network is a valid configuration keyword.

Are you using multicast discovery? Have you looked in the logs of either machine? Are they able to find each other?


(Anantha Rao Naidu) #3

By network: "Host IP" I meant my server's IP address.

I am not using multicast discovery, and I checked the logs; there is no replication between the nodes.

Can you shed some light on the steps for the nodes to find each other in the cluster?


(Mark Walkom) #4

What version did you install? 2.0 doesn't do multicast anymore.


(Magnus Bäck) #5

I am not using multicast discovery, and I checked the logs; there is no replication between the nodes.

If you're using unicast discovery, what do the unicast configuration options in elasticsearch.yml look like? With unicast discovery nodes must be explicitly configured to find each other.
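A minimal sketch of such a unicast setup (the host names here are placeholders, not taken from this thread) would look the same on both nodes:

```yaml
# Hedged sketch for a 2.x-era cluster: both nodes disable multicast and
# list the same set of master-eligible hosts. Replace the placeholder
# host names with your own addresses.
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-host1", "es-host2"]
```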


(Anantha Rao Naidu) #7

I am not using either multicast or unicast; below is my elasticsearch.yml:

# Unicast discovery allows to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
#
# 1. Disable multicast discovery (enabled by default):
#
#discovery.zen.ping.multicast.enabled: false
#
# 2. Configure an initial list of master nodes in the cluster
#    to perform discovery when new nodes (master or data) are started:
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]

(Magnus Bäck) #8

For best results, please

  • post non-comment lines from elasticsearch.yml (you can use sed '/^#/d; /^$/d' /etc/elasticsearch/elasticsearch.yml to collect them),
  • format them as preformatted (Ctrl+K), and
  • pay attention to the preview pane to the right.
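As an illustration of that sed filter (using a made-up sample file, not the real /etc/elasticsearch/elasticsearch.yml):

```shell
# Create a small sample config containing comments and blank lines.
printf '%s\n' \
  '# Cluster settings' \
  'cluster.name: ELKTEST' \
  '' \
  '# Node settings' \
  'node.name: "node1"' > sample.yml

# Drop comment lines (starting with #) and empty lines,
# leaving only the active settings.
sed '/^#/d; /^$/d' sample.yml
# prints:
#   cluster.name: ELKTEST
#   node.name: "node1"
```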

(Anantha Rao Naidu) #9

Thanks for your update. Please find below the non-comment lines from elasticsearch.yml:

Node1
cluster.name: ELKTEST
node.name: "node1"
network.host: 192.168.44.129

Node2
cluster.name: ELKTEST
node.name: "node2"
network.host: 192.168.44.130


(Magnus Bäck) #10

Use unicast discovery as described here: https://www.elastic.co/guide/en/elasticsearch/guide/current/_important_configuration_changes.html#_prefer_unicast_over_multicast

(And make sure each node has a unique node name. In your example above both are named node2.)


(Anantha Rao Naidu) #11

Thanks. Sorry, that was a typo; the names are node1 and node2.


(Anantha Rao Naidu) #12

We have made the changes below on the servers but are unable to see the data sync.

cluster.name: ELKTEST
node.name: "node2"
network.host: 192.168.44.130
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.44.129", "192.168.44.130:9200"]

cluster.name: ELKTEST
node.name: "node1"
network.host: 192.168.44.129
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.44.129", "192.168.44.130:9200"]


(Christian Dahlqvist) #13

Elasticsearch nodes communicate internally on port 9300, not 9200, which is used for the REST interface.
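In other words, the unicast host list should target the transport port. A hedged sketch using the addresses from earlier in the thread:

```yaml
# 9200 = HTTP/REST (curl, Kibana); 9300 = internal transport (node-to-node).
# When no port is given, the transport port (9300 by default) is assumed
# for unicast hosts.
discovery.zen.ping.unicast.hosts: ["192.168.44.129:9300", "192.168.44.130:9300"]
```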


(Anantha Rao Naidu) #14

Data is still not syncing from the master node (node1) to the data node (node2) after the changes below.

cluster.name: ELKTEST
node.name: "node1"
network.host: 192.168.44.129
transport.tcp.port: 9300
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.44.129", "192.168.44.130:9300"]

cluster.name: ELKTEST
node.name: "node2"
network.host: 192.168.44.130
transport.tcp.port: 9300
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.44.130", "192.168.44.129:9300"]


(Magnus Bäck) #15

What's in the logs? I'm sure there's something discovery-related there.


(Anantha Rao Naidu) #16

Thanks for your help.

With the configuration below we are able to sync data from the master node (node1) to the data node (node2):

cluster.name: ELKTEST
node.name: "node2"
network.host: 192.168.44.130
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.44.130", "192.168.44.129:9300"]

cluster.name: ELKTEST
node.name: "node1"
network.host: 192.168.44.129
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.44.129", "192.168.44.130:9300"]
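As a sanity check (not part of the original post; assumes the cluster above is up), either node's REST port can confirm that both nodes joined:

```shell
# Run against either node; in a healthy two-node cluster both nodes
# return the same node list and a cluster health status.
curl -s 'http://192.168.44.129:9200/_cat/nodes?v'
curl -s 'http://192.168.44.129:9200/_cluster/health?pretty'
```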


(Anantha Rao Naidu) #17

Query1 is resolved.

Can you help us with query2 below?

Query2:

Please suggest: if I configure index patterns, is it possible to keep two different sets of data, from different storage, on the two nodes in the same cluster and see both in the Kibana dashboard, or is there another way?

Thanks in advance.


(Magnus Bäck) #18

Please suggest: if I configure index patterns, is it possible to keep two different sets of data, from different storage, on the two nodes in the same cluster and see both in the Kibana dashboard, or is there another way?

Sorry, I don't know what you're asking here. You want to control the mapping between indexes and the underlying storage?


(Anantha Rao Naidu) #19

I have configured the node1 and node2 servers, and each holds different data. Can I see data from both nodes in a single search on the Kibana dashboard?

Node1 has New logs
Node2 has Old logs

If that's possible, please provide me some steps to configure it.

Thanks.


(Magnus Bäck) #20

I think we have one of these two situations:

  • You've managed to get a split brain situation, i.e. you've basically forked your cluster. In a proper cluster there is only one set of indexes containing data and you can query any node and get the same results. Where the data is physically stored is irrelevant for clients making queries.
  • Everything's fine and you're focusing too much on what data is stored on each node.

How did you reach the conclusion that node1 has new logs and node2 has old logs? Are you not able to access a unified view of the cluster by using the REST APIs?
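One hedged way to check which of the two situations you are in (assuming the IPs from earlier in the thread): list the indices from each node and compare. In a single healthy cluster both listings are identical, and a search sent to either node covers all data in the cluster.

```shell
# If these two listings match, you have one cluster and any query to
# either node transparently searches shards on both nodes.
curl -s 'http://192.168.44.129:9200/_cat/indices?v'
curl -s 'http://192.168.44.130:9200/_cat/indices?v'
```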


(Anantha Rao Naidu) #21

Yes, I am in that situation; I have stored my data as below:

Node1 has New logs (Replica Stopped and imported New logs in Node1)
Node2 has Old logs

Is it possible to see both sets of data in a single search on the Kibana dashboard?