I have installed the ELK stack on two nodes (node1 and node2) and configured elasticsearch.yml on both with the same cluster name, with the node names set to node1 and node2 respectively, but node2 is not replicating with node1.
Secondary node2:
cluster.name: "xxxLogs"
node.name: "node2"
node.master: true (if I comment this out it works; if I set it to either true or false, the Kibana page does not load for me)
node.data: true
network.host: "Host IP"
Are there any other steps needed for node2 to auto-discover and join the same cluster?
Could you please help me configure the cluster so that node2 is auto-discovered into the same cluster and replicates with node1?
Query2:
If I configure index patterns, is it possible to keep two different data sets on the two nodes in the same cluster, with both visible in the Kibana dashboard? Or is there another way to achieve this?
I am not using multicast discovery, and I checked the logs; there is no replication between the nodes.
If you're using unicast discovery, what do the unicast configuration options in elasticsearch.yml look like? With unicast discovery nodes must be explicitly configured to find each other.
I am not using either multicast or unicast. Below is my elasticsearch.yml:
# Unicast discovery allows to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
#
# 1. Disable multicast discovery (enabled by default):
#
#discovery.zen.ping.multicast.enabled: false
#
# 2. Configure an initial list of master nodes in the cluster
# to perform discovery when new nodes (master or data) are started:
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]
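Both discovery settings above are still commented out, so neither node knows how to find the other. For unicast discovery to work, the two lines would need to be uncommented and pointed at the cluster's nodes. A minimal sketch, assuming pre-7.x zen discovery and using the node hostnames as placeholders for your actual addresses:

```yaml
# Disable multicast and explicitly list the hosts that new nodes
# contact to discover the cluster (use your real hostnames or IPs):
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
```

The same list should be present on both nodes so that whichever node starts second can find the one already running.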
If I configure index patterns, is it possible to keep two different data sets from different storages on the two nodes in the same cluster, with both visible in the Kibana dashboard? Or is there another way to achieve this?
Sorry, I don't know what you're asking here. You want to control the mapping between indexes and the underlying storage?
You've managed to get a split brain situation, i.e. you've basically forked your cluster. In a proper cluster there is only one set of indexes containing data and you can query any node and get the same results. Where the data is physically stored is irrelevant for clients making queries.
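The usual guard against split brain with zen discovery is requiring a quorum of master-eligible nodes before a master can be elected. A sketch, assuming a pre-7.x cluster with two master-eligible nodes:

```yaml
# Quorum = (number of master-eligible nodes / 2) + 1.
# With two master-eligible nodes that is 2, so a lone node can
# never elect itself master and fork the cluster:
discovery.zen.minimum_master_nodes: 2
```

Note the trade-off: with only two nodes and a quorum of 2, the cluster becomes unavailable for writes if either node goes down, which is why three master-eligible nodes are generally recommended.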
Everything's fine and you're focusing too much on what data is stored on each node.
How did you reach the conclusion that node1 has new logs and node2 has old logs? Are you not able to access a unified view of the cluster by using the REST APIs?
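One way to check is to call GET /_cluster/state/master_node on each node and compare the elected master ids; if they differ, each node has formed its own cluster. A minimal sketch in Python (the JSON responses below are hypothetical, not taken from your cluster):

```python
def same_cluster(state_a, state_b):
    """Two nodes belong to the same formed cluster only if they agree
    on the id of the elected master node."""
    return state_a.get("master_node") == state_b.get("master_node")

# Hypothetical responses from GET /_cluster/state/master_node on each node:
node1_state = {"cluster_name": "xxxLogs", "master_node": "Ab12Cd34"}
node2_state = {"cluster_name": "xxxLogs", "master_node": "Ef56Gh78"}

# Same cluster.name but different elected masters -> split brain.
print(same_cluster(node1_state, node2_state))  # prints False
```

If both nodes report the same master id, they are in one cluster and any node can serve a unified view of all indexes.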