How to set up multiple clusters

Hi, I am new to ES and I want to use multiple nodes to store data.
My OS is Windows 10.
How do I set this up? Do I tune the config, or do something else?

This is my elasticsearch.yml:

Thank you in advance!

Hi,

Please mention your requirements: how many nodes do you have for the ES cluster configuration?
For an ES cluster the best approach is to use more than 2 nodes, and then do the configuration settings accordingly.

I need two nodes to store data.
But I don't know how to set them up ><

With a two-node setup, you should follow these steps (there is a minimal sketch after the list):
1: make sure the cluster name in the elasticsearch.yml file is the same on both nodes.
2: both VM IPs (both nodes' IPs) should be listed in the discovery.zen.ping.unicast.hosts parameter, as:
discovery.zen.ping.unicast.hosts: ["host1 IP", "host2 IP"]
3: both nodes should be reachable, i.e. able to communicate with each other. For this, make sure both host IPs are in the hosts file on each node (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows).
4: the network.host parameter should be set to each node's own IP, not localhost, otherwise the nodes cannot reach each other.
5: ports 9200 and 9300 should be open on both nodes.
For more details refer to this:
https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html
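
As a rough sketch, elasticsearch.yml on each node could look something like this (the cluster name, node name, and IPs below are only placeholders, substitute your own):

cluster.name: my-cluster
node.name: node-1                # use node-2 on the other machine
network.host: 192.168.0.10       # this node's own IP (192.168.0.11 on the other machine)
discovery.zen.ping.unicast.hosts: ["192.168.0.10", "192.168.0.11"]
discovery.zen.minimum_master_nodes: 2   # (2 master-eligible nodes / 2) + 1 = 2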

Thanks for your reply :slight_smile: This is my config.
Is it set up correctly?

# ======================== Elasticsearch Configuration =========================

# ---------------------------------- Cluster -----------------------------------

# Use a descriptive name for your cluster:

#cluster.name: my-application

# ------------------------------------ Node ------------------------------------

# Use a descriptive name for the node:

#node.name: node-1

# Add custom attributes to the node:

#node.attr.rack: r1

# ----------------------------------- Paths ------------------------------------

# Path to directory where to store the data (separate multiple locations by comma):

path.data: /path/to/data

# Path to log files:

#path.logs: /path/to/logs

# ----------------------------------- Memory -----------------------------------

# Lock the memory on startup:

#bootstrap.memory_lock: true

# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.

# Elasticsearch performs poorly when the system is swapping the memory.

# ---------------------------------- Network -----------------------------------

# Set the bind address to a specific IP (IPv4 or IPv6):

network.host: 192.168.0.159

# Set a custom port for HTTP:

#http.port: 9200

# For more information, consult the network module documentation.

# --------------------------------- Discovery ----------------------------------

# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]

discovery.zen.ping.unicast.hosts: ["192.168.0.112", "192.168.0.159"]

# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):

#discovery.zen.minimum_master_nodes: 3

# For more information, consult the zen discovery module documentation.

# ---------------------------------- Gateway -----------------------------------

# Block initial recovery after a full cluster restart until N nodes are started:

#gateway.recover_after_nodes: 3

# For more information, consult the gateway module documentation.

# ---------------------------------- Various -----------------------------------

# Require explicit names when deleting indices:

#action.destructive_requires_name: true

All configuration settings are correct except one thing:
make sure the network.host value is the node-specific IP on each node,
i.e. the two values should differ, each matching its own node's IP (see the example below).
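
For your two machines that would be:

network.host: 192.168.0.159 on the first PC
network.host: 192.168.0.112 on the other PC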

This is my other PC's config.

# ---------------------------------- Cluster -----------------------------------

# Use a descriptive name for your cluster:

#cluster.name:my-application

# ------------------------------------ Node ------------------------------------

# Use a descriptive name for the node:

node.name:howard

# Add custom attributes to the node:

#node.attr.rack: r1

# ----------------------------------- Paths ------------------------------------

# Path to directory where to store the data (separate multiple locations by comma):

#path.data: /path/to/data

# Path to log files:

#path.logs: /path/to/logs

# ----------------------------------- Memory -----------------------------------

# Lock the memory on startup:

#bootstrap.memory_lock: true

# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.

# Elasticsearch performs poorly when the system is swapping the memory.

# ---------------------------------- Network -----------------------------------

# Set the bind address to a specific IP (IPv4 or IPv6):

network.host:192.168.0.112

# Set a custom port for HTTP:

#http.port: 9200

# For more information, consult the network module documentation.

# --------------------------------- Discovery ----------------------------------

# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]

discovery.zen.ping.unicast.hosts: ["192.168.0.112", "192.168.0.159"]

# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):

#discovery.zen.minimum_master_nodes: 3

# For more information, consult the zen discovery module documentation.

# ---------------------------------- Gateway -----------------------------------

# Block initial recovery after a full cluster restart until N nodes are started:

#gateway.recover_after_nodes: 3

# For more information, consult the gateway module documentation.

# ---------------------------------- Various -----------------------------------

# Require explicit names when deleting indices:

#action.destructive_requires_name: true

When I run Elasticsearch,
it throws this error:
C:\ELK\elasticsearch-5.5.1\bin>elasticsearch
Exception in thread "main" ElasticsearchParseException[malformed, expected settings to start with 'object', instead was [VALUE_STRING]]
at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:73)
at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:52)
at org.elasticsearch.common.settings.loader.YamlSettingsLoader.load(YamlSettingsLoader.java:50)
at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1044)
at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:1033)
at org.elasticsearch.node.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:100)
at org.elasticsearch.cli.EnvironmentAwareCommand.createEnv(EnvironmentAwareCommand.java:72)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122)
at org.elasticsearch.cli.Command.main(Command.java:88)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84)

Try the correct syntax for the cluster name and node name parameters:
cluster.name: clustername
(parameter name, then a colon, then a space, then the value)

node.name: nodename
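
For example, taking the lines from your config above, these fail to parse:

node.name:howard
network.host:192.168.0.112

while these are parsed correctly:

node.name: howard
network.host: 192.168.0.112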

@poojagupta thank you!
I have two nodes now.
"nodes" : {
"CjNQHxfNTcaJ5jsDfNsx9w" : {
"name" : "howard",
"ephemeral_id" : "Y1rP56oARwuZvVJM_ZEQvQ",
"transport_address" : "192.168.0.112:9300",
"attributes" : { }
},
"NbhszMcSR-Cd9ebmlmww2g" : {
"name" : "test1",
"ephemeral_id" : "crxQ45f7TeidP4gazU54Zw",
"transport_address" : "192.168.0.159:9300",
"attributes" : { }
}
},

But on howard's ES command line it shows me this warning:

[2017-11-28T15:28:21,814][WARN ][o.e.g.DanglingIndicesState] [howard] [[.kibana/xPzZrGx2QOqpPd1IZ6LLLA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2017-11-28T15:28:21,814][WARN ][o.e.g.DanglingIndicesState] [howard] [[packetbeat-6.0.0-2017.11.24/xasMRFSFSdqhp3wZmHmHXQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2017-11-28T15:28:21,815][WARN ][o.e.g.DanglingIndicesState] [howard] [[.kibana/on41jbWOTvq0U94uTy_TfQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2017-11-28T15:28:21,818][WARN ][o.e.g.DanglingIndicesState] [howard] [[logstash-2017.11.24/mNhVZ-YkTEeLS96NLkVExA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata

Looking at the logs, it seems there is duplication in the indices: you created some indices on that node earlier, and they now overlap with the cluster metadata while starting the cluster.
To work around it you can try changing the node name and restarting the ES cluster. You can also check which indices exist on each side, as shown below.
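
To list the indices each node currently knows about, you can open http://192.168.0.112:9200/_cat/indices?v in a browser (and the same on the other node's IP); that should show where the duplicate index names are coming from.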

I changed one of the node names, howard -> Howard,
but the situation is the same. Or should I delete all the duplicate indices?
Thank you!
"nodes" : {
"NbhszMcSR-Cd9ebmlmww2g" : {
"name" : "test1",
"ephemeral_id" : "crxQ45f7TeidP4gazU54Zw",
"transport_address" : "192.168.0.159:9300",
"attributes" : { }
},
"CjNQHxfNTcaJ5jsDfNsx9w" : {
"name" : "Howard",
"ephemeral_id" : "KxM_vQvwRC6ZvyFZ3OroTg",
"transport_address" : "192.168.0.112:9300",
"attributes" : { }
}
},

Or is it because I didn't set the data path separately?

Yes, there is an issue with your previously created indices. To resolve it, clear all the old data (old indices) and also change the cluster name on all nodes, then try to join them again, i.e. start Elasticsearch on both nodes after changing the cluster name and deleting the indices. Doing so will create a new cluster with new data. The steps are sketched below.
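
Concretely, something like this (assuming the data lives in the default location under the install directory, e.g. C:\ELK\elasticsearch-5.5.1\data, on the node where path.data is commented out; adjust the path on the node where you set path.data yourself):

1: stop Elasticsearch on both nodes
2: delete the data folder on both nodes
3: give both nodes a new, identical cluster name in elasticsearch.yml, e.g.:
cluster.name: my-new-cluster
4: start Elasticsearch on both nodes again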

Now I have two nodes in the same cluster; one node.name is howard, the other is test1.
But on the node.name: howard host I query the cluster state:
http://192.168.0.116:9200/_cluster/state?pretty
[screenshot of the _cluster/state output]

On node.name: test1:
http://192.168.0.159:9200/_cluster/state?pretty
[screenshot of the _cluster/state output]
there's only one node.
I don't know how to explain it; I think it should display both nodes on each host.

thank you very much!
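
One thing worth checking, judging from the configs posted above: discovery.zen.minimum_master_nodes is still commented out. With two master-eligible nodes, each one can then elect itself as master and you end up with two separate one-node clusters, which would match each host showing only itself. Using the formula from the config comment, (2 master-eligible nodes / 2) + 1 = 2:

discovery.zen.minimum_master_nodes: 2

Set this on both nodes and restart them. The caveat: with this setting a two-node cluster cannot elect a master when either node is down, which is why more than 2 nodes were recommended earlier in this thread.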
