How to form a cluster?

I have two nodes, shown below, which should belong to the same cluster (cluster name elasticsearch). However, each node reports number_of_nodes as 1 instead of 2. How do I form a cluster?

    {
      "name" : "OANlHwb",
      "cluster_name" : "elasticsearch",
      "cluster_uuid" : "hHfTcg_BTfOH9PBkuOXXww",
      "version" : {
        "number" : "6.0.0",
        "build_hash" : "8f0685b",
        "build_date" : "2017-11-10T18:41:22.859Z",
        "build_snapshot" : false,
        "lucene_version" : "7.0.1",
        "minimum_wire_compatibility_version" : "5.6.0",
        "minimum_index_compatibility_version" : "5.0.0"
      },
      "tagline" : "You Know, for Search"
    }
    {
      "cluster_name" : "elasticsearch",
      "status" : "red",
      "timed_out" : false,
      "number_of_nodes" : 1,
      "number_of_data_nodes" : 1,
      "active_primary_shards" : 2291,
      "active_shards" : 2291,
      "relocating_shards" : 0,
      "initializing_shards" : 4,
      "unassigned_shards" : 2423,
      "delayed_unassigned_shards" : 0,
      "number_of_pending_tasks" : 11,
      "number_of_in_flight_fetch" : 0,
      "task_max_waiting_in_queue_millis" : 450065,
      "active_shards_percent_as_number" : 48.558711318355236
    }

    {
      "name" : "DMSUATOWAS",
      "cluster_name" : "elasticsearch",
      "cluster_uuid" : "Wo11S5vZT_akDT5CHKkeIA",
      "version" : {
        "number" : "6.0.0",
        "build_hash" : "8f0685b",
        "build_date" : "2017-11-10T18:41:22.859Z",
        "build_snapshot" : false,
        "lucene_version" : "7.0.1",
        "minimum_wire_compatibility_version" : "5.6.0",
        "minimum_index_compatibility_version" : "5.0.0"
      },
      "tagline" : "You Know, for Search"
    }
    {
      "cluster_name" : "elasticsearch",
      "status" : "yellow",
      "timed_out" : false,
      "number_of_nodes" : 1,
      "number_of_data_nodes" : 1,
      "active_primary_shards" : 5018,
      "active_shards" : 5018,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 5018,
      "delayed_unassigned_shards" : 0,
      "number_of_pending_tasks" : 0,
      "number_of_in_flight_fetch" : 0,
      "task_max_waiting_in_queue_millis" : 0,
      "active_shards_percent_as_number" : 50.0
    }

Please show your settings, and remember to use the </> (code) button to make sure things are correctly formatted.

Do you mean the elasticsearch.yml file?

Yep.

I have only changed network.host to the node's actual IP in the elasticsearch.yml file, so both nodes use the default cluster name of elasticsearch. Does that work, or do I need to change cluster.name and node.name as well?

That is a LOT of shards for a one- or two-node cluster. I would recommend reducing this significantly before clustering the nodes and enabling replicas. Have a look at this blog post for guidance around shards and sharding.

I haven't configured the shards. Isn't the default 5? Why are there so many shards?

The default is 5 primary shards per index, which may very well be too many for your use case; because the default applies to every index, the total adds up quickly as indices accumulate. The link I provided gives some reasonably generic guidance on target shard size and shard count per node.
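
One way to rein this in for newly created indices is an index template. As a rough sketch (the template name and catch-all pattern below are just placeholders, and existing indices keep their current shard count, so they would have to be reindexed or shrunk):

    PUT _template/fewer_shards
    {
      "index_patterns": ["*"],
      "settings": {
        "index.number_of_shards": 1,
        "index.number_of_replicas": 1
      }
    }

You can run that from Kibana Dev Tools if you have it, or as an equivalent curl PUT against port 9200.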

The most likely cause is that your nodes cannot communicate with one another.
If you didn't configure anything else in your elasticsearch.yml configuration, then the nodes don't know that they're supposed to be part of a multi-node cluster, so if they can't find another node to join up with, they will happily form a single-node cluster on their own.

Without knowing how your network is set up, I can't offer concrete advice on this, but the most likely cause is:

  • You have changed network.host to be the public IP address for your machines. That means it binds to that IP only, and does not bind the localhost/127.0.0.1 interface any more
  • You have not changed discovery.zen.ping.unicast.hosts so it's using the default value which only looks for other nodes on the same machine using the local interface (127.0.0.1 and the equivalent for IPv6)

The fix for you is probably going to be to set discovery.zen.ping.unicast.hosts.
In order to prevent your nodes from creating two separate clusters (when they are unable to communicate) you should also look at either setting discovery.zen.minimum_master_nodes: 2 on both nodes, or set node.master: false on one of the nodes.
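
As a rough sketch, assuming the two machines are reachable at 10.0.0.1 and 10.0.0.2 (placeholders for your real IPs), the relevant parts of elasticsearch.yml would look something like this:

    # Node 1 (10.0.0.1) -- IPs and node names here are placeholders
    cluster.name: elasticsearch
    node.name: node-1
    network.host: 10.0.0.1
    discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2"]
    discovery.zen.minimum_master_nodes: 2

    # Node 2 (10.0.0.2)
    cluster.name: elasticsearch
    node.name: node-2
    network.host: 10.0.0.2
    discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2"]
    discovery.zen.minimum_master_nodes: 2

Both nodes need the same cluster.name (the default elasticsearch already satisfies that); node.name is optional but makes the logs easier to follow. If you instead set node.master: false on one node, leave minimum_master_nodes at its default.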

Note: A 2 node cluster isn't really a good idea, but I assume this is simply a test/lab setup rather than a production scenario.

On one node I have changed network.host to the actual IP in elasticsearch.yml, but it seems that changes in elasticsearch.yml aren't applied even after I restart the Elasticsearch service or reboot the whole server. Elasticsearch can still be accessed at http://127.0.0.1:9200/ but not at the actual IP, even though I have configured network.host to the actual IP. I'm using the Windows version of Elasticsearch. Is there anything wrong with it, or would it be better to install it again?

What you are experiencing doesn't sound right. In these situations it usually turns out to be a configuration problem such as:

  • editing the wrong configuration file
  • not removing the leading # character from the relevant line in the configuration file (see the example at the end of this reply)
  • not actually restarting Elasticsearch (or restarting the wrong instance)

I'm happy to help track it down with you, but there isn't enough detail in your message to narrow down the problem.

Steps you could take are:

  • add some garbage characters to the bottom of your configuration file, and restart ES. The node should fail to start. This will help confirm whether you are editing the correct file.
  • post your config file here for us to check.
  • check the ES logs for more details. The logs tell you what address it's trying to bind to.
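
For reference, this is roughly the difference the leading # makes (the IP is only a placeholder):

    # Commented out -- this line has NO effect:
    #network.host: 192.168.1.10

    # Uncommented -- this is what actually gets applied after a restart:
    network.host: 192.168.1.10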

The elasticsearch.yml is at C:\ELK-Stack\elasticsearch\config (I have removed the IPs below).
I also found that the data folder is missing under the elasticsearch directory, and hence there are no indices. How do I fix it?

    # ======================== Elasticsearch Configuration =========================
    #
    # NOTE: Elasticsearch comes with reasonable defaults for most settings.
    #       Before you set out to tweak and tune the configuration, make sure you
    #       understand what are you trying to accomplish and the consequences.
    #
    # The primary way of configuring a node is via this file. This template lists
    # the most important settings you may want to configure for a production cluster.
    #
    # Please consult the documentation for further information on configuration options:
    # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
    #
    # ---------------------------------- Cluster -----------------------------------
    #
    # Use a descriptive name for your cluster:
    #
    #cluster.name: my-application
    #
    # ------------------------------------ Node ------------------------------------
    #
    # Use a descriptive name for the node:
    #
    #node.name: node-1
    #
    # Add custom attributes to the node:
    #
    #node.attr.rack: r1
    #
    # ----------------------------------- Paths ------------------------------------
    #
    # Path to directory where to store the data (separate multiple locations by comma):
    #
    #path.data: /path/to/data
    #
    # Path to log files:
    #
    #path.logs: /path/to/logs
    #
    # ----------------------------------- Memory -----------------------------------
    #
    # Lock the memory on startup:
    #
    #bootstrap.memory_lock: true
    #
    # Make sure that the heap size is set to about half the memory available
    # on the system and that the owner of the process is allowed to use this
    # limit.
    #
    # Elasticsearch performs poorly when the system is swapping the memory.
    #
    # ---------------------------------- Network -----------------------------------
    #
    # Set the bind address to a specific IP (IPv4 or IPv6):
    #
    network.host:
    #
    # Set a custom port for HTTP:
    #
    #http.port: 9200
    #
    # For more information, consult the network module documentation.
    #
    # --------------------------------- Discovery ----------------------------------
    #
    # Pass an initial list of hosts to perform discovery when new node is started:
    # The default list of hosts is ["127.0.0.1", "[::1]"]
    #
    discovery.zen.ping.unicast.hosts: ["", ""]
    #
    # Prevent the "split brain" by configuring the majority of nodes
    # (total number of master-eligible nodes / 2 + 1):
    #
    discovery.zen.minimum_master_nodes: 2
    #
    # For more information, consult the zen discovery module documentation.
    #
    # ---------------------------------- Gateway -----------------------------------
    #
    # Block initial recovery after a full cluster restart until N nodes are started:
    #
    #gateway.recover_after_nodes: 3
    #
    # For more information, consult the gateway module documentation.
    #
    # ---------------------------------- Various -----------------------------------
    #
    # Require explicit names when deleting indices:
    #
    #action.destructive_requires_name: true

There is also no logs folder created. I noticed that there are no data and logs folders under the elasticsearch directory after extracting the archive. What is wrong with it?

Can you please edit that config and use the </> button to format it as code? It's really hard to read as is.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.