I already have one node in the cluster. Now I would like to add one more node to the same cluster. I have made a few changes in the .yml file, but the new node is still not added. Please find the .yml file below:
NOTE: Elasticsearch comes with reasonable defaults for most settings.
Before you set out to tweak and tune the configuration, make sure you
understand what you are trying to accomplish and the consequences.
The primary way of configuring a node is via this file. This template lists
the most important settings you may want to configure for a production cluster.
Please consult the documentation for further information on configuration options:
Please find the code for the config.yml file. I am not sure how to specify the details of two nodes in one config file.
Should I maintain two config files for the two nodes?
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
# cluster.name: Elasticsearch
#
# ------------------------------------ Node1 ------------------------------------
#
# Use a descriptive name for the node:
# node.name: shyamala ponnada
node.master: true
node.data: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ------------------------------------ Node2 ------------------------------------
# node.name: Production
node.master: true
node.data: false
#
# Add custom attributes to the node:
#
#node.attr.rack: r2
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.225.34.36
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.multicast.enabled: false
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: Elasticsearch
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: shyamala ponnada
node.master: true
node.data: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# Use a descriptive name for the node:
#
node.name: Production
node.master: true
node.data: false
#
# Add custom attributes to the node:
#
#node.attr.rack: r2
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.225.34.36
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["shyamala ponnada","Production"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
# ----------------------------------- CORS ------------------------------------
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type, Content-Length
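The split-brain comment in the Discovery section above is a formula worth checking against real numbers: in this config both nodes are master-eligible, so the quorum is 2, and the commented-out `minimum_master_nodes: 3` would be too high for a two-node cluster. A quick arithmetic sketch of that formula (plain integer math, not an Elasticsearch API):

```python
def minimum_master_nodes(master_eligible: int) -> int:
    # Quorum per the elasticsearch.yml comment:
    # total number of master-eligible nodes / 2 + 1 (integer division)
    return master_eligible // 2 + 1

for n in (1, 2, 3, 5):
    print(n, "master-eligible ->", minimum_master_nodes(n))
# 2 master-eligible nodes -> quorum of 2; 3 nodes -> quorum of 2; 5 -> 3
```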
To create another node in the same cluster, start another instance of Elasticsearch, just as you did for the first node, and give it the same cluster name. When you then check the cluster health, you should see the number of nodes as 2.
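On the earlier question of one file versus two: each node reads its own elasticsearch.yml, so two nodes on one machine means two config files (or two config directories), not two node sections in one file. A minimal sketch of what the second node's file might look like; the node name, ports, and paths below are illustrative placeholders, not values from the original post:

```yaml
# elasticsearch.yml for the second instance (hypothetical values)
cluster.name: Elasticsearch          # must match node1 exactly
node.name: node2                     # must differ from node1
node.master: true
node.data: true
path.data: /var/lib/elasticsearch2   # each instance needs its own data dir
path.logs: /var/log/elasticsearch2
network.host: 10.225.34.36
http.port: 9201                      # node1 already uses 9200
transport.tcp.port: 9301             # node1's transport port defaults to 9300
discovery.zen.ping.unicast.hosts: ["10.225.34.36:9300", "10.225.34.36:9301"]
```

Note that the unicast host list takes host:transport-port addresses, not node names.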
I have one doubt.
I have only one Linux server, on which I have installed Elasticsearch and created node1; I can successfully send QA data to it. Now I want to create node2 on the same Linux server, in the same cluster, and send PROD data to that second node. How should I approach this?
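One common way to keep PROD data on a specific node is shard allocation filtering: tag each node with a custom attribute in its own config file, then require that attribute on the PROD indices. A sketch under assumed names (the attribute `box_type` and the index `prod-index` are illustrative, not from the original post):

```
# In node2's elasticsearch.yml:
node.attr.box_type: prod

# Then, when creating the PROD index (via curl or Kibana Dev Tools):
PUT prod-index
{
  "settings": {
    "index.routing.allocation.require.box_type": "prod"
  }
}
```

With that setting, shards of `prod-index` are only allocated to nodes carrying the matching attribute.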
I installed Elasticsearch through RPM. I now have two instances of ES running, but when I check the node count, it shows only one node. I gave the second instance a different port number. When I check the status of elasticsearch2 (the second instance of ES), I get the message below.
[root@LOUSSPLQS10 share]# service elasticsearch2 status
● elasticsearch2.service - LSB: This service manages the elasticsearch daemon
Loaded: loaded (/etc/rc.d/init.d/elasticsearch2; bad; vendor preset: disabled)
Active: active (exited) since Thu 2017-08-03 14:16:16 EDT; 25min ago
Docs: man:systemd-sysv-generator(8)
Process: 1789 ExecStop=/etc/rc.d/init.d/elasticsearch2 stop (code=exited, status=0/SUCCESS)
Process: 1793 ExecStart=/etc/rc.d/init.d/elasticsearch2 start (code=exited, status=0/SUCCESS)
Aug 03 14:16:10 LOUSSPLQS10.rsc.humad.com systemd[1]: Starting LSB: This service manages the elasticsearch daemon...
Aug 03 14:16:11 LOUSSPLQS10.rsc.humad.com runuser[1797]: pam_limits(runuser:session): invalid line 'elasticsearch nofile 65536' - skipped
Aug 03 14:16:11 LOUSSPLQS10.rsc.humad.com runuser[1797]: pam_unix(runuser:session): session opened for user elasticsearch by (uid=0)
Aug 03 14:16:16 LOUSSPLQS10.rsc.humad.com elasticsearch2[1793]: Starting elasticsearch:(node 2)[ OK ]
Aug 03 14:16:16 LOUSSPLQS10.rsc.humad.com elasticsearch2[1793]: touch: cannot touch '/var/lock/subsys/elasticsearch/2': No such file or directory
Aug 03 14:16:16 LOUSSPLQS10.rsc.humad.com systemd[1]: Started LSB: This service manages the elasticsearch daemon.
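Separately, the pam_limits warning in the log above points at a malformed limits entry: limits.conf lines take four fields (domain, type, item, value), and `elasticsearch nofile 65536` is missing the type, so the limit is skipped. A sketch of the corrected line, assuming it lives in /etc/security/limits.conf or a drop-in under /etc/security/limits.d/:

```
# /etc/security/limits.conf — four fields: <domain> <type> <item> <value>
# "-" sets both the soft and hard limits
elasticsearch  -  nofile  65536
```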