Need help with setting up 1 master node and 2 data nodes

Hello All, 

As we are implementing ELK in our org as a pilot project, I'm trying to set up a 3-node Elasticsearch cluster (all nodes act as both master and data nodes). I have created Ubuntu servers in the Azure cloud with specific hardware configs. I'm stuck at configuring the elasticsearch.yml file on the master node (10.208.xx.xx); mainly, I'd like to know which configs I need to change in the .yml file.

elasticsearch.yml on master node (10.208.xx.xx)

 
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: elastic-dev
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
node.master: true
node.data: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.208.xx.xx
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["10.208.xx.xx", "10.208.yy.yy"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["10.208.xx.xx", "10.208.yy.yy"]
discovery.zen.minimum_master_nodes: 2
#discovery.zen.ping.unicast.hosts: ["10.208.xx.xx", "10.208.yy.yy"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: instance/instance.key
xpack.security.http.ssl.certificate: instance/instance.crt
xpack.security.http.ssl.certificate_authorities: ca/ca.crt
xpack.security.transport.ssl.key: instance/instance.key
xpack.security.transport.ssl.certificate: instance/instance.crt
xpack.security.transport.ssl.certificate_authorities: ca/ca.crt


#xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
#xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
#xpack.security.authc.api_key.enabled: true
#

# This turns on SSL for the HTTP (Rest) interface
#xpack.security.http.ssl.enabled: true
#xpack.security.http.ssl.keystore.path: "http.p12"
#elasticsearch.ssl.certificateAuthorities: "ca.p12"
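
(Side note: the instance/ and ca/ paths above are resolved relative to the Elasticsearch config directory. A rough sketch of how such PEM files are typically generated with elasticsearch-certutil — the names, output paths and IP below are illustrative, not necessarily what I used:)

    # create a CA in PEM format (the zip contains ca/ca.crt and ca/ca.key)
    /usr/share/elasticsearch/bin/elasticsearch-certutil ca --pem --out ca.zip
    # per node: create a cert/key signed by that CA (name and IP are illustrative)
    /usr/share/elasticsearch/bin/elasticsearch-certutil cert --pem \
      --ca-cert ca/ca.crt --ca-key ca/ca.key \
      --name instance --ip 10.208.xx.xx --out instance.zip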

"Elasticsearch.yml on data node"

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: elastic-dev
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-2
node.master: true
node.data: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.208.yy.yy
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["10.208.xx.xx", "10.208.yy.yy"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["10.208.xx.xx", "10.208.yy.yy"]
discovery.zen.minimum_master_nodes: 2
#discovery.zen.ping.unicast.hosts: ["10.208.xx.xx", "10.208.yy.yy"]

#
# For more information, consult the discovery and cluster formation module documentation.

# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: instance/instance.key
xpack.security.http.ssl.certificate: instance/instance.crt
xpack.security.http.ssl.certificate_authorities: ca/ca.crt
xpack.security.transport.ssl.key: instance/instance.key
xpack.security.transport.ssl.certificate: instance/instance.crt
xpack.security.transport.ssl.certificate_authorities: ca/ca.crt

#xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
#xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
#xpack.security.authc.api_key.enabled: true
#

# This turns on SSL for the HTTP (Rest) interface
#xpack.security.http.ssl.enabled: true
#xpack.security.http.ssl.keystore.path: "http.p12"
#elasticsearch.ssl.certificateAuthorities: "ca.p12"

Note 1: I have added the CA cert and instance cert on the data node as well, similar to the master node.

Note 2: When I curl the node, it says "Empty reply from server" (curl -k -l 10.208.xx.xx:9200).

My data node is also not showing as connected to the cluster when I run
"GET _cluster/health?pretty" from the Kibana UI.

Please let me know if any changes need to be made.

TIA

Uncomment the port and make sure the firewall ports are open on the VM: 9200/TCP on all nodes. Often forgotten.

All 3 nodes will be data nodes in your setup, which is a good thing btw. In a 3-node setup you don't need to explicitly call out the master and data roles, so comment those lines out. Your setup is not using a dedicated coordinating node. Also comment out discovery.zen.minimum_master_nodes. I'm guessing this is not a licensed cluster, and on the Basic license it will default to 1 elected master unless something changed recently.
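
Roughly, the node/discovery part of elasticsearch.yml on node-1 could then look like the sketch below (node-2 and node-3 analogous, each with their own node.name and network.host; this assumes 7.x, where the old zen setting is ignored anyway):

    cluster.name: elastic-dev
    node.name: node-1
    # node.master / node.data omitted: both default to true in 7.x
    network.host: 10.208.xx.xx
    discovery.seed_hosts: ["10.208.xx.xx", "10.208.yy.yy"]
    # must match the node.name values exactly; only used on the very first cluster start
    cluster.initial_master_nodes: ["node-1", "node-2"]
    # discovery.zen.minimum_master_nodes removed: deprecated and ignored in 7.x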

  1. Checked the firewall on all nodes (sudo firewall-cmd --list-ports, output: 9200/tcp)
  2. Commented out node.master and node.data in both .yml files, and also discovery.zen.minimum_master_nodes
  3. From /var/log/, I could see this is a "basic" license

After restarting the service,

on Node1:

        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
        at java.lang.Thread.run(Thread.java:832) [?:?]
    [2020-10-14T14:59:22,832][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[ilm-history-2-000001][0], [logstash-2020.09.22-000001][0]]]).

On Node2:

      tail -f /var/log/elasticsearch/elastic-dev.log

    [2020-10-14T14:57:52,469][TRACE][o.e.d.PeerFinder         ] [node-2] deactivating and setting leader to {node-2}{L9s_NdEjRY-XXXXXXXXXXX}{f731TrXXXXXXXXXXXXXXXXX}{10.208.yy.yy}{10.208.yy.yy:9300}{dilmrt}{ml.machine_memory=16791863296, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}
    [2020-10-14T14:57:52,470][TRACE][o.e.d.PeerFinder         ] [node-2] not active
    [2020-10-14T14:57:52,483][INFO ][o.e.c.s.MasterService    ] [node-2] elected-as-master ([1] nodes joined)[{node-2}{L9s_NdEjRY-XXXXXXXXXXX}{f731TrXXXXXXXXXXXXXXXXX}{10.208.yy.yy}{10.208.yy.yy:9300}{dilmrt}{ml.machine_memory=16791863296, xpack.installed=true, transform.node=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 31, version: 123, delta: master node changed {previous [], current [{node-2}{L9s_NdEjRY-XXXXXXXXXXX}{f731TrXXXXXXXXXXXXXXXXX}{10.208.yy.yy}{10.208.yy.yy:9300}{dilmrt}{ml.machine_memory=16791863296, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]}
    [2020-10-14T14:57:52,596][INFO ][o.e.c.s.ClusterApplierService] [node-2] master node changed {previous [], current [{node-2}{L9s_NdEjRY-XXXXXXXXXXX}{f731TrXXXXXXXXXXXXXXXXX}{10.208.yy.yy}{10.208.yy.yy:9300}{dilmrt}{ml.machine_memory=16791863296, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]}, term: 31, version: 123, reason: Publication{term=31, version=123}
    [2020-10-14T14:57:52,639][INFO ][o.e.h.AbstractHttpServerTransport] [node-2] publish_address {10.208.yy.yy:9200}, bound_addresses {10.208.yy.yy:9200}
    [2020-10-14T14:57:52,639][INFO ][o.e.n.Node               ] [node-2] started
    [2020-10-14T14:57:52,842][INFO ][o.e.l.LicenseService     ] [node-2] license [24a9f11a-f80e-42b9-97c1-b64c8dc086ee] mode [basic] - valid
    [2020-10-14T14:57:52,843][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [node-2] Active license is now [BASIC]; Security is enabled
    [2020-10-14T14:57:52,851][INFO ][o.e.g.GatewayService     ] [node-2] recovered [0] indices into cluster_state
    [2020-10-14T14:57:53,300][TRACE][o.e.d.PeerFinder         ] [node-2] not active

The nodes communicate with each other on port 9300, so that port needs to be open between the nodes, not 9200.
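
For example, with firewalld something along these lines opens both ports and lets you check reachability from the other node (default zone assumed; nc is optional, telnet works too):

    # open the transport (9300) and REST (9200) ports permanently, then reload
    sudo firewall-cmd --permanent --add-port=9300/tcp
    sudo firewall-cmd --permanent --add-port=9200/tcp
    sudo firewall-cmd --reload
    # from the other node, check that the transport port is reachable
    nc -zv 10.208.xx.xx 9300

On Azure it is also worth checking that the NSG rules on the VMs/subnet allow this traffic between the nodes.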

I just opened 9300 on both nodes; telnet shows "Connected".

and when I did
curl http://10.208.yy.yy:9300/_cluster/health?pretty

O/P:

curl: (52) Empty reply from server

Note: "when i disabled Xpack , i got the cluster status in JSON (as expected) "

Add this to the yml and see if you get any change:

    xpack.security.transport.ssl.verification_mode: certificate

Nothing, it still shows the same:

curl: (52) Empty reply from server

Is it showing the same output because X-Pack is enabled?

You need to curl port 9200, but both ports 9200 and 9300 need to be open.
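
Also note that with xpack.security.http.ssl.enabled: true the REST endpoint only answers HTTPS, so a plain-HTTP curl typically returns an empty reply. A check along these lines is more representative (credentials are a placeholder, e.g. whatever you set with elasticsearch-setup-passwords):

    curl -k -u elastic:<password> "https://10.208.xx.xx:9200/_cluster/health?pretty"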

In fact, I opened both ports:

$ sudo firewall-cmd --list-ports
9200/tcp 9300/tcp

and applied curl on both ports; it still shows the same.

Did you add SSL?

Yes, I have added SSL, and I have identified the problem:

the new node was acting as a separate cluster by itself, with its own UUID. I executed the command below on this node and it then joined the cluster:

    elasticsearch-node detach-cluster

Now I see the new data node connected with the master.
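
For anyone else hitting this, the sequence looks roughly like the sketch below (paths assume the deb/rpm package layout; the tool has to be run while Elasticsearch is stopped and asks for confirmation):

    # stop the node that bootstrapped its own cluster
    sudo systemctl stop elasticsearch
    # detach it from that cluster state so it can join the existing cluster
    sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch-node detach-cluster
    # start it again; it should now join via discovery.seed_hosts
    sudo systemctl start elasticsearch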
