How to install and set up X-Pack on two nodes in one cluster?

I have configured one cluster with two nodes: one is a master node and the other is a data node. Now I have to install and configure X-Pack on these nodes.

Master Node:
cluster.name: ELK_master
node.name: node-master
node.data: false
node.ingest: false
node.master: true
network.host: 172.xx.xx.16

Data Node:
cluster.name: ELK_master
node.name: node-data
node.data: true
node.ingest: false
node.master: false
network.host: 110.xx.xx.108

Have you been through https://www.elastic.co/guide/en/elasticsearch/reference/6.2/setup-xpack.html?

Why would you set up a cluster this way? With this setup you now have two points of failure instead of one.

I know how to install X-Pack on ELK. My question is how to configure one cluster name across two nodes, where the nodes are on different EC2 machines. Do you understand what I'm asking?

I'm learning ELK with multiple nodes. Please, can you explain what mistake I made?

You have provided very little information about what is not working. Have you configured discovery settings so the nodes can find each other?

This is my live config, working fine:
Master Node:
cluster.name: ELK_master
node.name: node-master
node.data: false
node.ingest: false
node.master: true
bootstrap.memory_lock: true
network.host: 172.xx.xx.16
http.port: 9200
transport.tcp.port: 9300
path.data: D:/ELK/elasticsearch-6.2.2/data
path.logs: D:/ELK/elasticsearch-6.2.2/logs
discovery.zen.ping.unicast.hosts: ["110.xx.xx.108:9300"]

Data Node:
cluster.name: ELK_master
node.name: node-data
node.data: true
node.ingest: false
node.master: false
bootstrap.memory_lock: true
network.host: 110.xx.xx.108
http.port: 9200
transport.tcp.port: 9300
path.data: D:/ELK/elasticsearch-6.2.2/data
path.logs: D:/ELK/elasticsearch-6.2.2/logs
discovery.zen.ping.unicast.hosts: ["172.xx.xx.16:9300"]

That looks fine. If this is working well, what is the problem?

If I install X-Pack on those two nodes, how will they communicate with each other?
For example:

  1. Master node protected with username elastic and password master
  2. Data node protected with username elastic and password datanode

Security is managed cluster-wide, so all nodes have the same users and passwords. You do not have different credentials per node.

So I have to set the same username and password for all nodes while installing X-Pack (Elasticsearch and Kibana), right?

When you create users and roles in Kibana or through the APIs, these are stored in the cluster state and distributed to all nodes in the cluster. You therefore just set up each user/role once.
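For example, a user created once through the 6.2 security API becomes available on every node. A sketch with curl (the host IP, user name, and passwords below are placeholders; substitute your own, and use the elastic password printed by setup-passwords):

```
# Create a user once, against any one node; the cluster state replicates
# it, so the same credentials then work on both nodes.
curl -u elastic:<elastic-password> -H 'Content-Type: application/json' \
  -X POST 'http://172.xx.xx.16:9200/_xpack/security/user/log_reader' \
  -d '{"password": "a-strong-password", "roles": ["kibana_user"]}'
```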

Step 1: Install Elasticsearch on the master and data nodes
Step 2: Install X-Pack on the master node
Step 3: Master node - restart Elasticsearch and set up the X-Pack passwords

D:\ELK\elasticsearch-6.2.3\bin\x-pack>setup-passwords auto
Initiating the setup of passwords for reserved users elastic,kibana,logstash_system.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y
Connection failure to: http://34.xx.xx.xx:9200/_xpack/security/user/kibana/_password?pretty failed: Read timed out 
ERROR: Failed to set password for user [kibana].

Step 4: Data node - restart Elasticsearch and install X-Pack, restart Elasticsearch again and set up the passwords

D:\ELK\elasticsearch-6.2.3\bin\x-pack>setup-passwords auto
Initiating the setup of passwords for reserved users elastic,kibana,logstash_system.

Changed password for user kibana
PASSWORD kibana = LIXWOpQvVXRAsSjwe8ob

Changed password for user logstash_system
PASSWORD logstash_system = dJLYCTi9ZrHT34azxmms

Changed password for user elastic
PASSWORD elastic = C7hOKNU4Kol1hGc0DmRj

Referred from here, points 1 to 4. How do I fix the step 3 issue?

You don't run setup-passwords on each node individually; you run it once for the whole cluster. You don't need to fix step 3, because you've already run step 4.

The reason step 3 failed is that you only had the master node online, and no data nodes. The password change cannot be processed if there are no data nodes available to store the new passwords.
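A quick way to avoid this is to confirm that both nodes have joined the cluster before running setup-passwords. A sketch with curl (placeholder IP; any node's HTTP port works):

```
# List the nodes in the cluster. The node.role column shows 'm' for
# master-eligible and 'd' for data; wait until both nodes appear
# before running setup-passwords.
curl 'http://172.xx.xx.16:9200/_cat/nodes?v'
```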

Your cluster topology is very strange, and is going to cause these sorts of problems. Why are you trying to run a dedicated master in a 2 node cluster?
