Setting up a Cluster

I have created a small cluster of 2 nodes. I am planning on adding more.

Here is my configuration for the master node:

node.name: node-1
node.roles: ["master", "data"]
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.108.0.4
network.bind_host: _site_


xpack.security.enabled: true

xpack.security.enrollment.enabled: true

xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
cluster.initial_master_nodes: ["es1"]

http.host: 0.0.0.0

The configuration for the second node is as follows:

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch


xpack.security.enabled: true

xpack.security.enrollment.enabled: true

xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
discovery.seed_hosts: ["10.108.0.4:9300"]

http.host: 0.0.0.0

transport.host: 0.0.0.0

The issue is that port 9200 on the master is still accessible from the internet. I do not want that; I want it accessible only on the 10.108.x.x address, as that is the private IP.

es1:/etc/elasticsearch# netstat -nap | grep :9200
tcp        0      0 0.0.0.0:9200            0.0.0.0:*               LISTEN      107380/java
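For context on the netstat output above: in Elasticsearch, `http.host` overrides `network.host` for the HTTP layer, so the `http.host: 0.0.0.0` line in the master config is what makes 9200 bind on every interface. A minimal sketch of restricting HTTP to the private interface (assuming 10.108.0.4 remains the private address) would be:

```yaml
# Sketch: bind the HTTP layer to the private interface only.
# http.host takes precedence over network.host for HTTP, so
# replacing the 0.0.0.0 value keeps 9200 off the public interface.
http.host: 10.108.0.4
```

After restarting the node, netstat should then show 9200 listening on 10.108.0.4 rather than 0.0.0.0.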

The second issue is that when I look at the connections on port 9300 on my master, and at the node list, the second node appears to be connecting with its public IP rather than its private IP.
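A likely cause: with `transport.host: 0.0.0.0` on the second node, Elasticsearch auto-selects the address it advertises to the rest of the cluster, which can end up being the public one. One way to pin it to the private network would be the sketch below (10.108.0.5 is a hypothetical private IP for the second node, not taken from the post):

```yaml
# Sketch: bind and publish the transport layer on the private
# interface only (10.108.0.5 is a hypothetical private address).
transport.host: 10.108.0.5

# Alternatively, keep the bind address broad but advertise only
# the private IP to other nodes:
# network.publish_host: 10.108.0.5
```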

Question - What am I missing?

I suggest you remove your public IPs from this post (and change the password).

Thank you, I have removed all of that. I had to change the http and the transport hosts.


I got it to work now. Issues resolved!