https://community.bitnami.com/t/problem-to-login-to-elk-stack-at-aws/49257/13

Hi, I have been trying for weeks to get one Elasticsearch cluster master, built on the Bitnami ELK Stack, and two data nodes, built on Bitnami's Elasticsearch AMI (AWS machine image), to join as a cluster. I posted the link above so you can see the issues I have been having. Please feel free to scroll down to where Support first responded to my questions.

I first tried the documentation on elastic.co for building a cluster, and Bitnami's documentation too. I ran the elasticsearch.yml for the cluster master and the cluster data nodes through a YAML linter until I could get the nodes to at least boot, with some modifications to the elasticsearch.yml.

Here is the elasticsearch.yml for the initial_master:

```yaml
http:
  port: 9200
path:
  data: /bitnami/elasticsearch/data
transport:
  tcp:
    port: 9300
action:
  destructive_requires_name: true
network:
  host: ["127.0.0.1", "172.NN.NN.NN"]
  publish_host: ["127.0.0.1", "172.NN.NN.NN"]
  bind_host: ["127.0.0.1", "172.31.80.99"]
cluster:
  name: bnCluster
  # The private network AWS hostname within the VPC they all reside in.
  # I opened the ICMP ports and they can all ping each other.
  initial_master_nodes: ip-172-NN-NN-NN
node:
  name: ip-172-NN-NN-NN
  master: true
  data: true
  ingest: false
discovery:
  # The initial_cluster_master and the two freshly deployed default Elasticsearch data nodes.
  seed_hosts: ["ip-172--NN-NN", "172.NN.NN.NN", "172.NN.NN.NN"]
  initial_state_timeout: 5m
gateway:
  recover_after_nodes: 1
  expected_nodes: 1
xpack:
  ml:
    enabled: false  # Bitnami instructions say to turn this off when getting the nodes to join
```
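From my reading of the Elastic docs (and I could be wrong here, which is why I'm asking), `cluster.initial_master_nodes` should list node names that exactly match each master-eligible node's `node.name`, and `discovery.seed_hosts` should point at the transport (9300) addresses of the other nodes. Here is a trimmed-down sketch of how I think the master config is supposed to fit together; all the names and addresses are placeholders:

```yaml
# Minimal master-eligible node sketch (my reading of the docs; placeholders kept).
cluster:
  name: bnCluster
  # Entries must exactly match the node.name of each master-eligible node,
  # and the same list should be set on every node for the first bootstrap.
  initial_master_nodes: ["ip-172-NN-NN-NN"]
node:
  name: ip-172-NN-NN-NN
  master: true
  data: true
  ingest: false
network:
  host: ["127.0.0.1", "172.NN.NN.NN"]
discovery:
  # Transport (port 9300) addresses of the other nodes in the cluster.
  seed_hosts: ["172.NN.NN.NN", "172.NN.NN.NN"]
```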

And here is the elasticsearch.yml on one of the two Bitnami Elasticsearch data nodes:

```yaml
http:
  port: 9200
path:
  data: /bitnami/elasticsearch/data
transport:
  tcp:
    port: 9300
action:
  destructive_requires_name: true
network:
  host: ["127.0.0.1", "172.NN.NN.NN"]
  publish_host: ["127.0.0.1", "172.NN.NN.NN"]
  bind_host: ["127.0.0.1", "172.NN.NN.NN"]
cluster:
  name: bnCluster
  initial_master_nodes: ["ip-172-NN-NN-NN1", "172.NN.NN.NN", "172.NN.NN.NN"]
  # The $ES_HOME/data/nodes subdirectories were deleted after the first login
  # to start fresh, as per Bitnami directions.
node:
  name: ip-172-NN-NN-NN
  master: true
  data: true
  ingest: false
discovery:
  seed_hosts: ["ip-172-NN-NN-NN", "172.NN.NN.NN", "172.NN.NN.NN"]
  initial_state_timeout: 5m
gateway:
  recover_after_nodes: 1
  expected_nodes: 1
xpack:
  ml:
    enabled: false
```
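One thing I notice while re-reading this: the `initial_master_nodes` list here mixes a node name with raw IPs and doesn't match what I set on the master. My understanding is that this list must be identical on every node and contain only the master-eligible node names. Here is a trimmed-down sketch of what I think a data node should carry (placeholders again; `node.master: false` assumes these two are meant to be data-only nodes):

```yaml
# Minimal data-node sketch (again, my reading of the docs; placeholders kept).
cluster:
  name: bnCluster
  # Same list as on the master: node names only, identical on every node.
  initial_master_nodes: ["ip-172-NN-NN-NN"]
node:
  name: ip-172-NN-NN-NN1   # hypothetical distinct name for this data node
  master: false            # assumes data-only; set true to make it master-eligible too
  data: true
  ingest: false
discovery:
  # Transport (port 9300) addresses of the master and the other data node.
  seed_hosts: ["172.NN.NN.NN", "172.NN.NN.NN", "172.NN.NN.NN"]
```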

The Elasticsearch and Kibana servers were stopped before editing the files, and started again afterward.

This is a default stock image with no changes other than these edits.

The AWS security group ports are all open between my internet IP and ports 22, 9200, 9300, and 5601, and these hosts are all within AWS on private subnets with all ports open between them. The proof of this is that all deployments work standalone; they just won't cluster.

Thanks, Rhino

Welcome to our community! :smiley:

Please format your code/logs/config using the </> button, or markdown-style backticks. It helps to make things easy to read, which helps us help you :slight_smile:
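For example, wrapping a snippet in triple backticks gives you a nicely rendered block:

````
```yaml
http:
  port: 9200
```
````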

Mark, thanks for your input; I will be glad to comply. Just so I'm clear, are you saying to put "<" and "/>" around logs so that they are easier to separate from my words? I think so.

Just a word to the wise. I was having that "Kibana not ready yet" error, and fortunately I didn't delete all my indices. The issue was specifically opening port 5601 to the private IPs of my nodes. I did this on a wild hunch, but it doesn't seem like it should be necessary to me: if my internet address is allowed (by the SG) to access port 5601, it seems to me that the Elasticsearch stack should be able to reach port 9200 inside the server via the system bus. This problem ("Kibana not ready yet") began when I added the two nodes to the desired cluster master, perhaps because Kibana is aware that there are other nodes out there, even if I didn't specify them in kibana.yml. Does that make sense, Mark?
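For reference, here is a minimal sketch of the relevant kibana.yml settings as I understand them (the address is a placeholder). Kibana reaches Elasticsearch over HTTP on 9200 through `elasticsearch.hosts`, so whatever is listed there has to be reachable from the Kibana host, which would explain why those private IPs mattered once the cluster had more than one node:

```yaml
# kibana.yml sketch (my understanding; address is a placeholder).
server:
  port: 5601
  host: "0.0.0.0"
elasticsearch:
  # Kibana talks to Elasticsearch over HTTP on 9200; every host listed
  # here must be reachable from the machine running Kibana.
  hosts: ["http://172.NN.NN.NN:9200"]
```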

Cheers, Rhino
