ElasticSearch 7.3 ClusterFormationFailureHelper master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes

I'm trying to set up two nodes on a single server, running from the same installation. I've been basing the configuration on how our Elasticsearch 5.x server was set up, adapted for the most recent version. When I start the instances, both log the following error:

[2019-09-16T08:21:12,909][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1, node-2] to bootstrap a cluster: have discovered [{node-1}{p74rYkGIQx2_-cQwNJtrqA}{SMKpWfxmQQ-GtCJtk5rNIw}{localhost}{127.0.0.1:9300}{dim}{ml.machine_memory=16724197376, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [*ipAddress*:9300, *ipAddress*:9302] from hosts providers and [{node-1}{p74rYkGIQx2_-cQwNJtrqA}{SMKpWfxmQQ-GtCJtk5rNIw}{localhost}{127.0.0.1:9300}{dim}{ml.machine_memory=16724197376, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
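One detail worth noticing in that log line: the node has only discovered itself, published at 127.0.0.1:9300, while discovery keeps probing *ipAddress*:9300 and *ipAddress*:9302. A quick way to check whether anything is actually listening at the addresses discovery is probing (assuming netcat is installed; substitute the real IP):

# probe the two transport ports discovery is dialing
nc -vz *ipAddress* 9300
nc -vz *ipAddress* 9302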

elasticsearch.yml is configured as shown below (I've removed the parts that reference my server specifically and italicized the placeholders):

cluster.name: ${ES_CLUSTER}
node.name: ${ES_NODE_NAME}
node.master: ${MASTER_NODE_STATUS}
node.data: ${DATA_NODE_STATUS}
path.data: ${ES_WORK_PATH}
path.logs: ${ES_LOG_PATH}
network.host: *ipAddress*
http.port: ${ES_HTTP_PORT}
discovery.zen.ping.unicast.hosts: ${ES_CLUSTER_HOSTS}
cluster.initial_master_nodes: *["node-1", "node-2"]*
transport.host: localhost
transport.tcp.port: ${ES_TRANSPORT_PORT}
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
node.max_local_storage_nodes: 2
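For comparison, several of these settings are holdovers from 5.x: discovery.zen.ping.unicast.hosts was replaced by discovery.seed_hosts in 7.x, and discovery.zen.minimum_master_nodes is ignored because 7.x manages the voting quorum itself. Also, with transport.host set to localhost the node publishes itself as 127.0.0.1:9300, so seed hosts pointing at the external IP will never match it. A minimal sketch of just the discovery-related settings for 7.x (node names and ports copied from above; addresses are placeholders):

network.host: *ipAddress*
transport.host: localhost             # publish address becomes 127.0.0.1
transport.tcp.port: 9300              # 9302 on the second node
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:9302"]   # replaces discovery.zen.ping.unicast.hosts
cluster.initial_master_nodes: ["node-1", "node-2"]           # only consulted on first bootstrap
# discovery.zen.minimum_master_nodes is ignored in 7.x and can be removed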

In my elasticsearch startup script, I added a command to run a setenv script, which is included below:

#!/bin/bash
# bash, not sh: the ${BASH_SOURCE[0]} expansion below is a bashism

HOST=$(hostname)

SCRIPT_PATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

# expects 7 arguments
# $1 == environment
# $2 == type (primary/backup)
# $3 == node_type (master/data)
# $4 == name
# $5 == transport_port
# $6 == http_port
# $7 == ssl_port

export ENV=$1
export TYPE=$2
export NODE_TYPE=$3
export ES_NAME=$4
export ES_TRANSPORT_PORT=$5
export ES_HTTP_PORT=$6
export ES_SSL_PORT=$7

# JAVA_HOME should be the JDK install root, not /usr/bin,
# so that $JAVA_HOME/bin on the next line resolves correctly
export JAVA_HOME=/usr/lib/jvm/java   # placeholder; adjust to the actual JDK path

export PATH=$PATH:$JAVA_HOME/bin
export JAVA_OPTS=-server
export ES_NODE_NAME=$HOST-$ES_NAME
export ES_DATA_PATH=/data/software/searchdata/
export ES_WORK_PATH=${ES_DATA_PATH}tmp/
export ES_LOG_PATH=/data/logs/elasticsearch
export ES_CLUSTER=clusterName

# ES_HEAP_SIZE is no longer read by 7.x; set the heap via jvm.options or ES_JAVA_OPTS
export ES_JAVA_OPTS="-Xms8g -Xmx8g"

export ES_CLUSTER_HOSTS=*ipAddress*:9300,*ipAddress*:9302

export MASTER_NODE_STATUS="true"
export DATA_NODE_STATUS="true"

The arguments I give when starting separate instances are:
node-1: prod primary master 1 9300 9200 9202
node-2: prod primary data 2 9302 9204 9205
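The script is sourced with the dot command, rather than executed, so the exports are visible to the shell that launches Elasticsearch. Roughly (assuming the file is named setenv.sh):

. ./setenv.sh prod primary master 1 9300 9200 9202   # node-1
. ./setenv.sh prod primary data 2 9302 9204 9205     # node-2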

What am I missing, or what do I need to change?

OK. I fixed my original problem by changing ES_CLUSTER_HOSTS from the server IP address to 127.0.0.1:portNumber, which lines up with transport.host being set to localhost: the nodes publish themselves on 127.0.0.1, so discovery had to probe that address rather than the external IP. Both nodes have started up, but I have lost the settings I had on the index for the number of shards and for rolling over and deleting the index. Is there anywhere I can set those in Elasticsearch or Logstash, or do I just need to recreate them in Kibana again?
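For reference, both of those live server-side rather than in Kibana: the shard count goes in an index template and the rollover/delete schedule in an ILM (index lifecycle management) policy, and both can be re-created through the REST API. A minimal sketch (the policy name, index pattern, alias, and timings below are placeholders, not the original settings):

# ILM policy: roll over the write index, delete old indices
curl -X PUT 'http://localhost:9200/_ilm/policy/my_policy' -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot":    { "actions": { "rollover": { "max_age": "30d", "max_size": "50gb" } } },
      "delete": { "min_age": "60d", "actions": { "delete": {} } }
    }
  }
}'

# legacy index template (7.3 predates composable templates): shard count plus ILM wiring
curl -X PUT 'http://localhost:9200/_template/my_template' -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["my-index-*"],
  "settings": {
    "index.number_of_shards": 2,
    "index.lifecycle.name": "my_policy",
    "index.lifecycle.rollover_alias": "my-index"
  }
}'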
