Cluster not forming with multiple master-eligible nodes; each node forms a 1-node cluster with itself as master

I have a three-node setup running OSS 7.8.1 on CentOS 7 with SELinux disabled.

I'm just trying to do a very basic initial setup, as this is my first time playing with it. I've been reading various walkthroughs as well as the documentation and am not sure what I'm missing.

I set up 3 nodes identically with Puppet and made each of them master-eligible. When the services start, I curl the API to check the cluster status, and each node shows only 1 node in the cluster, with itself as the master.

Here is my config. The servers use DNS, and I've tested resolution of the hostnames. I've changed the hostnames for the purpose of sharing online.

### MANAGED BY PUPPET ###
---
cluster.initial_master_nodes:
- elastic-host01
- elastic-host02
- elastic-host03
cluster.name: elasticsearch-development
discovery.seed_hosts:
- elastic-host01
- elastic-host02
- elastic-host03
network.host: 0.0.0.0
node.master: true
node.data: true
node.name: elastic-host01-elastic-host01
path.data: "/var/lib/elasticsearch/elastic-host01"
path.logs: "/var/log/elasticsearch/elastic-host01"

The same config is on each host.
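Before digging further, it may be worth confirming from each host that the other nodes resolve in DNS and are reachable on the transport port. This is only a diagnostic sketch: it assumes the default transport port 9300 (nothing in the config above overrides it) and that `nc` (netcat) is installed.

```shell
#!/bin/sh
# Sketch: check DNS resolution and transport-port reachability from this host.
# Assumes the default transport port 9300 and the hostnames from the config.
for host in elastic-host01 elastic-host02 elastic-host03; do
    getent hosts "$host" || echo "DNS lookup failed for $host"
    if nc -z -w 3 "$host" 9300; then
        echo "$host:9300 reachable"
    else
        echo "$host:9300 NOT reachable"
    fi
done
```

If any node can't reach the others on 9300, discovery can't work regardless of the YAML settings.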

Here is the curl output from one of my hosts:

curl -XGET 'http://localhost:9200/_cluster/state?pretty'
{
  "cluster_name" : "elasticsearch-development",
  "cluster_uuid" : "TNNUNFFZQfaFNqbMjyJw2g",
  "version" : 21,
  "state_uuid" : "xuOwF_C9Rk6zolv965arww",
  "master_node" : "1K7aq5XdRf6-P30-Veo7Qg",
  "blocks" : { },
  "nodes" : {
    "1K7aq5XdRf6-P30-Veo7Qg" : {
      "name" : "elastic-host01-elastic-host01",
      "ephemeral_id" : "KoWtPrFYR2iwCRGzgz8ruw",
      "transport_address" : "192.168.5.130:9300",
      "attributes" : { }
    }
  },
  "metadata" : {
    "cluster_uuid" : "TNNUNFFZQfaFNqbMjyJw2g",
    "cluster_uuid_committed" : true,
    "cluster_coordination" : {
      "term" : 12,
      "last_committed_config" : [
        "1K7aq5XdRf6-P30-Veo7Qg"
      ],
      "last_accepted_config" : [
        "1K7aq5XdRf6-P30-Veo7Qg"
      ],
      "voting_config_exclusions" : [ ]
    },
    "templates" : { },
    "indices" : { },
    "index-graveyard" : {
      "tombstones" : [ ]
    }
  },
  "routing_table" : {
    "indices" : { }
  },
  "routing_nodes" : {
    "unassigned" : [ ],
    "nodes" : {
      "1K7aq5XdRf6-P30-Veo7Qg" : [ ]
    }
  }
}
curl -XGET 'http://localhost:9200/_cat/nodes'
192.168.5.130 9 81 0 0.00 0.01 0.05 dimr * elastic-host01-elastic-host01

Any help would be greatly appreciated; I feel it's something really silly. Thank you!

You must have some data in your nodes' data paths from a previous configuration in which each was the only node in its cluster. `cluster.initial_master_nodes` only applies the very first time a cluster bootstraps; once a node has formed or joined a cluster, it remembers that cluster's UUID and ignores the setting, which is why each of your nodes keeps electing itself. Wipe the data path on each node and you should be good to go. Possibly this note in the docs is what you're looking for.
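A sketch of that reset, to be run on every node. This is destructive (it deletes all Elasticsearch data on the node), and it assumes the systemd service name `elasticsearch` and the per-host data path shown in the config above; adjust both for your setup.

```shell
#!/bin/sh
# WARNING: deletes all Elasticsearch data on this node.
# Service name and data path are assumptions based on the config in the post.
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/elastic-host01/*   # this host's path.data
sudo systemctl start elasticsearch

# Once all three nodes are back up, verify they formed a single cluster:
curl -s 'http://localhost:9200/_cat/nodes?v'
```

After the wipe, the first election should pick one master and `_cat/nodes` should list all three nodes.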

BTW don't put your logs in your data path.

Thank you for that info! I'll try that out in a moment. I must have read through that page 50 times; I don't know how I missed the line I needed. Thanks again, and I'll comment back if it solves my issue.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.