Coordinator node authentication problem

Hi,

I have a multi-node Elasticsearch installation (3 nodes that are master and data nodes at the same time) and a coordinator node with a Kibana instance on it.

I ran the setup-passwords script on a master node. I can authenticate with the generated passwords on the master and data nodes, but not on the coordinator node, so the coordinator node doesn't work. My configuration in kibana.yml is:

elasticsearch.url: "http://localhost:9200"
elasticsearch.username: "kibana"
elasticsearch.password: "generated_password"
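
As an aside, assuming Kibana 6.2 or later, the password does not have to live in kibana.yml in plain text; it can be stored in the Kibana keystore instead:

$ bin/kibana-keystore create
$ bin/kibana-keystore add elasticsearch.password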

Authentication test:

$ curl -u kibana 'http://localhost:9200/_xpack/security/_authenticate?pretty'
Enter host password for user 'kibana': generated_password
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "failed to authenticate user [kibana]",
        "header" : {
          "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "failed to authenticate user [kibana]",
    "header" : {
      "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
    }
  },
  "status" : 401
}
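
In case the generated password was simply copied wrong, it can also be reset through the change password API, authenticating as the elastic superuser (a sketch; new_password is a placeholder):

$ curl -u elastic -X PUT 'http://localhost:9200/_xpack/security/user/kibana/_password?pretty' \
  -H 'Content-Type: application/json' \
  -d '{ "password" : "new_password" }'

elasticsearch.password in kibana.yml then has to be updated to match.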

Also, on a master node:

curl -X GET "localhost:9200/_nodes"

shows the node count as 3; it doesn't include the coordinator node. Is that normal?
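
For reference, the cat nodes API shows the same information together with node roles, which makes a missing coordinating-only node easy to spot:

$ curl -u elastic 'localhost:9200/_cat/nodes?v&h=name,node.role,master'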

It sounds like the coordinating node may not have joined the cluster. That would also explain the 401: native-realm users such as kibana are stored in the .security index, and a node that has not joined the cluster cannot read it, so authentication fails there. What is the configuration of the coordinating node? Does discovery.zen.ping.unicast.hosts contain the addresses of all master-eligible nodes?

Yes, discovery.zen.ping.unicast.hosts contains the master-eligible nodes. Coordinator node config:

cluster.name: elkcluster
node.name: ${HOSTNAME}
discovery.zen.ping.unicast.hosts: ["master01", "master02", "master03"]
http.port: 9200
transport.tcp.port: 9300
bootstrap.memory_lock: true
node.master: false
node.data: false
node.ingest: false
node.ml: false
search.remote.connect: false
path:
  data:
    - /confdir
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
xpack.security.enabled: true
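
One quick check that can be run on the coordinating node itself: if the node has not joined the cluster, a cluster-level call such as

$ curl -u elastic 'http://localhost:9200/_cluster/health?pretty'

fails with a master_not_discovered_exception (or the 401 above, since authentication itself needs the cluster state) instead of returning a health status.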

What do the configs of the other nodes look like? Is there any firewall preventing access between the nodes?

master01:

cluster.name: elkcluster
node.name: ${HOSTNAME}
discovery.zen.ping.unicast.hosts: ["master01", "master02", "master03"]
http.port: 9200
transport.tcp.port: 9300
bootstrap.memory_lock: true
node.master: true
node.data: true
search.remote.connect: false
node.ingest: false
node.ml: false
xpack.ml.enabled: true
discovery.zen.minimum_master_nodes: 2
node.attr.box_type: warm
path:
  data:
    - /data01
    - /data02
    - /data03
    - /data04
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
xpack.security.enabled: true
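
(Side note: with three master-eligible nodes, discovery.zen.minimum_master_nodes: 2 matches the quorum formula floor(3 / 2) + 1 = 2, so that setting is correct.)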

master02:

cluster.name: elkcluster
node.name: ${HOSTNAME}
discovery.zen.ping.unicast.hosts: ["master01", "master02", "master03"]
http.port: 9200
transport.tcp.port: 9300
bootstrap.memory_lock: true
node.master: true
node.data: true
search.remote.connect: false
node.ingest: false
node.ml: true
xpack.ml.enabled: true
discovery.zen.minimum_master_nodes: 2
node.attr.box_type: hot
path:
  data:
    - /data01
    - /data02
    - /data03
    - /data04
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
xpack.security.enabled: true

master03:

cluster.name: elkcluster
node.name: ${HOSTNAME}
discovery.zen.ping.unicast.hosts: ["master01", "master02", "master03"]
http.port: 9200
transport.tcp.port: 9300
bootstrap.memory_lock: true
node.master: true
node.data: true
search.remote.connect: false
node.ingest: false
node.ml: true
xpack.ml.enabled: true
discovery.zen.minimum_master_nodes: 2
node.attr.box_type: hot
path:
  data:
    - /data01
    - /data02
    - /data03
    - /data04
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
xpack.security.enabled: true
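
Aside: the node.attr.box_type hot/warm attributes are unrelated to the join problem; they only steer shard allocation once an index requires them, e.g. with something like the following, where myindex is a hypothetical index name:

$ curl -u elastic -X PUT 'localhost:9200/myindex/_settings' \
  -H 'Content-Type: application/json' \
  -d '{ "index.routing.allocation.require.box_type": "hot" }'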

They are all in the same subnet; there is no firewall issue.

As far as I can see that looks fine. Can you try telnetting to port 9300 on the master nodes from the coordinating node?
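
For example, from the coordinating node:

$ telnet master01 9300
$ telnet master02 9300
$ telnet master03 9300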

Yes, telnet works. Is there any cluster join procedure in Elasticsearch, or is the config in the yml file enough?

The problem was a second, non-routable interface on the fourth node. It is fixed now. Thanks.
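
For anyone who lands here with the same symptom: with network.host: 0.0.0.0 the node binds to all interfaces and may publish an address on the wrong one. Pinning the publish address to the routable interface avoids this; a sketch, where 10.0.0.14 is a hypothetical address:

network.host: 0.0.0.0
network.publish_host: 10.0.0.14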
