Trouble with HTTPS configuration

I know, it was my fault: I copied the line from the guide but didn't change the name to match my own file.

This is the config of all nodes:

Master Config YML:

cluster.name: elasticppo
node.name: ${HOSTNAME}
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: ["127.0.0.1","192.168.1.142"]
discovery.zen.ping.unicast.hosts: ["192.168.1.142","192.168.1.144","192.168.1.146"]
discovery.zen.minimum_master_nodes: 2
xpack.security.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elasticmaster.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elasticmaster.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elasticmaster.p12 
xpack.security.http.ssl.truststore.path: certs/elasticmaster.p12 
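One way to rule out a corrupt or wrongly-passworded keystore is to open the .p12 with openssl. The sketch below is self-contained (it builds a throwaway keystore first, and the password changeme is a placeholder); on a real node you would point -in at /etc/elasticsearch/certs/elasticmaster.p12 instead.

```shell
# Sketch: verify a PKCS#12 keystore opens with the expected password.
# (Throwaway cert + keystore here; the password "changeme" is a placeholder.)
set -e
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -days 1 -nodes \
  -keyout "$tmp/node.key" -out "$tmp/node.crt" \
  -subj "/CN=elasticmaster" 2>/dev/null
openssl pkcs12 -export -inkey "$tmp/node.key" -in "$tmp/node.crt" \
  -out "$tmp/node.p12" -passout pass:changeme
# The actual check: -noout parses the file and verifies the password.
result=$(openssl pkcs12 -in "$tmp/node.p12" -info -noout \
  -passin pass:changeme 2>/dev/null && echo "keystore OK")
printf '%s\n' "$result"
rm -rf "$tmp"
```

On a node the equivalent one-liner is `openssl pkcs12 -in /etc/elasticsearch/certs/elasticmaster.p12 -info -noout`, entering the keystore password when prompted.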

Slave 1 Config YML:

cluster.name: elasticppo
node.name: ${HOSTNAME}
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: ["127.0.0.1","192.168.1.144"]
discovery.zen.ping.unicast.hosts: ["192.168.1.142","192.168.1.144","192.168.1.146"]
discovery.zen.minimum_master_nodes: 2
xpack.security.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elasticslave1.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elasticslave1.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elasticslave1.p12 
xpack.security.http.ssl.truststore.path: certs/elasticslave1.p12 

Slave 2 Config YML:

cluster.name: elasticppo
node.name: ${HOSTNAME}
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: ["127.0.0.1","192.168.1.146"]
discovery.zen.ping.unicast.hosts: ["192.168.1.142","192.168.1.144","192.168.1.146"]
discovery.zen.minimum_master_nodes: 2
xpack.security.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elasticslave2.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elasticslave2.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elasticslave2.p12 
xpack.security.http.ssl.truststore.path: certs/elasticslave2.p12 

Do you actually have this section duplicated in each node's config, or is it just a copy-paste error?

xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elasticslave2.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elasticslave2.p12
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elasticslave2.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elasticslave2.p12

Can you please SSH into every node and make sure that all your .p12 files are owned by the elasticsearch user? As in:

chown -R elasticsearch /etc/elasticsearch/certs/
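A sketch of how to verify the ownership afterwards, assuming GNU coreutils stat. The find line in the comment is what you would run on a node; the runnable demo below uses a temp dir so it is self-contained (its files can only belong to the current user):

```shell
# On a real node, list the owner of every .p12 to confirm the chown took:
#   find /etc/elasticsearch/certs -name '*.p12' -exec stat -c '%U %n' {} \;
# Self-contained demo against a temp dir:
tmp=$(mktemp -d)
touch "$tmp/elasticmaster.p12"
owner=$(stat -c '%U' "$tmp/elasticmaster.p12")   # '%U' prints the owning user
echo "owner: $owner"
rm -rf "$tmp"
```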

It was a mistake while copying the lines; already fixed.

I ran chown on all nodes, and now it seems to be working: it only accepts connections via HTTPS. But if I try
GET _cat/indices
Returns:

{
  "error": {
    "root_cause": [
      {
        "type": "master_not_discovered_exception",
        "reason": null
      }
    ],
    "type": "master_not_discovered_exception",
    "reason": null
  },
  "status": 503
}

I added node.master: true to the master node's config, but nothing changed.
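For context: master_not_discovered_exception means Zen discovery could not assemble a quorum of master-eligible nodes. With three master-eligible nodes, the discovery.zen.minimum_master_nodes: 2 in the configs above already matches the usual quorum rule, which a one-liner can confirm:

```shell
# Zen discovery (pre-7.x) quorum rule: (master-eligible nodes / 2) + 1.
nodes=3                      # this cluster has 3 master-eligible nodes
quorum=$(( nodes / 2 + 1 ))
echo "minimum_master_nodes should be $quorum"
# prints: minimum_master_nodes should be 2
```

So the setting itself is correct; the warning means fewer than two master-eligible nodes could actually be reached over port 9300.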

We can't guess what might be wrong with your configuration if you don't share your logs with us :slight_smile:

As you know, it's an extensive file; if I paste it all here I won't be able to reply again until tomorrow, so I'll share it via links:
Master Logs
Slave 2 logs
Slave 1 logs
The password to view all the files is elastic.
Thanks a lot

[2019-06-27T15:40:34,530][WARN ][o.e.d.z.ZenDiscovery     ] [elasticmaster] not enough master nodes discovered during pinging (found [[Candidate{node={elasticmaster}{RhMwLe4rQO-oKoBjUxHFSw}{LhNDr75LTVSrQex0J-OZbA}{192.168.1.142}{192.168.1.142:9300}{ml.machine_memory=8363855872, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2019-06-27T15:47:06,671][WARN ][o.e.d.z.ZenDiscovery     ] [elasticslave2] not enough master nodes discovered during pinging (found [[Candidate{node={elasticmaster}{RhMwLe4rQO-oKoBjUxHFSw}{LhNDr75LTVSrQex0J-OZbA}{192.168.1.142}{192.168.1.142:9300}{ml.machine_memory=8363855872, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2019-06-27T15:48:36,020][WARN ][o.e.d.z.ZenDiscovery     ] [elasticslave1] not enough master nodes discovered during pinging (found [[Candidate{node={elasticmaster}{RhMwLe4rQO-oKoBjUxHFSw}{LhNDr75LTVSrQex0J-OZbA}{192.168.1.142}{192.168.1.142:9300}{ml.machine_memory=8363855872, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again

Looks like your nodes can only connect to 192.168.1.142 but not to 192.168.1.144 and/or 192.168.1.146. Can you ping 192.168.1.144 and 192.168.1.146 from all nodes? Is port 9300 blocked by a firewall?

I used the ping command from every node to all the other nodes, and everything works fine. For example:
from 192.168.1.142 to 192.168.1.144, from 192.168.1.142 to 192.168.1.146, etc.

No, I don't think so: UFW or any other firewall isn't installed or even active on any of the nodes.

Now I listed the listening sockets on each node, and this is the output:

Master .142

tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1319/nginx: master  
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      735/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1178/sshd           
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      2317/node           
tcp6       0      0 192.168.1.142:9200      :::*                    LISTEN      2103/java           
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      2103/java           
tcp6       0      0 :::80                   :::*                    LISTEN      1319/nginx: master  
tcp6       0      0 192.168.1.142:9300      :::*                    LISTEN      2103/java           
tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      2103/java           
tcp6       0      0 :::22                   :::*                    LISTEN      1178/sshd 

Slave 1 .144

tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      696/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1067/sshd           
tcp6       0      0 192.168.1.144:9200      :::*                    LISTEN      1835/java           
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      1835/java           
tcp6       0      0 192.168.1.144:9300      :::*                    LISTEN      1835/java           
tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      1835/java           
tcp6       0      0 :::22                   :::*                    LISTEN      1067/sshd 

Slave 2 .146

tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      675/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1102/sshd           
tcp6       0      0 192.168.1.146:9200      :::*                    LISTEN      1856/java           
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      1856/java           
tcp6       0      0 192.168.1.146:9300      :::*                    LISTEN      1856/java           
tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      1856/java           
tcp6       0      0 :::22                   :::*                    LISTEN      1102/sshd     

Ports 9300 and 9200 are open and listening for connections from any IP, right?
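A sketch for checking transport-port reachability from each node without installing anything, using bash's /dev/tcp (the IPs are this thread's example addresses; substitute your own):

```shell
# Try a TCP connection to port 9300 on every node, with a 2-second timeout.
results=""
for host in 192.168.1.142 192.168.1.144 192.168.1.146; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/9300" 2>/dev/null; then
    status="reachable"
  else
    status="unreachable"
  fi
  results="$results$host:9300 $status
"
done
printf '%s' "$results"
```

A node that shows unreachable here while its own listing shows port 9300 in LISTEN usually points at a host firewall or a routing problem, not at Elasticsearch.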

Thanks a lot! I fixed it with the command
ufw allow 9300/tcp on Slave 1 and Slave 2. For now I'm disabling ufw on each node to work out which one needs it and why.
Now the cluster is connected via HTTPS with a username and password. Next I have to learn how to configure users, but that isn't for this post, and first I want to investigate on my own.
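If it turns out ufw does need to stay enabled, a less drastic alternative to disabling it (a sketch, assuming the cluster lives on 192.168.1.0/24) is to allow both Elasticsearch ports just for that subnet on every node:

```shell
# Allow HTTP (9200) and transport (9300) only from the cluster subnet.
sudo ufw allow from 192.168.1.0/24 to any port 9200 proto tcp
sudo ufw allow from 192.168.1.0/24 to any port 9300 proto tcp
sudo ufw status numbered   # review the resulting rules
```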

Thanks!


Hi Schenier,
Adding this property would solve the error above:
.put("xpack.security.transport.ssl.verification_mode", "certificate")
