User Authentication and xpack problem starting Elasticsearch

I have rebooted my cluster, but when I try to cat my nodes I get the following error:

{"error":{"root_cause":[{"type":"security_exception","reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}

Additionally, on reboot I keep getting the following in my logs (note: ps -ef shows that the ES process is running):

[2016-10-13T16:35:45,080][ERROR][o.e.x.m.AgentService     ] [v5rc1@bb16] exception when exporting documents
org.elasticsearch.xpack.monitoring.exporter.ExportException: exporters are either not ready or faulty
    at org.elasticsearch.xpack.monitoring.exporter.Exporters.export(Exporters.java:188) ~[x-pack-5.0.0-rc1.jar:5.0.0-rc1]
    at org.elasticsearch.xpack.monitoring.AgentService$ExportingWorker.run(AgentService.java:208) [x-pack-5.0.0-rc1.jar:5.0.0-rc1]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]

I am under the impression that this is a license issue where the license needs to be reloaded. What would you advise as the best path forward from here?


I also just hit this upon deploying via Kubernetes, hosted on AWS.

I am getting the same exception running a two-node ES cluster in a dev environment after I installed X-Pack on one node.

I'm new to ES deployments, so I'm testing with ES 5.0.0-rc1 on my machine and another developer's Windows machine.
Both nodes were seeing each other nicely until I installed X-Pack. Now they appear to be running, but they no longer report as being in the same cluster.

I'm not sure it's user-authentication related, though, as I have configured xpack.security.enabled: false. The node with X-Pack installed shows the following in the log:

[2016-10-19T14:33:54,845][INFO ][o.e.b.BootstrapCheck     ] [Danny-Desktop-ES-5.0] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2016-10-19T14:34:04,719][INFO ][o.e.x.m.e.Exporters      ] [Danny-Desktop-ES-5.0] skipping exporter [default_local] as it isn't ready yet
[2016-10-19T14:34:04,719][ERROR][o.e.x.m.AgentService     ] [Danny-Desktop-ES-5.0] exception when exporting documents
org.elasticsearch.xpack.monitoring.exporter.ExportException: exporters are either not ready or faulty
    at org.elasticsearch.xpack.monitoring.exporter.Exporters.export(Exporters.java:188) ~[x-pack-5.0.0-rc1.jar:5.0.0-rc1]
    at org.elasticsearch.xpack.monitoring.AgentService$ExportingWorker.run(AgentService.java:208) [x-pack-5.0.0-rc1.jar:5.0.0-rc1]
    at java.lang.Thread.run(Unknown Source) [?:1.8.0_102]

Has anyone made any progress on this?

regards,
Danny

The solution for us had to do with the RHEL 7 firewall. You need to make sure that the port/protocol is not being blocked. In our case, we checked with FirewallD:

firewall-cmd --zone=<zone> --query-port=<port>/<protocol>

For example:
firewall-cmd --zone=private --query-port=9305/tcp

From here you can either create a new rule for the zone or turn off FirewallD.
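If the query reports the port is blocked, one way to allow it is with firewall-cmd (a sketch, assuming the private zone and port 9305/tcp from the example above; adjust to your zone and transport port):

```shell
# permanently allow the transport port in the relevant zone
firewall-cmd --zone=private --permanent --add-port=9305/tcp

# reload so the permanent rule takes effect
firewall-cmd --reload
```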

OK, I have started again with two nodes running locally.
Two ES 5.0.0-rc1 instances in separate folders, with http.port: 9201 configured in the second along with host:9301 in the unicast discovery config.
This works fine: the two nodes find each other and form a cluster. So surely this rules out firewall and port issues?
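For reference, a minimal elasticsearch.yml for the second local node might look like this (cluster/node names are placeholders, and the first node is assumed to be on the default transport port 9300):

```yaml
# second node in a two-node local dev cluster (names are placeholders)
cluster.name: my-dev-cluster
node.name: node-2
http.port: 9201
transport.tcp.port: 9301
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300"]
```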

The problem starts when I install X-Pack on one node and restart it.
The log file reports that the node detects the master and the other node and has started... but immediately afterwards it reports master_left with reason "failed to ping, tried [3] times"... then I see the exporter warnings and exceptions again and the node fails to start.

[2016-10-20T08:36:29,219][INFO ][o.e.c.s.ClusterService   ] [Danny-Desktop-ES-5.0-node2] detected_master {Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300}, added {{Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300},}, reason: zen-disco-receive(from master [master {Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300} committed version [5]])
[2016-10-20T08:36:29,296][INFO ][o.e.h.HttpServer         ] [Danny-Desktop-ES-5.0-node2] publish_address {10.0.10.90:9201}, bound_addresses {10.0.10.90:9201}
[2016-10-20T08:36:29,296][INFO ][o.e.n.Node               ] [Danny-Desktop-ES-5.0-node2] started
[2016-10-20T08:36:30,299][INFO ][o.e.d.z.ZenDiscovery     ] [Danny-Desktop-ES-5.0-node2] master_left [{Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300}], reason [failed to ping, tried [3] times, each with  maximum [30s] timeout]
[2016-10-20T08:36:30,300][WARN ][o.e.d.z.ZenDiscovery     ] [Danny-Desktop-ES-5.0-node2] master left (reason = failed to ping, tried [3] times, each with  maximum [30s] timeout), current nodes: {{Danny-Desktop-ES-5.0-node2}{mS8oyd-gQLWoOWkW77VYhQ}{WU_yxRiJT_SIj_Yl-wAFSg}{10.0.10.90}{10.0.10.90:9301},}
[2016-10-20T08:36:30,301][INFO ][o.e.c.s.ClusterService   ] [Danny-Desktop-ES-5.0-node2] removed {{Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300},}, reason: master_failed ({Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300})
[2016-10-20T08:36:33,355][INFO ][o.e.c.s.ClusterService   ] [Danny-Desktop-ES-5.0-node2] detected_master {Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300}, added {{Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300},}, reason: zen-disco-receive(from master [master {Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300} committed version [7]])
[2016-10-20T08:36:34,379][INFO ][o.e.d.z.ZenDiscovery     ] [Danny-Desktop-ES-5.0-node2] master_left [{Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300}], reason [failed to ping, tried [3] times, each with  maximum [30s] timeout]
[2016-10-20T08:36:34,380][WARN ][o.e.d.z.ZenDiscovery     ] [Danny-Desktop-ES-5.0-node2] master left (reason = failed to ping, tried [3] times, each with  maximum [30s] timeout), current nodes: {{Danny-Desktop-ES-5.0-node2}{mS8oyd-gQLWoOWkW77VYhQ}{WU_yxRiJT_SIj_Yl-wAFSg}{10.0.10.90}{10.0.10.90:9301},}
[2016-10-20T08:36:34,380][INFO ][o.e.c.s.ClusterService   ] [Danny-Desktop-ES-5.0-node2] removed {{Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300},}, reason: master_failed ({Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300})
[2016-10-20T08:36:35,953][INFO ][o.e.x.m.e.Exporters      ] [Danny-Desktop-ES-5.0-node2] skipping exporter [default_local] as it isn't ready yet
[2016-10-20T08:36:35,953][ERROR][o.e.x.m.AgentService     ] [Danny-Desktop-ES-5.0-node2] exception when exporting documents
org.elasticsearch.xpack.monitoring.exporter.ExportException: exporters are either not ready or faulty
        at org.elasticsearch.xpack.monitoring.exporter.Exporters.export(Exporters.java:188) ~[x-pack-5.0.0-rc1.jar:5.0.0-rc1]
        at org.elasticsearch.xpack.monitoring.AgentService$ExportingWorker.run(AgentService.java:208) [x-pack-5.0.0-rc1.jar:5.0.0-rc1]
        at java.lang.Thread.run(Unknown Source) [?:1.8.0_102]

I also tried bringing down both nodes, installing X-Pack on both, then bringing them back up, and it worked.
Is it possible to push out X-Pack in a rolling deployment, or does it require a whole-cluster restart?

X-Pack installation requires a full cluster restart.
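So the procedure is: stop everything, install on every node, then start everything. A sketch for a zip/tar.gz install (assuming $ES_HOME points at the install directory; paths differ for package installs):

```shell
# 1. Stop Elasticsearch on EVERY node in the cluster first.

# 2. On each node, install the plugin:
$ES_HOME/bin/elasticsearch-plugin install x-pack

# 3. Start the nodes again only once all of them have X-Pack installed.
```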


Running into the same issue. X-Pack enables basic authentication, so you have to configure your curl commands to include credentials:

curl -u elastic -XGET 'http://localhost:9200/_cat/health?v'

The default password is changeme. There's a doc on changing passwords:

https://www.elastic.co/guide/en/x-pack/current/security-getting-started.html
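You can also change the password over the REST change-password API rather than through the UI; a sketch (the new password is a placeholder, and the cluster is assumed to be on localhost:9200):

```shell
# change the elastic user's password (replace new_password_here)
curl -u elastic:changeme -XPUT 'http://localhost:9200/_xpack/security/user/elastic/_password' \
  -d '{ "password": "new_password_here" }'
```

After this, use the new password in the -u flag for subsequent requests.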