One of the nodes fails with anonymous_access_denied errors in the audit log

I have set up a 3-node Elasticsearch 6.7 cluster. When I start it up, two of the master-eligible nodes communicate just fine, but the third repeatedly fails to join and logs anonymous_access_denied events.

One of the master nodes has this in its Elasticsearch log:

[2019-04-10T15:31:22,902][WARN ][o.e.d.z.PublishClusterStateAction] [ip-172-40-2-114.dev.fama.io] publishing cluster state with version [2269] failed for the following nodes: [[{ip-172-40-3-42.dev.fama.io}{CJQnhHxgRxmBj4IBZl6zjA}{94Fo-YZuQVuEvwjHZGmQbg}{172.40.3.42}{172.40.3.42:9300}{ml.machine_memory=67532251136, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}]]
[2019-04-10T15:31:22,902][INFO ][o.e.c.s.ClusterApplierService] [ip-172-40-2-114.dev.fama.io] added {{ip-172-40-3-42.dev.fama.io}{CJQnhHxgRxmBj4IBZl6zjA}{94Fo-YZuQVuEvwjHZGmQbg}{172.40.3.42}{172.40.3.42:9300}{ml.machine_memory=67532251136, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}, reason: apply cluster state (from master [master {ip-172-40-2-114.dev.fama.io}{Bq2tw17rRa-Kf4mckSlzsQ}{SnL7wlIbTiu4BTWhc91Z0Q}{172.40.2.114}{172.40.2.114:9300}{ml.machine_memory=67532251136, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [2269] source [zen-disco-node-join[{ip-172-40-3-42.dev.fama.io}{CJQnhHxgRxmBj4IBZl6zjA}{94Fo-YZuQVuEvwjHZGmQbg}{172.40.3.42}{172.40.3.42:9300}{ml.machine_memory=67532251136, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}]]])
[2019-04-10T15:31:22,904][DEBUG][o.e.a.a.c.n.s.TransportNodesStatsAction] [ip-172-40-2-114.dev.fama.io] failed to execute on node [CJQnhHxgRxmBj4IBZl6zjA]
org.elasticsearch.transport.RemoteTransportException: [ip-172-40-3-42.dev.fama.io][172.40.3.42:9300][cluster:monitor/nodes/stats[n]]
Caused by: org.elasticsearch.ElasticsearchSecurityException: missing authentication token for action [cluster:monitor/nodes/stats[n]]
	at org.elasticsearch.xpack.core.security.support.Exceptions.authenticationError(Exceptions.java:18) ~[?:?]
	at org.elasticsearch.xpack.core.security.authc.DefaultAuthenticationFailureHandler.createAuthenticationError(DefaultAuthenticationFailureHandler.java:163) ~[?:?]
	at org.elasticsearch.xpack.core.security.authc.DefaultAuthenticationFailureHandler.missingToken(DefaultAuthenticationFailureHandler.java:118) ~[?:?]
	at org.elasticsearch.xpack.security.authc.AuthenticationService$AuditableTransportRequest.anonymousAccessDenied(AuthenticationService.java:650) ~[?:?]
	at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lambda$handleNullToken$19(AuthenticationService.java:466) ~[?:?]
	at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.handleNullToken(AuthenticationService.java:471) ~[?:?]
	at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.consumeToken(AuthenticationService.java:355) ~[?:?]
	at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lambda$extractToken$9(AuthenticationService.java:326) ~[?:?]
	at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.extractToken(AuthenticationService.java:344) ~[?:?]
	at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lambda$checkForApiKey$3(AuthenticationService.java:287) ~[?:?]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:61) ~[elasticsearch-6.7.1.jar:6.7.1]
	at org.elasticsearch.xpack.security.authc.ApiKeyService.authenticateWithApiKeyIfPresent(ApiKeyService.java:342) ~[?:?]
	at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.checkForApiKey(AuthenticationService.java:268) ~[?:?]
...
[2019-04-10T15:31:22,908][INFO ][o.e.c.s.MasterService    ] [ip-172-40-2-114.dev.fama.io] zen-disco-node-failed({ip-172-40-3-42.dev.fama.io}{CJQnhHxgRxmBj4IBZl6zjA}{94Fo-YZuQVuEvwjHZGmQbg}{172.40.3.42}{172.40.3.42:9300}{ml.machine_memory=67532251136, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}), reason(failed to ping, tried [3] times, each with maximum [30s] timeout)[{ip-172-40-3-42.dev.fama.io}{CJQnhHxgRxmBj4IBZl6zjA}{94Fo-YZuQVuEvwjHZGmQbg}{172.40.3.42}{172.40.3.42:9300}{ml.machine_memory=67532251136, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} failed to ping, tried [3] times, each with maximum [30s] timeout], reason: removed {{ip-172-40-3-42.dev.fama.io}{CJQnhHxgRxmBj4IBZl6zjA}{94Fo-YZuQVuEvwjHZGmQbg}{172.40.3.42}{172.40.3.42:9300}{ml.machine_memory=67532251136, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}
[2019-04-10T15:31:22,910][INFO ][o.e.c.s.ClusterApplierService] [ip-172-40-2-114.dev.fama.io] removed {{ip-172-40-3-42.dev.fama.io}{CJQnhHxgRxmBj4IBZl6zjA}{94Fo-YZuQVuEvwjHZGmQbg}{172.40.3.42}{172.40.3.42:9300}{ml.machine_memory=67532251136, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}, reason: apply cluster state (from master [master {ip-172-40-2-114.dev.fama.io}{Bq2tw17rRa-Kf4mckSlzsQ}{SnL7wlIbTiu4BTWhc91Z0Q}{172.40.2.114}{172.40.2.114:9300}{ml.machine_memory=67532251136, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [2270] source [zen-disco-node-failed({ip-172-40-3-42.dev.fama.io}{CJQnhHxgRxmBj4IBZl6zjA}{94Fo-YZuQVuEvwjHZGmQbg}{172.40.3.42}{172.40.3.42:9300}{ml.machine_memory=67532251136, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}), reason(failed to ping, tried [3] times, each with maximum [30s] timeout)[{ip-172-40-3-42.dev.fama.io}{CJQnhHxgRxmBj4IBZl6zjA}{94Fo-YZuQVuEvwjHZGmQbg}{172.40.3.42}{172.40.3.42:9300}{ml.machine_memory=67532251136, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} failed to ping, tried [3] times, each with maximum [30s] timeout]]])

Meanwhile, the failing node itself started up, but its audit log shows:

{"@timestamp":"2019-04-10T15:32:35,754", "node.name":"ip-172-40-3-42.dev.fama.io", "event.type":"transport", "event.action":"anonymous_access_denied", "origin.type":"transport", "origin.address":"172.40.2.114:39374", "request.id":"o7BiMBWtT5uZf8SN-M7aPA", "action":"cluster:monitor/nodes/stats[n]", "request.name":"NodeStatsRequest"}
{"@timestamp":"2019-04-10T15:32:35,756", "node.name":"ip-172-40-3-42.dev.fama.io", "event.type":"transport", "event.action":"anonymous_access_denied", "origin.type":"transport", "origin.address":"172.40.2.114:39366", "request.id":"aX5BSWGzTLm8iHzCa5ua7A", "action":"internal:discovery/zen/fd/ping", "request.name":"PingRequest"}
{"@timestamp":"2019-04-10T15:32:35,757", "node.name":"ip-172-40-3-42.dev.fama.io", "event.type":"transport", "event.action":"anonymous_access_denied", "origin.type":"transport", "origin.address":"172.40.2.114:39366", "request.id":"crbpkrXuRPumbjI0g9jfgQ", "action":"internal:discovery/zen/fd/ping", "request.name":"PingRequest"}

The configuration on all three nodes is exactly the same, since I used a Terraform script to build them as an auto-scaling group in AWS. Any ideas?

Thanks.

From the limited info available, it looks like you are running with a license level ("basic") that doesn't include security, but have a configuration option that is trying to enable security.
What happens then is:

  • All nodes start up with security enabled
  • When you get enough nodes to form a quorum, they bring up the cluster state, and discover (or generate) a basic license
  • Because that license does not permit security, they disable security
  • Nodes that were not in that initial quorum will not know that security has been disabled due to licensing, and will expect to be joining a cluster that has security enabled.
  • This fails (see the config sketch below).
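
For illustration, the failure mode usually comes from a combination like the sketch below. This is a hypothetical elasticsearch.yml fragment, not your actual config:

# Hypothetical elasticsearch.yml fragment (not the poster's actual file).
# Security is switched on explicitly, but the license the cluster
# self-generates on first start is "basic", and on 6.7 a basic license
# does not include security (that changed in 6.8).
xpack.security.enabled: true
xpack.license.self_generated.type: basic   # the default; "trial" would permit security

On 6.7 the practical options are to run with a trial or paid license and keep security on, or to stay on basic with security off; from 6.8 onwards the basic license includes security.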

If you can provide your elasticsearch.yml, we can pinpoint why that's happening.


Yes, that did indeed seem to be the case. Once I disabled security, the cluster behaved as expected. I will revisit this configuration if/when we upgrade our license.
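
For anyone who hits the same thing: the change was essentially one line in elasticsearch.yml, applied to all three nodes and followed by a restart (a sketch of the relevant fragment only, not my full file):

# Relevant elasticsearch.yml fragment (sketch, not the full file).
# With only a basic license on 6.7, security has to stay off.
xpack.security.enabled: false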

Thanks.
