Shield and a new node

I have an existing ES host with Shield installed. I want a new host to share the load, so I installed ES and then the Shield plugin on it. Then I:

Copied system_key to the new host.
Edited elasticsearch.yml with shield.system_key.file: /path/to/file
Edited elasticsearch.yml with the cluster name.
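For reference, the relevant elasticsearch.yml entries on the new node look something like this (the key path is the placeholder from above; the cluster name is taken from the discovery line in the log below):

```yaml
# elasticsearch.yml on the new node -- values are illustrative
cluster.name: muostats                        # must match the existing cluster
shield.system_key.file: /path/to/file         # same system_key copied from the original node
```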

The new host can telnet to the old host's IP address on port 9200.

And yet it appears the new host is not joining the cluster. Am I missing something? From my reading, it seems that with the key in place the nodes should "just join" .. unless I need to create duplicate users on each node?

[es@ris-webstats02 ~]$ /usr/share/elasticsearch/bin/elasticsearch
[2015-06-24 12:03:38,826][INFO ][node                     ] [Vulture] version[1.6.0], pid[31690], build[cdd3ac4/2015-06-09T13:36:34Z]
[2015-06-24 12:03:38,830][INFO ][node                     ] [Vulture] initializing ...
[2015-06-24 12:03:38,943][INFO ][plugins                  ] [Vulture] loaded [shield, license], sites []
[2015-06-24 12:03:39,023][INFO ][env                      ] [Vulture] using [1] data paths, mounts [[/ (/dev/mapper/VolGroup-lv_root)]], net usable_space [778.2mb], net total_space [6.4gb], types [ext4]
[2015-06-24 12:03:39,707][INFO ][transport                ] [Vulture] Using [org.elasticsearch.shield.transport.ShieldServerTransportService] as transport service, overridden by [shield]
[2015-06-24 12:03:39,707][INFO ][transport                ] [Vulture] Using [org.elasticsearch.shield.transport.netty.ShieldNettyTransport] as transport, overridden by [shield]
[2015-06-24 12:03:39,708][INFO ][http                     ] [Vulture] Using [org.elasticsearch.shield.transport.netty.ShieldNettyHttpServerTransport] as http transport, overridden by [shield]
[2015-06-24 12:03:45,583][INFO ][node                     ] [Vulture] initialized
[2015-06-24 12:03:45,584][INFO ][node                     ] [Vulture] starting ...
[2015-06-24 12:03:46,935][WARN ][shield.authc.esusers     ] [Vulture] no users found in users file [/usr/share/elasticsearch/config/shield/users]. use bin/shield/esusers to add users and role mappings
[2015-06-24 12:03:46,938][WARN ][shield.authc.esusers     ] [Vulture] no entries found in users_roles file [/usr/share/elasticsearch/config/shield/users_roles]. use bin/shield/esusers to add users and role mappings
[2015-06-24 12:03:47,112][INFO ][shield.transport         ] [Vulture] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.210.2.98:9300]}
[2015-06-24 12:03:47,151][INFO ][discovery                ] [Vulture] muostats/2AWazlkISzOQbmjAHbLYzQ
[2015-06-24 12:03:50,303][INFO ][cluster.service          ] [Vulture] detected_master [El Aguila][CRxtnq6oR3Gg5EJXZI-6Qw][ris-webstats01][inet[/10.210.2.26:9300]], added {[El Aguila][CRxtnq6oR3Gg5EJXZI-6Qw][ris-webstats01][inet[/10.210.2.26:9300]],}, reason: zen-disco-receive(from master [[El Aguila][CRxtnq6oR3Gg5EJXZI-6Qw][ris-webstats01][inet[/10.210.2.26:9300]]])
[2015-06-24 12:03:50,329][INFO ][shield.license           ] [Vulture] enabling license for [shield]
[2015-06-24 12:03:50,330][INFO ][license.plugin.core      ] [Vulture] license for [shield] - valid
[2015-06-24 12:03:50,340][ERROR][shield.license           ] [Vulture]
#
# Shield license will expire on [Sunday, July 19, 2015]. Cluster health, cluster stats and indices stats operations are
# blocked on Shield license expiration. All data operations (read and write) continue to work. If you
# have a new license, please update it. Otherwise, please reach out to your support contact.
#
[2015-06-24 12:03:50,538][INFO ][http                     ] [Vulture] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.210.2.98:9200]}
[2015-06-24 12:03:50,539][INFO ][node                     ] [Vulture] started

Hi Brian,

Have you changed any other settings? Every node needs to have the same Shield configuration files (users, users_roles, system_key, etc.). It doesn't look like the system key is the problem here, though. In fact, I do see the node as having joined the cluster, or so it seems:

[2015-06-24 12:03:50,303][INFO ][cluster.service          ] [Vulture] detected_master [El Aguila][CRxtnq6oR3Gg5EJXZI-6Qw][ris-webstats01][inet[/10.210.2.26:9300]], added {[El Aguila][CRxtnq6oR3Gg5EJXZI-6Qw][ris-webstats01][inet[/10.210.2.26:9300]],}, reason: zen-disco-receive(from master [[El Aguila][CRxtnq6oR3Gg5EJXZI-6Qw][ris-webstats01][inet[/10.210.2.26:9300]]])

What behavior are you seeing that makes you think the nodes are not forming a cluster?
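One quick way to confirm from either host is the cat nodes API; since Shield is installed, the request needs credentials (the username and password below are placeholders for a user you've created with esusers):

```sh
# List the nodes the cluster currently knows about (Shield requires auth)
curl -u admin:changeme 'http://10.210.2.26:9200/_cat/nodes?v'
```

If both ris-webstats01 and ris-webstats02 show up in the output, the cluster has formed.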

Also, it sounds like you are using multicast discovery, correct? If so, note that Shield only supports unicast discovery.
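Switching to unicast would mean something like the following in elasticsearch.yml on the new node (the host list here is an assumption based on the master's address in your log):

```yaml
# Disable multicast and point discovery at the existing node(s)
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.210.2.26:9300"]
```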

Ah: you've explained a few things.

Every node needs to have the same Shield configuration files (users, users_roles, system_key, etc.)

I did not understand this: my impression was that on joining, a new node would automagically 'get' the users, user roles, and so on.
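It won't; for now the file realm has to be kept in sync by hand. A sketch of one way to do that, using the hostnames and the default config path from the log above (adjust for your install):

```sh
# Copy the Shield file-realm config from the original node to the new one
# so that users, roles, and the system key are identical on both
scp ris-webstats01:/usr/share/elasticsearch/config/shield/{users,users_roles,system_key} \
    ris-webstats02:/usr/share/elasticsearch/config/shield/
```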

It sounds like you are using multicast discovery, correct?

Correct.

I did change over to unicast and watched the log file write something about 'creating shards'; then it promptly ran out of disk space and the process fell over.

Now might be a good time to provision that second disk. I'll report back if we have issues.

I'm glad that I was able to offer some clarity.

I did not understand this: my impression was that on joining, a new node would automagically 'get' the users, user roles, and so on.

API-based configuration and automatic distribution of users, roles, and other Shield configuration is on our roadmap and is targeted for our next major release.

Groovy - thanks.