Elasticsearch node doesn't rejoin the cluster after restart

I have a 3-node cluster configured as below:

Node1:

discovery.zen.ping.unicast.hosts: ["10.0.0.57"]
discovery.zen.minimum_master_nodes: 2

Node2:

discovery.zen.ping.unicast.hosts: ["10.0.0.57","10.0.0.58"]
discovery.zen.minimum_master_nodes: 2

Node3:

discovery.zen.ping.unicast.hosts: ["10.0.0.57","10.0.0.58","10.0.0.59"]
discovery.zen.minimum_master_nodes: 2

When either Node 2 or Node 3 is restarted, it comes back up and rejoins the cluster fine, but the problem occurs when Node 1 is restarted. Node 1 comes up fine but fails to join the cluster. I see that the unicast hosts list on Node 1 contains only its own IP, but once the other nodes have come up, wouldn't the cluster be aware of all the nodes? And when any node goes down, wouldn't the other nodes try pinging the failed node and act accordingly? Could somebody please clarify if I am doing anything wrong here? BTW, the version of Elasticsearch I am using is 2.1.1.

Nodes that are already running do not try to find new nodes. As far as I know, the unicast list is used only when a node starts up, so every node needs the IPs of the other nodes in its list in order to find at least one node in the cluster when it is started or restarted.
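In practice this means every node's unicast list should contain the same full set of known hosts, not just the nodes that happened to be started earlier. A hedged sketch of what the discovery section of each node's elasticsearch.yml could look like, reusing the IPs from the question (adjust to your own addresses):

```yaml
# elasticsearch.yml -- identical discovery settings on Node 1, Node 2, and Node 3.
# Any restarting node can then find a seed host, regardless of start order.
discovery.zen.ping.unicast.hosts: ["10.0.0.57", "10.0.0.58", "10.0.0.59"]
# Require 2 of the 3 master-eligible nodes, to avoid split brain.
discovery.zen.minimum_master_nodes: 2
```

With this, restarting Node 1 is no longer a special case: it pings 10.0.0.58 and 10.0.0.59, finds the running cluster, and rejoins it.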

But in my case, how can I know the IP addresses of Node 2 and Node 3 beforehand? When I start the 1st instance, all I have is its current IP; when starting the 2nd instance, I have the first IP and its own; and the same applies to subsequent nodes. So are you saying Elasticsearch doesn't handle this case at all?
What about an environment where you need to dynamically add/remove nodes based on demand/traffic? Obviously the initial nodes wouldn't be aware of nodes that join the cluster later, and if for some reason the initial nodes go down, there is no way for them to come back and rejoin the cluster?
Just a note that Cassandra handles this fine, and I expected Elasticsearch would do the same.

Once you have started the second node, you could update the config of the first one and then restart it.

In most deployments I have come across, there is generally a set of constant nodes, e.g. dedicated master nodes, that all the other nodes can have in their unicast list. As long as one of these can be found, nodes can easily be added and removed.
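A hedged sketch of that pattern, assuming the three original IPs are kept as stable, master-eligible seed nodes while data nodes come and go (the node role settings below are the Elasticsearch 2.x ones):

```yaml
# On each of the stable seed nodes (e.g. 10.0.0.57, .58, .59):
node.master: true
node.data: false
discovery.zen.ping.unicast.hosts: ["10.0.0.57", "10.0.0.58", "10.0.0.59"]
discovery.zen.minimum_master_nodes: 2

# On any dynamically added/removed data node -- only the stable
# master IPs need to be listed, never the other data nodes:
node.master: false
node.data: true
discovery.zen.ping.unicast.hosts: ["10.0.0.57", "10.0.0.58", "10.0.0.59"]
```

Because the unicast list only ever names the stable masters, data nodes can be scaled up or down without any node's config needing to know about the others.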