Master receives anonymous cluster-shutdown request

I have a 1-master, 7-slave elasticsearch setup on amazon ec2. Recently the ec2 instance hosting the master node failed and all applications on it were shut down. The instance has since been restarted and is stable, but the failure has somehow affected the master node: each time elasticsearch is started on it, the node receives an anonymous shutdown request and proceeds to shut down. There is no error message of any kind, just the log statement indicating that a shutdown request was received. I have made sure that no clients or browsers are connected to the cluster when the master is restarted, but I always end up with the same result. The entire log is below. Any help debugging this would be appreciated.
NOTE: Ignore the "not allocating" shard lines; I had shut down all the slaves while trying to debug the problem, so there were no other nodes to allocate the shards to.

[2015-06-15 14:14:47,372][INFO ][node ] [NoDataMasterNode] version[1.5.2], pid[18672], build[62ff986/2015-04-27T09:21:06Z]
[2015-06-15 14:14:47,372][INFO ][node ] [NoDataMasterNode] initializing ...
[2015-06-15 14:14:47,372][DEBUG][node ] [NoDataMasterNode] using home [/d/d1/elasticsearch/elasticsearchMasterNode], config [/d/d1/elasticsearch/elasticsearchMasterNode/config], data [[/d/d1/elasticsearch/elasticsearchMasterNode/data]], logs [/d/d1/elasticsearch/elasticsearchMasterNode/logs], work [/mnt/elasticsearch/tmp], plugins [/d/d1/elasticsearch/elasticsearchMasterNode/plugins]
[2015-06-15 14:14:47,387][DEBUG][plugins ] [NoDataMasterNode] lucene property is not set in plugin es-plugin.properties file. Skipping test.
[2015-06-15 14:14:47,387][DEBUG][plugins ] [NoDataMasterNode] [/d/d1/elasticsearch/elasticsearchMasterNode/plugins/cloud-aws/_site] directory does not exist.
[2015-06-15 14:14:47,392][DEBUG][plugins ] [NoDataMasterNode] [/d/d1/elasticsearch/elasticsearchMasterNode/plugins/cloud-aws/_site] directory does not exist.
[2015-06-15 14:14:47,396][INFO ][plugins ] [NoDataMasterNode] loaded [cloud-aws], sites [bigdesk, head]
[2015-06-15 14:14:47,427][DEBUG][common.compress.lzf ] using encoder [VanillaChunkDecoder] and decoder[{}] 
[2015-06-15 14:14:47,440][DEBUG][env ] [NoDataMasterNode] using node location [[/d/d1/elasticsearch/elasticsearchMasterNode/data/dataindexer/nodes/0]], local_node_id [0]
[2015-06-15 14:14:49,288][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [generic], type [cached], keep_alive [30s]
[2015-06-15 14:14:49,295][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [index], type [fixed], size [2], queue_size [200]
[2015-06-15 14:14:49,297][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [bulk], type [fixed], size [2], queue_size [50]
[2015-06-15 14:14:49,297][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [get], type [fixed], size [2], queue_size [1k]
[2015-06-15 14:14:49,297][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [search], type [fixed], size [6], queue_size [1k]
[2015-06-15 14:14:49,298][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [suggest], type [fixed], size [2], queue_size [1k]
[2015-06-15 14:14:49,298][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [percolate], type [fixed], size [2], queue_size [1k]
[2015-06-15 14:14:49,298][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
[2015-06-15 14:14:49,299][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [listener], type [fixed], size [1], queue_size [null]
[2015-06-15 14:14:49,299][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [flush], type [scaling], min [1], size [1], keep_alive [5m]
[2015-06-15 14:14:49,300][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [merge], type [scaling], min [1], size [1], keep_alive [5m]
[2015-06-15 14:14:49,300][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [refresh], type [scaling], min [1], size [1], keep_alive [5m]
[2015-06-15 14:14:49,300][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [warmer], type [scaling], min [1], size [1], keep_alive [5m]
[2015-06-15 14:14:49,300][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [snapshot], type [scaling], min [1], size [1], keep_alive [5m]
[2015-06-15 14:14:49,301][DEBUG][threadpool ] [NoDataMasterNode] creating thread_pool [optimize], type [fixed], size [1], queue_size [null]
[2015-06-15 14:14:49,320][DEBUG][monitor.jvm ] [NoDataMasterNode] enabled [true], last_gc_enabled [false], interval [1s], gc_threshold [{old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}}]
[2015-06-15 14:14:49,828][DEBUG][monitor.os ] [NoDataMasterNode] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@7e8910c] with refresh_interval [1s]
[2015-06-15 14:14:49,832][DEBUG][monitor.process ] [NoDataMasterNode] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@31b0afa] with refresh_interval [1s]
[2015-06-15 14:14:49,836][DEBUG][monitor.jvm ] [NoDataMasterNode] Using refresh_interval [1s]
[2015-06-15 14:14:49,837][DEBUG][monitor.network ] [NoDataMasterNode] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@1b3b9b3] with refresh_interval [5s]
[2015-06-15 14:14:49,842][DEBUG][monitor.network ] [NoDataMasterNode] net_info
host [ip-10-220-7-14]
eth0 display_name [eth0]
address [/fe80:0:0:0:1057:3eff:fe4e:e24e%2] [/10.220.7.14] 
mtu [9001] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo display_name [lo]
address [/0:0:0:0:0:0:0:1%1] [/127.0.0.1] 
mtu [65536] multicast [false] ptp [false] loopback [true] up [true] virtual [false]

[2015-06-15 14:14:49,848][DEBUG][monitor.fs ] [NoDataMasterNode] Using probe [org.elasticsearch.monitor.fs.SigarFsProbe@39c2132] with refresh_interval [1s]
[2015-06-15 14:14:49,854][DEBUG][common.netty ] using gathering [true]
[2015-06-15 14:14:49,889][DEBUG][discovery.zen.elect ] [NoDataMasterNode] using minimum_master_nodes [-1]
[2015-06-15 14:14:49,892][DEBUG][discovery.zen.ping.unicast] [NoDataMasterNode] using initial hosts [], with concurrent_connects [10]
[2015-06-15 14:14:49,894][DEBUG][discovery.zen ] [NoDataMasterNode] using ping.timeout [3s], join.timeout [1m], master_election.filter_client [true], master_election.filter_data [false]
[2015-06-15 14:14:49,895][DEBUG][discovery.zen.fd ] [NoDataMasterNode] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2015-06-15 14:14:49,898][DEBUG][discovery.zen.fd ] [NoDataMasterNode] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2015-06-15 14:14:50,277][DEBUG][script ] [NoDataMasterNode] using script cache with max_size [100], expire [null]
[2015-06-15 14:14:50,369][DEBUG][cluster.routing.allocation.decider] [NoDataMasterNode] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2015-06-15 14:14:50,370][DEBUG][cluster.routing.allocation.decider] [NoDataMasterNode] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2015-06-15 14:14:50,371][DEBUG][cluster.routing.allocation.decider] [NoDataMasterNode] using [cluster_concurrent_rebalance] with [2]
[2015-06-15 14:14:50,373][DEBUG][indices.recovery ] [NoDataMasterNode] using max_bytes_per_sec[100mb], concurrent_streams [3], file_chunk_size [512kb], translog_size [512kb], translog_ops [1000], and compress [true]
[2015-06-15 14:14:50,377][DEBUG][gateway.local ] [NoDataMasterNode] using initial_shards [quorum], list_timeout [30s]
[2015-06-15 14:14:50,470][DEBUG][http.netty ] [NoDataMasterNode] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb], receive_predictor[512kb->512kb], pipelining[true], pipelining_max_events[10000]
[2015-06-15 14:14:50,489][DEBUG][indices.store ] [NoDataMasterNode] using indices.store.throttle.type [MERGE], with index.store.throttle.max_bytes_per_sec [20mb]
[2015-06-15 14:14:50,490][DEBUG][indices.memory ] [NoDataMasterNode] using index_buffer_size [305.5mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2015-06-15 14:14:50,491][DEBUG][indices.cache.filter ] [NoDataMasterNode] using [node] weighted filter cache with size [10%], actual_size [305.5mb], expire [null], clean_interval [1m]
[2015-06-15 14:14:50,492][DEBUG][indices.fielddata.cache ] [NoDataMasterNode] using size [-1] [-1b], expire [null]
[2015-06-15 14:14:50,515][DEBUG][gateway.local.state.meta ] [NoDataMasterNode] using gateway.local.auto_import_dangled [YES], gateway.local.delete_timeout [30s], with gateway.local.dangling_timeout [2h]
[2015-06-15 14:14:50,569][DEBUG][gateway.local.state.meta ] [NoDataMasterNode] took 53ms to load state
[2015-06-15 14:14:50,573][DEBUG][bulk.udp ] [NoDataMasterNode] using enabled [false], host [null], port [9700-9800], bulk_actions [1000], bulk_size [5mb], flush_interval [5s], concurrent_requests [4]
[2015-06-15 14:14:50,662][INFO ][node ] [NoDataMasterNode] initialized
[2015-06-15 14:14:50,670][INFO ][node ] [NoDataMasterNode] starting ...
[2015-06-15 14:14:50,703][DEBUG][netty.channel.socket.nio.SelectorUtil] Using select timeout of 500
[2015-06-15 14:14:50,703][DEBUG][netty.channel.socket.nio.SelectorUtil] Epoll-bug workaround enabled = false
[2015-06-15 14:14:50,729][DEBUG][transport.netty ] [NoDataMasterNode] using profile[default], worker_count[4], port[9300-9400], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/3/6/1/1], receive_predictor[512kb->512kb]
[2015-06-15 14:14:50,757][DEBUG][transport.netty ] [NoDataMasterNode] Bound profile [default] to address [/0:0:0:0:0:0:0:0:9300]
[2015-06-15 14:14:50,759][INFO ][transport ] [NoDataMasterNode] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.220.7.14:9300]}
[2015-06-15 14:14:50,769][INFO ][discovery ] [NoDataMasterNode] dataindexer/aYPnhc2KRBKD2X5lwJl7AQ
[2015-06-15 14:14:50,770][DEBUG][cluster.service ] [NoDataMasterNode] processing [initial_join]: execute
[2015-06-15 14:14:50,770][DEBUG][cluster.service ] [NoDataMasterNode] processing [initial_join]: no change in cluster_state
[2015-06-15 14:14:50,770][TRACE][discovery.zen ] [NoDataMasterNode] starting to ping
[2015-06-15 14:14:50,774][TRACE][discovery.zen.ping.unicast] [NoDataMasterNode] [1] connecting to [NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}
[2015-06-15 14:14:50,794][DEBUG][transport.netty ] [NoDataMasterNode] connected to node [[NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}]
[2015-06-15 14:14:50,795][TRACE][discovery.zen.ping.unicast] [NoDataMasterNode] [1] connected to [NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}
[2015-06-15 14:14:50,795][TRACE][discovery.zen.ping.unicast] [NoDataMasterNode] [1] sending to [NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}
[2015-06-15 14:14:50,829][TRACE][discovery.zen.ping.unicast] [NoDataMasterNode] [1] received response from [NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}: [ping_response{node [[NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}], id[1], master [null], hasJoinedOnce [false], cluster_name[dataindexer]}, ping_response{node [[NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}], id[2], master [null], hasJoinedOnce [false], cluster_name[dataindexer]}]
[2015-06-15 14:14:54,775][TRACE][discovery.zen.ping.unicast] [NoDataMasterNode] [1] sending to [NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}
[2015-06-15 14:14:54,784][TRACE][discovery.zen.ping.unicast] [NoDataMasterNode] [1] received response from [NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}: [ping_response{node [[NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}], id[1], master [null], hasJoinedOnce [false], cluster_name[dataindexer]}, ping_response{node [[NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}], id[3], master [null], hasJoinedOnce [false], cluster_name[dataindexer]}, ping_response{node [[NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}], id[4], master [null], hasJoinedOnce [false], cluster_name[dataindexer]}]
[2015-06-15 14:14:58,784][TRACE][discovery.zen.ping.unicast] [NoDataMasterNode] [1] sending to [NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}
[2015-06-15 14:14:58,788][TRACE][discovery.zen.ping.unicast] [NoDataMasterNode] [1] received response from [NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}: [ping_response{node [[NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}], id[1], master [null], hasJoinedOnce [false], cluster_name[dataindexer]}, ping_response{node [[NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}], id[3], master [null], hasJoinedOnce [false], cluster_name[dataindexer]}, ping_response{node [[NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}], id[5], master [null], hasJoinedOnce [false], cluster_name[dataindexer]}, ping_response{node [[NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}], id[6], master [null], hasJoinedOnce [false], cluster_name[dataindexer]}]
[2015-06-15 14:14:58,789][TRACE][discovery.zen ] [NoDataMasterNode] full ping responses: {none}
[2015-06-15 14:14:58,789][DEBUG][discovery.zen ] [NoDataMasterNode] filtered ping responses: (filter_client[true], filter_data[false]) {none}
[2015-06-15 14:14:58,790][DEBUG][cluster.service ] [NoDataMasterNode] processing [zen-disco-join (elected_as_master)]: execute
[2015-06-15 14:14:58,795][DEBUG][cluster.service ] [NoDataMasterNode] cluster state updated, version [1], source [zen-disco-join (elected_as_master)]
[2015-06-15 14:14:58,796][INFO ][cluster.service ] [NoDataMasterNode] new_master [NoDataMasterNode][aYPnhc2KRBKD2X5lwJl7AQ][ip-10-220-7-14][inet[/10.220.7.14:9300]]{data=false, max_local_storage_nodes=1, master=true}, reason: zen-disco-join (elected_as_master)
[2015-06-15 14:14:58,796][DEBUG][cluster.service ] [NoDataMasterNode] publishing cluster state version 1
[2015-06-15 14:14:58,797][DEBUG][cluster.service ] [NoDataMasterNode] set local cluster state to version 1
[2015-06-15 14:14:58,799][DEBUG][river.cluster ] [NoDataMasterNode] processing [reroute_rivers_node_changed]: execute
[2015-06-15 14:14:58,799][DEBUG][river.cluster ] [NoDataMasterNode] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-06-15 14:14:58,799][TRACE][discovery.zen ] [NoDataMasterNode] cluster joins counter set to 1
[2015-06-15 14:14:58,799][DEBUG][cluster.service ] [NoDataMasterNode] processing [zen-disco-join (elected_as_master)]: done applying updated cluster_state (version: 1)
[2015-06-15 14:14:58,819][DEBUG][cluster.service ] [NoDataMasterNode] processing [local-gateway-elected-state]: execute
[2015-06-15 14:14:58,834][INFO ][http ] [NoDataMasterNode] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.220.7.14:9200]}
[2015-06-15 14:14:58,834][INFO ][node ] [NoDataMasterNode] started
[2015-06-15 14:14:58,844][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h2][5]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,845][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h1][1]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,845][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h2][3]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,845][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h2][2]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,845][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h2][1]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,845][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h2][2]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,845][DEBUG][gateway.local ] [NoDataMasterNode] [test_result][3]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,845][DEBUG][gateway.local ] [NoDataMasterNode] [test_result][4]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,846][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h2][6]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,846][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h2][6]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,846][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h1][4]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,846][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h1][4]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,846][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h1][6]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,846][DEBUG][gateway.local ] [NoDataMasterNode] [testdocument][0]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,846][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h1][0]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,846][DEBUG][gateway.local ] [NoDataMasterNode] [test_result][5]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:14:58,847][DEBUG][cluster.service ] [NoDataMasterNode] cluster state updated, version [2], source [local-gateway-elected-state]
[2015-06-15 14:14:58,848][DEBUG][cluster.service ] [NoDataMasterNode] publishing cluster state version 2
[2015-06-15 14:14:58,848][DEBUG][cluster.service ] [NoDataMasterNode] set local cluster state to version 2
[2015-06-15 14:14:58,849][DEBUG][river.cluster ] [NoDataMasterNode] processing [reroute_rivers_node_changed]: execute
[2015-06-15 14:14:58,849][DEBUG][river.cluster ] [NoDataMasterNode] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-06-15 14:14:58,872][INFO ][gateway ] [NoDataMasterNode] recovered [6] indices into cluster_state
[2015-06-15 14:14:58,872][DEBUG][cluster.service ] [NoDataMasterNode] processing [local-gateway-elected-state]: done applying updated cluster_state (version: 2)
[2015-06-15 14:15:08,797][DEBUG][cluster.service ] [NoDataMasterNode] processing [routing-table-updater]: execute
[2015-06-15 14:15:08,800][DEBUG][gateway.local ] [NoDataMasterNode] [testdocument][3]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,800][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h2][3]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,800][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h1][0]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,800][DEBUG][gateway.local ] [NoDataMasterNode] [test_result][2]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,800][DEBUG][gateway.local ] [NoDataMasterNode] [testdocument][1]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,800][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h1][4]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,800][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h2][3]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,801][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h2][6]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,801][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h2][5]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,801][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h1][6]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,801][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h2][1]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,801][DEBUG][gateway.local ] [NoDataMasterNode] [test_result][3]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,801][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h1][5]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,801][DEBUG][gateway.local ] [NoDataMasterNode] [test_result][0]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,802][DEBUG][gateway.local ] [NoDataMasterNode] [test_result][4]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,802][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h1][1]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,802][DEBUG][gateway.local ] [NoDataMasterNode] [testdocument][4]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,802][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h1][3]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,802][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h2][2]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,802][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h1][0]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,802][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h2][2]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,803][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h2][0]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,803][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h2][6]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,803][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h1][3]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,803][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h2][0]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,803][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h1][2]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,803][DEBUG][gateway.local ] [NoDataMasterNode] [testdocument][6]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,803][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h1][6]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,804][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h1][2]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,804][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h2][1]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,804][DEBUG][gateway.local ] [NoDataMasterNode] [testdocument][2]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,804][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h1][1]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,804][DEBUG][gateway.local ] [NoDataMasterNode] [test_result][5]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,804][DEBUG][gateway.local ] [NoDataMasterNode] [test_result][6]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,804][DEBUG][gateway.local ] [NoDataMasterNode] [testdocument][5]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,804][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h2][5]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,805][DEBUG][gateway.local ] [NoDataMasterNode] [test_result][1]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,805][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h1][5]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,805][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2014h2][4]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,805][DEBUG][gateway.local ] [NoDataMasterNode] [testdocument][0]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,805][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h2][4]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,805][DEBUG][gateway.local ] [NoDataMasterNode] [sourcedocument2015h1][4]: not allocating, number_of_allocated_shards_found [0], required_number [1]
[2015-06-15 14:15:08,806][DEBUG][cluster.service ] [NoDataMasterNode] processing [routing-table-updater]: no change in cluster_state
[2015-06-15 14:15:14,682][INFO ][action.admin.cluster.node.shutdown] [NoDataMasterNode] [cluster_shutdown]: requested, shutting down in [1s]
[2015-06-15 14:15:15,684][INFO ][action.admin.cluster.node.shutdown] [NoDataMasterNode] [cluster_shutdown]: done shutting down all nodes except master, proceeding to master
[2015-06-15 14:15:15,687][INFO ][action.admin.cluster.node.shutdown] [NoDataMasterNode] shutting down in [200ms]
[2015-06-15 14:15:15,888][INFO ][action.admin.cluster.node.shutdown] [NoDataMasterNode] initiating requested shutdown...
[2015-06-15 14:15:15,888][INFO ][node ] [NoDataMasterNode] stopping ...
[2015-06-15 14:15:15,902][INFO ][node ] [NoDataMasterNode] stopped
[2015-06-15 14:15:15,903][INFO ][node ] [NoDataMasterNode] closing ...
[2015-06-15 14:15:15,927][INFO ][node ] [NoDataMasterNode] closed
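
For what it's worth, the [cluster_shutdown] lines at the end look like the output of the 1.x nodes shutdown API, which any client with plain HTTP access to port 9200 can call. If that is right, a request along the following lines (a hypothetical illustration only, using the publish_address from the log) would produce exactly those log entries, so whatever is sending it must have network access to the node at startup:

```python
# Hypothetical illustration only -- do NOT run this against a live cluster.
# As far as I can tell, a plain HTTP POST like this is all it takes to
# produce the "[cluster_shutdown]: requested" lines above.
import urllib.request

# publish_address taken from the log above
url = "http://10.220.7.14:9200/_cluster/nodes/_shutdown"

request = urllib.request.Request(url, data=b"", method="POST")
with urllib.request.urlopen(request) as response:
    print(response.read().decode())
```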

I'd install Shield and then turn on auditing to see where this is coming from.

What is the name of your ES cluster?

@Harlin_ES The cluster name is dataindexer

@warkolm That is a good suggestion. We have moved on from this (see the comment below for the reason), but I will be installing Shield on future instances.

[SOLUTION] Disabling the shutdown command would be more of a hack than a real solution: it only masks the problem, and it would also prevent our automated monitoring tool from shutting the system down safely in the future. I couldn't determine the cause of the problem, but we were able to rule out elasticsearch as the source; the shutdown request seems to have been coming from a possibly corrupted instance. As an alternative, I zipped up the current state of elasticsearch and copied it over to a new ec2 instance, where it is now working perfectly. Since that worked for me, I will mark this as answered. I would like to explore the cause further, but I really don't have the cycles to spend figuring out what corrupted the original system.
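
For anyone who wants to do the same, here is a minimal sketch of the archive step I mean, assuming the node home directory from the log above and that elasticsearch has been stopped first; the resulting archive is copied to the new instance (e.g. with scp) and unpacked in the same location before elasticsearch is started there:

```python
# Minimal sketch of the "zip up and move" step, assuming the node home
# directory shown in the log above; stop elasticsearch before running it.
import tarfile

ES_HOME = "/d/d1/elasticsearch/elasticsearchMasterNode"  # path from the log
ARCHIVE = "/tmp/elasticsearchMasterNode.tar.gz"          # hypothetical target

with tarfile.open(ARCHIVE, "w:gz") as tar:
    # data/ holds the cluster state and index metadata, config/ the settings
    tar.add(ES_HOME, arcname="elasticsearchMasterNode")

print("wrote", ARCHIVE)
# Copy the archive to the new ec2 instance, unpack it in the same location,
# and start elasticsearch there.
```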