The problem: ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];

This error occurred when I restarted the cluster.
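While the block is in place, the same condition is visible over the REST API. A minimal sketch (host and HTTP ports as bound in the log below; adjust to your own nodes):

# health reports status red together with the current node count
curl -XGET 'http://10.60.9.4:9200/_cluster/health?pretty'
# on 1.x the cluster state response can be filtered down to the global blocks,
# which lists the SERVICE_UNAVAILABLE/1 "state not recovered" block
curl -XGET 'http://10.60.9.4:9200/_cluster/state/blocks?pretty'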

Debug log:
[2015-04-01 11:25:09,743][DEBUG][node ] [es_node_4_1]
using home [/home/nat/elasticsearch-1.1.2], config
[/home/nat/elasticsearch-1.1.2/config], data [[/home/nat/esdata.d,
/natlog1/nat/esdata.d, /natlog4/nat/esdata.d, /natlog5/nat/esdata.d,
/natlog6/nat/esdata.d, /natlog7/nat/esdata.d, /natlog8/nat/esdata.d]], logs
[/home/nat/elasticsearch-1.1.2/logs], work
[/home/nat/elasticsearch-1.1.2/work], plugins
[/home/nat/elasticsearch-1.1.2/plugins]
[2015-04-01 11:25:09,765][INFO ][plugins ] [es_node_4_1]
loaded [], sites [head, bigdesk]
[2015-04-01 11:25:09,775][DEBUG][common.compress.lzf ] using
[UnsafeChunkDecoder] decoder
[2015-04-01 11:25:09,787][DEBUG][env ] [es_node_4_1]
using node location [[/home/nat/esdata.d/elasticsearch_log/nodes/0,
/natlog1/nat/esdata.d/elasticsearch_log/nodes/0,
/natlog4/nat/esdata.d/elasticsearch_log/nodes/0,
/natlog5/nat/esdata.d/elasticsearch_log/nodes/0,
/natlog6/nat/esdata.d/elasticsearch_log/nodes/0,
/natlog7/nat/esdata.d/elasticsearch_log/nodes/0,
/natlog8/nat/esdata.d/elasticsearch_log/nodes/0]], local_node_id [0]
[2015-04-01 11:25:09,825][INFO ][node ] [es_node_4_2]
version[1.1.2], pid[53749], build[e511f7b/2014-05-22T12:27:39Z]
[2015-04-01 11:25:09,826][INFO ][node ] [es_node_4_2]
initializing ...
[2015-04-01 11:25:09,826][DEBUG][node ] [es_node_4_2]
using home [/home/nat/elasticsearch-1.1.2], config
[/home/nat/elasticsearch-1.1.2/config], data [[/home/nat/esdata.d,
/natlog2/nat/esdata.d, /natlog3/nat/esdata.d, /natlog9/nat/esdata.d,
/natlog10/nat/esdata.d, /natlog11/nat/esdata.d, /natlog12/nat/esdata.d,
/natlog13/nat/esdata.d]], logs [/home/nat/elasticsearch-1.1.2/logs], work
[/home/nat/elasticsearch-1.1.2/work], plugins
[/home/nat/elasticsearch-1.1.2/plugins]
[2015-04-01 11:25:09,862][INFO ][plugins ] [es_node_4_2]
loaded [], sites [head, bigdesk]
[2015-04-01 11:25:09,879][DEBUG][common.compress.lzf ] using
[UnsafeChunkDecoder] decoder
[2015-04-01 11:25:09,901][DEBUG][env ] [es_node_4_2]
using node location [[/home/nat/esdata.d/elasticsearch_log/nodes/1,
/natlog2/nat/esdata.d/elasticsearch_log/nodes/1,
/natlog3/nat/esdata.d/elasticsearch_log/nodes/1,
/natlog9/nat/esdata.d/elasticsearch_log/nodes/1,
/natlog10/nat/esdata.d/elasticsearch_log/nodes/1,
/natlog11/nat/esdata.d/elasticsearch_log/nodes/1,
/natlog12/nat/esdata.d/elasticsearch_log/nodes/1,
/natlog13/nat/esdata.d/elasticsearch_log/nodes/1]], local_node_id [1]
[2015-04-01 11:25:11,258][DEBUG][threadpool ] [es_node_4_1]
creating thread_pool [generic], type [cached], keep_alive [30s]
[2015-04-01 11:25:11,286][DEBUG][threadpool ] [es_node_4_1]
creating thread_pool [index], type [fixed], size [32], queue_size [200]
[2015-04-01 11:25:11,294][DEBUG][threadpool ] [es_node_4_1]
creating thread_pool [bulk], type [fixed], size [20], queue_size [32]
[2015-04-01 11:25:11,295][DEBUG][threadpool ] [es_node_4_1]
creating thread_pool [get], type [fixed], size [32], queue_size [1k]
[2015-04-01 11:25:11,296][DEBUG][threadpool ] [es_node_4_1]
creating thread_pool [search], type [fixed], size [96], queue_size [1k]
[2015-04-01 11:25:11,297][DEBUG][threadpool ] [es_node_4_1]
creating thread_pool [suggest], type [fixed], size [32], queue_size [1k]
[2015-04-01 11:25:11,297][DEBUG][threadpool ] [es_node_4_1]
creating thread_pool [percolate], type [fixed], size [32], queue_size [1k]
[2015-04-01 11:25:11,298][DEBUG][threadpool ] [es_node_4_1]
creating thread_pool [management], type [scaling], min [1], size [5],
keep_alive [5m]
[2015-04-01 11:25:11,300][DEBUG][threadpool ] [es_node_4_1]
creating thread_pool [flush], type [scaling], min [1], size [5], keep_alive
[5m]
[2015-04-01 11:25:11,301][DEBUG][threadpool ] [es_node_4_1]
creating thread_pool [merge], type [fixed], size [4], queue_size [32]
[2015-04-01 11:25:11,302][DEBUG][threadpool ] [es_node_4_1]
creating thread_pool [refresh], type [scaling], min [1], size [10],
keep_alive [5m]
[2015-04-01 11:25:11,303][DEBUG][threadpool ] [es_node_4_1]
creating thread_pool [warmer], type [scaling], min [1], size [5],
keep_alive [5m]
[2015-04-01 11:25:11,304][DEBUG][threadpool ] [es_node_4_1]
creating thread_pool [snapshot], type [scaling], min [1], size [5],
keep_alive [5m]
[2015-04-01 11:25:11,305][DEBUG][threadpool ] [es_node_4_1]
creating thread_pool [optimize], type [fixed], size [1], queue_size [null]
[2015-04-01 11:25:11,348][DEBUG][threadpool ] [es_node_4_2]
creating thread_pool [generic], type [cached], keep_alive [30s]
[2015-04-01 11:25:11,348][DEBUG][transport.netty ] [es_node_4_1]
using worker_count[64], port[9300-9400], bind_host[null],
publish_host[null], compress[false], connect_timeout[30s],
connections_per_node[2/3/6/1/1], receive_predictor[512kb->512kb]
[2015-04-01 11:25:11,357][DEBUG][threadpool ] [es_node_4_2]
creating thread_pool [index], type [fixed], size [32], queue_size [200]
[2015-04-01 11:25:11,362][DEBUG][threadpool ] [es_node_4_2]
creating thread_pool [bulk], type [fixed], size [20], queue_size [32]
[2015-04-01 11:25:11,363][DEBUG][threadpool ] [es_node_4_2]
creating thread_pool [get], type [fixed], size [32], queue_size [1k]
[2015-04-01 11:25:11,364][DEBUG][threadpool ] [es_node_4_2]
creating thread_pool [search], type [fixed], size [96], queue_size [1k]
[2015-04-01 11:25:11,364][DEBUG][threadpool ] [es_node_4_2]
creating thread_pool [suggest], type [fixed], size [32], queue_size [1k]
[2015-04-01 11:25:11,365][DEBUG][threadpool ] [es_node_4_2]
creating thread_pool [percolate], type [fixed], size [32], queue_size [1k]
[2015-04-01 11:25:11,366][DEBUG][threadpool ] [es_node_4_2]
creating thread_pool [management], type [scaling], min [1], size [5],
keep_alive [5m]
[2015-04-01 11:25:11,367][DEBUG][threadpool ] [es_node_4_2]
creating thread_pool [flush], type [scaling], min [1], size [5], keep_alive
[5m]
[2015-04-01 11:25:11,368][DEBUG][threadpool ] [es_node_4_2]
creating thread_pool [merge], type [fixed], size [4], queue_size [32]
[2015-04-01 11:25:11,368][DEBUG][threadpool ] [es_node_4_2]
creating thread_pool [refresh], type [scaling], min [1], size [10],
keep_alive [5m]
[2015-04-01 11:25:11,369][DEBUG][threadpool ] [es_node_4_2]
creating thread_pool [warmer], type [scaling], min [1], size [5],
keep_alive [5m]
[2015-04-01 11:25:11,370][DEBUG][threadpool ] [es_node_4_2]
creating thread_pool [snapshot], type [scaling], min [1], size [5],
keep_alive [5m]
[2015-04-01 11:25:11,371][DEBUG][threadpool ] [es_node_4_2]
creating thread_pool [optimize], type [fixed], size [1], queue_size [null]
[2015-04-01 11:25:11,376][DEBUG][discovery.zen.ping.unicast] [es_node_4_1]
using initial hosts [10.60.9.4:9300, 10.60.9.4:9301, 10.60.9.5:9300,
10.60.9.5:9301], with concurrent_connects [10]
[2015-04-01 11:25:11,380][DEBUG][discovery.zen ] [es_node_4_1]
using ping.timeout [3s], master_election.filter_client [true],
master_election.filter_data [false]
[2015-04-01 11:25:11,381][DEBUG][discovery.zen.elect ] [es_node_4_1]
using minimum_master_nodes [3]
[2015-04-01 11:25:11,382][DEBUG][discovery.zen.fd ] [es_node_4_1]
[master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2015-04-01 11:25:11,398][DEBUG][discovery.zen.fd ] [es_node_4_1]
[node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2015-04-01 11:25:11,400][DEBUG][transport.netty ] [es_node_4_2]
using worker_count[64], port[9301], bind_host[null], publish_host[null],
compress[false], connect_timeout[30s], connections_per_node[2/3/6/1/1],
receive_predictor[512kb->512kb]
[2015-04-01 11:25:11,411][DEBUG][discovery.zen.ping.unicast] [es_node_4_2]
using initial hosts [10.60.9.4:9300, 10.60.9.4:9301, 10.60.9.5:9300,
10.60.9.5:9301], with concurrent_connects [10]
[2015-04-01 11:25:11,414][DEBUG][discovery.zen ] [es_node_4_2]
using ping.timeout [3s], master_election.filter_client [true],
master_election.filter_data [false]
[2015-04-01 11:25:11,415][DEBUG][discovery.zen.elect ] [es_node_4_2]
using minimum_master_nodes [3]
[2015-04-01 11:25:11,416][DEBUG][discovery.zen.fd ] [es_node_4_2]
[master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2015-04-01 11:25:11,428][DEBUG][discovery.zen.fd ] [es_node_4_2]
[node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2015-04-01 11:25:11,444][DEBUG][monitor.jvm ] [es_node_4_1]
enabled [true], last_gc_enabled [false], interval [1s], gc_threshold
[{old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000,
debugThreshold=2000}, default=GcThreshold{name='default',
warnThreshold=10000, infoThreshold=5000, debugThreshold=2000},
young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700,
debugThreshold=400}}]
[2015-04-01 11:25:11,461][DEBUG][monitor.jvm ] [es_node_4_2]
enabled [true], last_gc_enabled [false], interval [1s], gc_threshold
[{old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000,
debugThreshold=2000}, default=GcThreshold{name='default',
warnThreshold=10000, infoThreshold=5000, debugThreshold=2000},
young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700,
debugThreshold=400}}]
[2015-04-01 11:25:11,960][DEBUG][monitor.os ] [es_node_4_1]
Using probe [org.elasticsearch.monitor.os.SigarOsProbe@4d8ef117] with
refresh_interval [1s]
[2015-04-01 11:25:11,971][DEBUG][monitor.process ] [es_node_4_1]
Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@52ed3b53]
with refresh_interval [1s]
[2015-04-01 11:25:11,974][DEBUG][monitor.os ] [es_node_4_2]
Using probe [org.elasticsearch.monitor.os.SigarOsProbe@4fe2fe5d] with
refresh_interval [1s]
[2015-04-01 11:25:11,976][DEBUG][monitor.jvm ] [es_node_4_1]
Using refresh_interval [1s]
[2015-04-01 11:25:11,976][DEBUG][monitor.network ] [es_node_4_1]
Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@2c18b492]
with refresh_interval [5s]
[2015-04-01 11:25:11,981][DEBUG][monitor.process ] [es_node_4_2]
Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@5230b601]
with refresh_interval [1s]
[2015-04-01 11:25:11,986][DEBUG][monitor.jvm ] [es_node_4_2]
Using refresh_interval [1s]
[2015-04-01 11:25:11,987][DEBUG][monitor.network ] [es_node_4_2]
Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@44f1b25e]
with refresh_interval [5s]
[2015-04-01 11:25:12,005][DEBUG][monitor.network ] [es_node_4_1]
net_info
host [CRXJ-MONITOR-1]
bond1 display_name [bond1]
address [/fe80:0:0:0:2e44:fdff:fe84:a8fe%7] [/192.168.129.4]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual
[false]
bond0 display_name [bond0]
address [/fe80:0:0:0:2e44:fdff:fe84:a8fc%6] [/10.60.9.4]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual
[false]
lo display_name [lo]
address [/0:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16436] multicast [false] ptp [false] loopback [true] up [true] virtual
[false]

[2015-04-01 11:25:12,005][DEBUG][monitor.network ] [es_node_4_2]
net_info
host [CRXJ-MONITOR-1]
bond1 display_name [bond1]
address [/fe80:0:0:0:2e44:fdff:fe84:a8fe%7] [/192.168.129.4]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual
[false]
bond0 display_name [bond0]
address [/fe80:0:0:0:2e44:fdff:fe84:a8fc%6] [/10.60.9.4]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual
[false]
lo display_name [lo]
address [/0:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16436] multicast [false] ptp [false] loopback [true] up [true] virtual
[false]

[2015-04-01 11:25:12,042][DEBUG][monitor.fs ] [es_node_4_1]
Using probe [org.elasticsearch.monitor.fs.SigarFsProbe@32552379] with
refresh_interval [1s]
[2015-04-01 11:25:12,047][DEBUG][monitor.fs ] [es_node_4_2]
Using probe [org.elasticsearch.monitor.fs.SigarFsProbe@be389b8] with
refresh_interval [1s]
[2015-04-01 11:25:12,395][DEBUG][indices.store ] [es_node_4_1]
using indices.store.throttle.type [none], with
index.store.throttle.max_bytes_per_sec [20mb]
[2015-04-01 11:25:12,404][DEBUG][indices.store ] [es_node_4_2]
using indices.store.throttle.type [none], with
index.store.throttle.max_bytes_per_sec [20mb]
[2015-04-01 11:25:12,410][DEBUG][script ] [es_node_4_1]
using script cache with max_size [500], expire [null]
[2015-04-01 11:25:12,416][DEBUG][cluster.routing.allocation.decider]
[es_node_4_1] using node_concurrent_recoveries [2],
node_initial_primaries_recoveries [4]
[2015-04-01 11:25:12,417][DEBUG][cluster.routing.allocation.decider]
[es_node_4_1] using [cluster.routing.allocation.allow_rebalance] with
[indices_all_active]
[2015-04-01 11:25:12,417][DEBUG][cluster.routing.allocation.decider]
[es_node_4_1] using [cluster_concurrent_rebalance] with [2]
[2015-04-01 11:25:12,419][DEBUG][script ] [es_node_4_2]
using script cache with max_size [500], expire [null]
[2015-04-01 11:25:12,422][DEBUG][gateway.local ] [es_node_4_1]
using initial_shards [quorum], list_timeout [30s]
[2015-04-01 11:25:12,425][DEBUG][cluster.routing.allocation.decider]
[es_node_4_2] using node_concurrent_recoveries [2],
node_initial_primaries_recoveries [4]
[2015-04-01 11:25:12,426][DEBUG][cluster.routing.allocation.decider]
[es_node_4_2] using [cluster.routing.allocation.allow_rebalance] with
[indices_all_active]
[2015-04-01 11:25:12,427][DEBUG][cluster.routing.allocation.decider]
[es_node_4_2] using [cluster_concurrent_rebalance] with [2]
[2015-04-01 11:25:12,431][DEBUG][gateway.local ] [es_node_4_2]
using initial_shards [quorum], list_timeout [30s]
[2015-04-01 11:25:12,438][DEBUG][indices.recovery ] [es_node_4_1]
using max_bytes_per_sec[20mb], concurrent_streams [3], file_chunk_size
[512kb], translog_size [512kb], translog_ops [1000], and compress [true]
[2015-04-01 11:25:12,448][DEBUG][indices.recovery ] [es_node_4_2]
using max_bytes_per_sec[20mb], concurrent_streams [3], file_chunk_size
[512kb], translog_size [512kb], translog_ops [1000], and compress [true]
[2015-04-01 11:25:12,583][DEBUG][http.netty ] [es_node_4_1]
using max_chunk_size[8kb], max_header_size[8kb],
max_initial_line_length[4kb], max_content_length[100mb],
receive_predictor[512kb->512kb]
[2015-04-01 11:25:12,589][DEBUG][indices.memory ] [es_node_4_1]
using index_buffer_size [2.6gb], with min_shard_index_buffer_size [4mb],
max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2015-04-01 11:25:12,590][DEBUG][indices.cache.filter ] [es_node_4_1]
using [node] weighted filter cache with size [20%], actual_size [1.5gb],
expire [null], clean_interval [1m]
[2015-04-01 11:25:12,591][DEBUG][indices.fielddata.cache ] [es_node_4_1]
using size [25%] [1.9gb], expire [null]
[2015-04-01 11:25:12,594][DEBUG][http.netty ] [es_node_4_2]
using max_chunk_size[8kb], max_header_size[8kb],
max_initial_line_length[4kb], max_content_length[100mb],
receive_predictor[512kb->512kb]
[2015-04-01 11:25:12,601][DEBUG][indices.memory ] [es_node_4_2]
using index_buffer_size [2.6gb], with min_shard_index_buffer_size [4mb],
max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2015-04-01 11:25:12,602][DEBUG][indices.cache.filter ] [es_node_4_2]
using [node] weighted filter cache with size [20%], actual_size [1.5gb],
expire [null], clean_interval [1m]
[2015-04-01 11:25:12,604][DEBUG][indices.fielddata.cache ] [es_node_4_2]
using size [25%] [1.9gb], expire [null]
[2015-04-01 11:25:12,609][DEBUG][gateway.local.state.meta ] [es_node_4_1]
using gateway.local.auto_import_dangled [YES], with
gateway.local.dangling_timeout [2h]
[2015-04-01 11:25:12,622][DEBUG][gateway.local.state.meta ] [es_node_4_2]
using gateway.local.auto_import_dangled [YES], with
gateway.local.dangling_timeout [2h]
[2015-04-01 11:25:12,801][DEBUG][gateway.local.state.meta ] [es_node_4_1]
took 191ms to load state
[2015-04-01 11:25:12,853][DEBUG][gateway.local.state.meta ] [es_node_4_2]
took 230ms to load state
[2015-04-01 11:25:13,112][DEBUG][gateway.local.state.shards] [es_node_4_2]
took 256ms to load started shards state
[2015-04-01 11:25:13,118][DEBUG][bulk.udp ] [es_node_4_2]
using enabled [false], host [null], port [9700-9800], bulk_actions [1000],
bulk_size [5mb], flush_interval [5s], concurrent_requests [4]
[2015-04-01 11:25:13,126][DEBUG][cluster.routing.allocation.decider]
[es_node_4_2] using node_concurrent_recoveries [2],
node_initial_primaries_recoveries [4]
[2015-04-01 11:25:13,127][DEBUG][cluster.routing.allocation.decider]
[es_node_4_2] using [cluster.routing.allocation.allow_rebalance] with
[indices_all_active]
[2015-04-01 11:25:13,127][DEBUG][cluster.routing.allocation.decider]
[es_node_4_2] using [cluster_concurrent_rebalance] with [2]
[2015-04-01 11:25:13,129][DEBUG][cluster.routing.allocation.decider]
[es_node_4_2] using node_concurrent_recoveries [2],
node_initial_primaries_recoveries [4]
[2015-04-01 11:25:13,129][DEBUG][cluster.routing.allocation.decider]
[es_node_4_2] using [cluster.routing.allocation.allow_rebalance] with
[indices_all_active]
[2015-04-01 11:25:13,129][DEBUG][cluster.routing.allocation.decider]
[es_node_4_2] using [cluster_concurrent_rebalance] with [2]
[2015-04-01 11:25:13,154][INFO ][node ] [es_node_4_2]
initialized
[2015-04-01 11:25:13,154][INFO ][node ] [es_node_4_2]
starting ...
[2015-04-01 11:25:13,178][DEBUG][netty.channel.socket.nio.SelectorUtil]
Using select timeout of 500
[2015-04-01 11:25:13,178][DEBUG][netty.channel.socket.nio.SelectorUtil]
Epoll-bug workaround enabled = false
[2015-04-01 11:25:13,277][DEBUG][gateway.local.state.shards] [es_node_4_1]
took 473ms to load started shards state
[2015-04-01 11:25:13,283][DEBUG][bulk.udp ] [es_node_4_1]
using enabled [false], host [null], port [9700-9800], bulk_actions [1000],
bulk_size [5mb], flush_interval [5s], concurrent_requests [4]
[2015-04-01 11:25:13,291][DEBUG][cluster.routing.allocation.decider]
[es_node_4_1] using node_concurrent_recoveries [2],
node_initial_primaries_recoveries [4]
[2015-04-01 11:25:13,292][DEBUG][cluster.routing.allocation.decider]
[es_node_4_1] using [cluster.routing.allocation.allow_rebalance] with
[indices_all_active]
[2015-04-01 11:25:13,292][DEBUG][cluster.routing.allocation.decider]
[es_node_4_1] using [cluster_concurrent_rebalance] with [2]
[2015-04-01 11:25:13,294][DEBUG][cluster.routing.allocation.decider]
[es_node_4_1] using node_concurrent_recoveries [2],
node_initial_primaries_recoveries [4]
[2015-04-01 11:25:13,294][DEBUG][cluster.routing.allocation.decider]
[es_node_4_1] using [cluster.routing.allocation.allow_rebalance] with
[indices_all_active]
[2015-04-01 11:25:13,294][DEBUG][cluster.routing.allocation.decider]
[es_node_4_1] using [cluster_concurrent_rebalance] with [2]
[2015-04-01 11:25:13,318][INFO ][node ] [es_node_4_1]
initialized
[2015-04-01 11:25:13,318][INFO ][node ] [es_node_4_1]
starting ...
[2015-04-01 11:25:13,350][DEBUG][netty.channel.socket.nio.SelectorUtil]
Using select timeout of 500
[2015-04-01 11:25:13,351][DEBUG][netty.channel.socket.nio.SelectorUtil]
Epoll-bug workaround enabled = false
[2015-04-01 11:25:13,354][DEBUG][transport.netty ] [es_node_4_2]
Bound to address [/0:0:0:0:0:0:0:0:9301]
[2015-04-01 11:25:13,355][INFO ][transport ] [es_node_4_2]
bound_address {inet[/0:0:0:0:0:0:0:0:9301]}, publish_address
{inet[/10.60.9.4:9301]}
[2015-04-01 11:25:13,420][DEBUG][transport.netty ] [es_node_4_2]
connected to node [[#zen_unicast_4#][CRXJ-MONITOR-1][inet[/10.60.9.5:9301]]]
[2015-04-01 11:25:13,420][DEBUG][transport.netty ] [es_node_4_2]
connected to node [[#zen_unicast_3#][CRXJ-MONITOR-1][inet[/10.60.9.5:9300]]]
[2015-04-01 11:25:13,445][DEBUG][transport.netty ] [es_node_4_2]
connected to node
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}]
[2015-04-01 11:25:13,543][DEBUG][transport.netty ] [es_node_4_1]
Bound to address [/0:0:0:0:0:0:0:0:9300]
[2015-04-01 11:25:13,544][INFO ][transport ] [es_node_4_1]
bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address
{inet[/10.60.9.4:9300]}
[2015-04-01 11:25:13,611][DEBUG][transport.netty ] [es_node_4_1]
connected to node [[#zen_unicast_3#][CRXJ-MONITOR-1][inet[/10.60.9.5:9300]]]
[2015-04-01 11:25:13,611][DEBUG][transport.netty ] [es_node_4_1]
connected to node [[#zen_unicast_4#][CRXJ-MONITOR-1][inet[/10.60.9.5:9301]]]
[2015-04-01 11:25:13,612][DEBUG][transport.netty ] [es_node_4_1]
connected to node
[[es_node_4_1][w58vSDleSS2geuwMrChlxg][CRXJ-MONITOR-1][inet[/10.60.9.4:9300]]{rack_id=rack_node_4,
max_local_storage_nodes=2}]
[2015-04-01 11:25:13,611][DEBUG][transport.netty ] [es_node_4_1]
connected to node [[#zen_unicast_2#][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]]
[2015-04-01 11:25:14,873][DEBUG][transport.netty ] [es_node_4_2]
connected to node [[#zen_unicast_1#][CRXJ-MONITOR-1][inet[/10.60.9.4:9300]]]
[2015-04-01 11:25:16,388][DEBUG][transport.netty ] [es_node_4_2]
disconnected from [[#zen_unicast_4#][CRXJ-MONITOR-1][inet[/10.60.9.5:9301]]]
[2015-04-01 11:25:16,407][DEBUG][transport.netty ] [es_node_4_2]
disconnected from
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}]
[2015-04-01 11:25:16,409][DEBUG][transport.netty ] [es_node_4_2]
disconnected from [[#zen_unicast_1#][CRXJ-MONITOR-1][inet[/10.60.9.4:9300]]]
[2015-04-01 11:25:16,410][DEBUG][transport.netty ] [es_node_4_2]
disconnected from [[#zen_unicast_3#][CRXJ-MONITOR-1][inet[/10.60.9.5:9300]]]
[2015-04-01 11:25:16,411][DEBUG][discovery.zen ] [es_node_4_2]
filtered ping responses: (filter_client[true], filter_data[false])
--> target
[[es_node_5_2][UhG2Mtf4SVOgAZLpm7GNuQ][CRXJ-MONITOR-2][inet[/10.60.9.5:9301]]{rack_id=rack_node_5,
max_local_storage_nodes=2}], master [null]
--> target
[[es_node_5_1][nwkFlS5uQGSbdYjzuWKUrg][CRXJ-MONITOR-2][inet[/10.60.9.5:9300]]{rack_id=rack_node_5,
max_local_storage_nodes=2}], master [null]
--> target
[[es_node_4_1][w58vSDleSS2geuwMrChlxg][CRXJ-MONITOR-1][inet[/10.60.9.4:9300]]{rack_id=rack_node_4,
max_local_storage_nodes=2}], master [null]
[2015-04-01 11:25:16,422][DEBUG][cluster.service ] [es_node_4_2]
processing [zen-disco-join (elected_as_master)]: execute
[2015-04-01 11:25:16,425][DEBUG][cluster.service ] [es_node_4_2]
cluster state updated, version [1], source [zen-disco-join
(elected_as_master)]
[2015-04-01 11:25:16,427][INFO ][cluster.service ] [es_node_4_2]
new_master
[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}, reason: zen-disco-join (elected_as_master)
[2015-04-01 11:25:16,436][DEBUG][transport.netty ] [es_node_4_2]
connected to node
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}]
[2015-04-01 11:25:16,437][DEBUG][cluster.service ] [es_node_4_2]
publishing cluster state version 1
[2015-04-01 11:25:16,437][DEBUG][cluster.service ] [es_node_4_2]
set local cluster state to version 1
[2015-04-01 11:25:16,441][DEBUG][river.cluster ] [es_node_4_2]
processing [reroute_rivers_node_changed]: execute
[2015-04-01 11:25:16,442][DEBUG][river.cluster ] [es_node_4_2]
processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-04-01 11:25:16,442][DEBUG][cluster.service ] [es_node_4_2]
processing [zen-disco-join (elected_as_master)]: done applying updated
cluster_state (version: 1)
[2015-04-01 11:25:16,443][INFO ][discovery ] [es_node_4_2]
elasticsearch_log/CoY0ysSrTlqtSxJ6iS6wWg
[2015-04-01 11:25:16,443][DEBUG][gateway ] [es_node_4_2]
not recovering from gateway, nodes_size (data+master) [1] <
recover_after_nodes [4]
[2015-04-01 11:25:16,490][INFO ][http ] [es_node_4_2]
bound_address {inet[/0:0:0:0:0:0:0:0:9201]}, publish_address
{inet[/10.60.9.4:9201]}
[2015-04-01 11:25:16,493][DEBUG][cluster.service ] [es_node_4_2]
processing [updating local node id]: execute
[2015-04-01 11:25:16,493][DEBUG][cluster.service ] [es_node_4_2]
cluster state updated, version [2], source [updating local node id]
[2015-04-01 11:25:16,493][DEBUG][cluster.service ] [es_node_4_2]
publishing cluster state version 2
[2015-04-01 11:25:16,496][DEBUG][cluster.service ] [es_node_4_2]
set local cluster state to version 2
[2015-04-01 11:25:16,497][DEBUG][river.cluster ] [es_node_4_2]
processing [reroute_rivers_node_changed]: execute
[2015-04-01 11:25:16,497][DEBUG][gateway ] [es_node_4_2]
not recovering from gateway, nodes_size (data+master) [1] <
recover_after_nodes [4]
[2015-04-01 11:25:16,497][DEBUG][river.cluster ] [es_node_4_2]
processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-04-01 11:25:16,497][DEBUG][cluster.service ] [es_node_4_2]
processing [updating local node id]: done applying updated cluster_state
(version: 2)
[2015-04-01 11:25:16,497][INFO ][node ] [es_node_4_2]
started
[2015-04-01 11:25:16,520][DEBUG][transport.netty ] [es_node_4_2]
connected to node
[[es_node_5_2][UhG2Mtf4SVOgAZLpm7GNuQ][CRXJ-MONITOR-2][inet[/10.60.9.5:9301]]{rack_id=rack_node_5,
max_local_storage_nodes=2}]
[2015-04-01 11:25:16,531][DEBUG][cluster.service ] [es_node_4_2]
processing [zen-disco-receive(join from
node[[es_node_5_2][UhG2Mtf4SVOgAZLpm7GNuQ][CRXJ-MONITOR-2][inet[/10.60.9.5:9301]]{rack_id=rack_node_5,
max_local_storage_nodes=2}])]: execute
[2015-04-01 11:25:16,532][DEBUG][cluster.service ] [es_node_4_2]
cluster state updated, version [3], source [zen-disco-receive(join from
node[[es_node_5_2][UhG2Mtf4SVOgAZLpm7GNuQ][CRXJ-MONITOR-2][inet[/10.60.9.5:9301]]{rack_id=rack_node_5,
max_local_storage_nodes=2}])]
[2015-04-01 11:25:16,532][INFO ][cluster.service ] [es_node_4_2]
added
{[es_node_5_2][UhG2Mtf4SVOgAZLpm7GNuQ][CRXJ-MONITOR-2][inet[/10.60.9.5:9301]]{rack_id=rack_node_5,
max_local_storage_nodes=2},}, reason: zen-disco-receive(join from
node[[es_node_5_2][UhG2Mtf4SVOgAZLpm7GNuQ][CRXJ-MONITOR-2][inet[/10.60.9.5:9301]]{rack_id=rack_node_5,
max_local_storage_nodes=2}])
[2015-04-01 11:25:16,532][DEBUG][cluster.service ] [es_node_4_2]
publishing cluster state version 3
[2015-04-01 11:25:16,570][DEBUG][cluster.service ] [es_node_4_2]
set local cluster state to version 3
[2015-04-01 11:25:16,570][DEBUG][cluster ] [es_node_4_2]
data node was added, retrieving new cluster info
[2015-04-01 11:25:16,572][DEBUG][gateway ] [es_node_4_2]
not recovering from gateway, nodes_size (data+master) [2] <
recover_after_nodes [4]
[2015-04-01 11:25:16,572][DEBUG][river.cluster ] [es_node_4_2]
processing [reroute_rivers_node_changed]: execute
[2015-04-01 11:25:16,572][DEBUG][cluster.service ] [es_node_4_2]
processing [zen-disco-receive(join from
node[[es_node_5_2][UhG2Mtf4SVOgAZLpm7GNuQ][CRXJ-MONITOR-2][inet[/10.60.9.5:9301]]{rack_id=rack_node_5,
max_local_storage_nodes=2}])]: done applying updated cluster_state
(version: 3)
[2015-04-01 11:25:16,572][DEBUG][river.cluster ] [es_node_4_2]
processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-04-01 11:25:16,581][DEBUG][transport.netty ] [es_node_4_1]
disconnected from [[#zen_unicast_4#][CRXJ-MONITOR-1][inet[/10.60.9.5:9301]]]
[2015-04-01 11:25:16,591][ERROR][cluster ] [es_node_4_2]
Failed to execute IndicesStatsAction for ClusterInfoUpdateJob
org.elasticsearch.cluster.block.ClusterBlockException: blocked by:
[SERVICE_UNAVAILABLE/1/state not recovered / initialized];
at
org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:138)
at
org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:89)
at
org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:53)
at
org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction.<init>(TransportBroadcastOperationAction.java:124)
at
org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction.doExecute(TransportBroadcastOperationAction.java:75)
at
org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction.doExecute(TransportBroadcastOperationAction.java:46)
at
org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:63)
at
org.elasticsearch.cluster.InternalClusterInfoService$ClusterInfoUpdateJob.run(InternalClusterInfoService.java:293)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
[2015-04-01 11:25:16,600][DEBUG][transport.netty ] [es_node_4_1]
disconnected from
[[es_node_4_1][w58vSDleSS2geuwMrChlxg][CRXJ-MONITOR-1][inet[/10.60.9.4:9300]]{rack_id=rack_node_4,
max_local_storage_nodes=2}]
[2015-04-01 11:25:16,601][DEBUG][transport.netty ] [es_node_4_1]
disconnected from [[#zen_unicast_3#][CRXJ-MONITOR-1][inet[/10.60.9.5:9300]]]
[2015-04-01 11:25:16,602][DEBUG][transport.netty ] [es_node_4_1]
disconnected from [[#zen_unicast_2#][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]]
[2015-04-01 11:25:16,603][DEBUG][discovery.zen ] [es_node_4_1]
filtered ping responses: (filter_client[true], filter_data[false])
--> target
[[es_node_5_2][UhG2Mtf4SVOgAZLpm7GNuQ][CRXJ-MONITOR-2][inet[/10.60.9.5:9301]]{rack_id=rack_node_5,
max_local_storage_nodes=2}], master
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}]
--> target
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}], master
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}]
--> target
[[es_node_5_1][nwkFlS5uQGSbdYjzuWKUrg][CRXJ-MONITOR-2][inet[/10.60.9.5:9300]]{rack_id=rack_node_5,
max_local_storage_nodes=2}], master [null]
[2015-04-01 11:25:16,615][DEBUG][transport.netty ] [es_node_4_1]
connected to node
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}]
[2015-04-01 11:25:16,628][DEBUG][transport.netty ] [es_node_4_2]
connected to node
[[es_node_4_1][w58vSDleSS2geuwMrChlxg][CRXJ-MONITOR-1][inet[/10.60.9.4:9300]]{rack_id=rack_node_4,
max_local_storage_nodes=2}]
[2015-04-01 11:25:16,636][DEBUG][cluster.service ] [es_node_4_2]
processing [zen-disco-receive(join from
node[[es_node_4_1][w58vSDleSS2geuwMrChlxg][CRXJ-MONITOR-1][inet[/10.60.9.4:9300]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])]: execute
[2015-04-01 11:25:16,636][DEBUG][cluster.service ] [es_node_4_2]
cluster state updated, version [4], source [zen-disco-receive(join from
node[[es_node_4_1][w58vSDleSS2geuwMrChlxg][CRXJ-MONITOR-1][inet[/10.60.9.4:9300]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])]
[2015-04-01 11:25:16,636][INFO ][cluster.service ] [es_node_4_2]
added
{[es_node_4_1][w58vSDleSS2geuwMrChlxg][CRXJ-MONITOR-1][inet[/10.60.9.4:9300]]{rack_id=rack_node_4,
max_local_storage_nodes=2},}, reason: zen-disco-receive(join from
node[[es_node_4_1][w58vSDleSS2geuwMrChlxg][CRXJ-MONITOR-1][inet[/10.60.9.4:9300]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])
[2015-04-01 11:25:16,637][DEBUG][cluster.service ] [es_node_4_2]
publishing cluster state version 4
[2015-04-01 11:25:16,638][DEBUG][discovery.zen.fd ] [es_node_4_1]
[master] starting fault detection against master
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}], reason [initial_join]
[2015-04-01 11:25:16,646][DEBUG][discovery.zen.publish ] [es_node_4_1]
received cluster state version 4
[2015-04-01 11:25:16,648][DEBUG][discovery.zen ] [es_node_4_1]
received cluster state from
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}] which is also master but with cluster name
[Cluster [elasticsearch_log]]
[2015-04-01 11:25:16,656][DEBUG][cluster.service ] [es_node_4_1]
processing [zen-disco-receive(from master
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])]: execute
[2015-04-01 11:25:16,658][DEBUG][cluster.service ] [es_node_4_1]
got first state from fresh master [CoY0ysSrTlqtSxJ6iS6wWg]
[2015-04-01 11:25:16,658][DEBUG][cluster.service ] [es_node_4_1]
cluster state updated, version [4], source [zen-disco-receive(from master
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])]
[2015-04-01 11:25:16,660][INFO ][cluster.service ] [es_node_4_1]
detected_master
[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}, added
{[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2},[es_node_5_2][UhG2Mtf4SVOgAZLpm7GNuQ][CRXJ-MONITOR-2][inet[/10.60.9.5:9301]]{rack_id=rack_node_5,
max_local_storage_nodes=2},}, reason: zen-disco-receive(from master
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])
[2015-04-01 11:25:16,671][DEBUG][transport.netty ] [es_node_4_1]
connected to node
[[es_node_4_1][w58vSDleSS2geuwMrChlxg][CRXJ-MONITOR-1][inet[/10.60.9.4:9300]]{rack_id=rack_node_4,
max_local_storage_nodes=2}]
[2015-04-01 11:25:16,675][DEBUG][transport.netty ] [es_node_4_1]
connected to node
[[es_node_5_2][UhG2Mtf4SVOgAZLpm7GNuQ][CRXJ-MONITOR-2][inet[/10.60.9.5:9301]]{rack_id=rack_node_5,
max_local_storage_nodes=2}]
[2015-04-01 11:25:16,675][DEBUG][cluster.service ] [es_node_4_1]
set local cluster state to version 4
[2015-04-01 11:25:16,676][DEBUG][cluster.service ] [es_node_4_1]
processing [zen-disco-receive(from master
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])]: done applying updated cluster_state
(version: 4)
[2015-04-01 11:25:16,676][INFO ][discovery ] [es_node_4_1]
elasticsearch_log/w58vSDleSS2geuwMrChlxg
[2015-04-01 11:25:16,677][DEBUG][cluster.service ] [es_node_4_2]
set local cluster state to version 4
[2015-04-01 11:25:16,678][DEBUG][cluster ] [es_node_4_2]
data node was added, retrieving new cluster info
[2015-04-01 11:25:16,678][DEBUG][river.cluster ] [es_node_4_2]
processing [reroute_rivers_node_changed]: execute
[2015-04-01 11:25:16,678][DEBUG][gateway ] [es_node_4_2]
not recovering from gateway, nodes_size (data+master) [3] <
recover_after_nodes [4]
[2015-04-01 11:25:16,678][DEBUG][river.cluster ] [es_node_4_2]
processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-04-01 11:25:16,678][DEBUG][cluster.service ] [es_node_4_2]
processing [zen-disco-receive(join from
node[[es_node_4_1][w58vSDleSS2geuwMrChlxg][CRXJ-MONITOR-1][inet[/10.60.9.4:9300]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])]: done applying updated cluster_state
(version: 4)
[2015-04-01 11:25:16,679][ERROR][cluster ] [es_node_4_2]
Failed to execute IndicesStatsAction for ClusterInfoUpdateJob
org.elasticsearch.cluster.block.ClusterBlockException: blocked by:
[SERVICE_UNAVAILABLE/1/state not recovered / initialized];
at
org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:138)
at
org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:89)
at
org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:53)
at
org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction.<init>(TransportBroadcastOperationAction.java:124)
at
org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction.doExecute(TransportBroadcastOperationAction.java:75)
at
org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction.doExecute(TransportBroadcastOperationAction.java:46)
at
org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:63)
at
org.elasticsearch.cluster.InternalClusterInfoService$ClusterInfoUpdateJob.run(InternalClusterInfoService.java:293)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
[2015-04-01 11:25:16,723][INFO ][http ] [es_node_4_1]
bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address
{inet[/10.60.9.4:9200]}
[2015-04-01 11:25:16,725][DEBUG][cluster.service ] [es_node_4_1]
processing [updating local node id]: execute
[2015-04-01 11:25:16,726][DEBUG][cluster.service ] [es_node_4_1]
cluster state updated, version [4], source [updating local node id]
[2015-04-01 11:25:16,726][DEBUG][cluster.service ] [es_node_4_1]
set local cluster state to version 4
[2015-04-01 11:25:16,726][DEBUG][cluster.service ] [es_node_4_1]
processing [updating local node id]: done applying updated cluster_state
(version: 4)
[2015-04-01 11:25:16,726][INFO ][node ] [es_node_4_1]
started
[2015-04-01 11:25:18,312][DEBUG][transport.netty ] [es_node_4_2]
connected to node
[[es_node_5_1][nwkFlS5uQGSbdYjzuWKUrg][CRXJ-MONITOR-2][inet[/10.60.9.5:9300]]{rack_id=rack_node_5,
max_local_storage_nodes=2}]
[2015-04-01 11:25:18,318][DEBUG][cluster.service ] [es_node_4_2]
processing [zen-disco-receive(join from
node[[es_node_5_1][nwkFlS5uQGSbdYjzuWKUrg][CRXJ-MONITOR-2][inet[/10.60.9.5:9300]]{rack_id=rack_node_5,
max_local_storage_nodes=2}])]: execute
[2015-04-01 11:25:18,319][DEBUG][cluster.service ] [es_node_4_2]
cluster state updated, version [5], source [zen-disco-receive(join from
node[[es_node_5_1][nwkFlS5uQGSbdYjzuWKUrg][CRXJ-MONITOR-2][inet[/10.60.9.5:9300]]{rack_id=rack_node_5,
max_local_storage_nodes=2}])]
[2015-04-01 11:25:18,319][INFO ][cluster.service ] [es_node_4_2]
added
{[es_node_5_1][nwkFlS5uQGSbdYjzuWKUrg][CRXJ-MONITOR-2][inet[/10.60.9.5:9300]]{rack_id=rack_node_5,
max_local_storage_nodes=2},}, reason: zen-disco-receive(join from
node[[es_node_5_1][nwkFlS5uQGSbdYjzuWKUrg][CRXJ-MONITOR-2][inet[/10.60.9.5:9300]]{rack_id=rack_node_5,
max_local_storage_nodes=2}])
[2015-04-01 11:25:18,320][DEBUG][cluster.service ] [es_node_4_2]
publishing cluster state version 5
[2015-04-01 11:25:18,322][DEBUG][discovery.zen.publish ] [es_node_4_1]
received cluster state version 5
[2015-04-01 11:25:18,323][DEBUG][discovery.zen ] [es_node_4_1]
received cluster state from
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}] which is also master but with cluster name
[Cluster [elasticsearch_log]]
[2015-04-01 11:25:18,323][DEBUG][cluster.service ] [es_node_4_1]
processing [zen-disco-receive(from master
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])]: execute
[2015-04-01 11:25:18,324][DEBUG][cluster.service ] [es_node_4_1]
cluster state updated, version [5], source [zen-disco-receive(from master
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])]
[2015-04-01 11:25:18,324][INFO ][cluster.service ] [es_node_4_1]
added
{[es_node_5_1][nwkFlS5uQGSbdYjzuWKUrg][CRXJ-MONITOR-2][inet[/10.60.9.5:9300]]{rack_id=rack_node_5,
max_local_storage_nodes=2},}, reason: zen-disco-receive(from master
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])
[2015-04-01 11:25:18,329][DEBUG][transport.netty ] [es_node_4_1]
connected to node
[[es_node_5_1][nwkFlS5uQGSbdYjzuWKUrg][CRXJ-MONITOR-2][inet[/10.60.9.5:9300]]{rack_id=rack_node_5,
max_local_storage_nodes=2}]
[2015-04-01 11:25:18,329][DEBUG][cluster.service ] [es_node_4_1]
set local cluster state to version 5
[2015-04-01 11:25:18,329][DEBUG][cluster.service ] [es_node_4_1]
processing [zen-disco-receive(from master
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])]: done applying updated cluster_state
(version: 5)
[2015-04-01 11:25:18,356][DEBUG][cluster.service ] [es_node_4_2]
set local cluster state to version 5
[2015-04-01 11:25:18,356][DEBUG][cluster ] [es_node_4_2]
data node was added, retrieving new cluster info
[2015-04-01 11:25:18,357][DEBUG][river.cluster ] [es_node_4_2]
processing [reroute_rivers_node_changed]: execute
[2015-04-01 11:25:18,357][DEBUG][river.cluster ] [es_node_4_2]
processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-04-01 11:25:18,359][ERROR][cluster ] [es_node_4_2]
Failed to execute IndicesStatsAction for ClusterInfoUpdateJob
org.elasticsearch.cluster.block.ClusterBlockException: blocked by:
[SERVICE_UNAVAILABLE/1/state not recovered / initialized];
at
org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:138)
at
org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:89)
at
org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:53)
at
org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction.<init>(TransportBroadcastOperationAction.java:124)
at
org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction.doExecute(TransportBroadcastOperationAction.java:75)
at
org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction.doExecute(TransportBroadcastOperationAction.java:46)
at
org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:63)
at
org.elasticsearch.cluster.InternalClusterInfoService$ClusterInfoUpdateJob.run(InternalClusterInfoService.java:293)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
[2015-04-01 11:25:18,360][DEBUG][cluster.service ] [es_node_4_2]
processing [zen-disco-receive(join from
node[[es_node_5_1][nwkFlS5uQGSbdYjzuWKUrg][CRXJ-MONITOR-2][inet[/10.60.9.5:9300]]{rack_id=rack_node_5,
max_local_storage_nodes=2}])]: done applying updated cluster_state
(version: 5)
[2015-04-01 11:25:18,650][DEBUG][cluster.service ] [es_node_4_2]
processing [local-gateway-elected-state]: execute
[2015-04-01 11:25:18,676][DEBUG][cluster.service ] [es_node_4_2]
cluster state updated, version [6], source [local-gateway-elected-state]
[2015-04-01 11:25:18,677][DEBUG][cluster.service ] [es_node_4_2]
publishing cluster state version 6
[2015-04-01 11:25:18,713][DEBUG][discovery.zen.publish ] [es_node_4_1]
received cluster state version 6
[2015-04-01 11:25:18,713][DEBUG][discovery.zen ] [es_node_4_1]
received cluster state from
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}] which is also master but with cluster name
[Cluster [elasticsearch_log]]
[2015-04-01 11:25:18,714][DEBUG][cluster.service ] [es_node_4_1]
processing [zen-disco-receive(from master
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])]: execute
[2015-04-01 11:25:18,718][DEBUG][cluster.service ] [es_node_4_1]
cluster state updated, version [6], source [zen-disco-receive(from master
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])]
[2015-04-01 11:25:18,718][DEBUG][cluster.service ] [es_node_4_1]
set local cluster state to version 6
[2015-04-01 11:25:19,247][DEBUG][cluster.service ] [es_node_4_1]
processing [zen-disco-receive(from master
[[es_node_4_2][CoY0ysSrTlqtSxJ6iS6wWg][CRXJ-MONITOR-1][inet[/10.60.9.4:9301]]{rack_id=rack_node_4,
max_local_storage_nodes=2}])]: done applying updated cluster_state
(version: 6)
[2015-04-01 11:25:19,302][DEBUG][cluster.service ] [es_node_4_2]
set local cluster state to version 6
[2015-04-01 11:25:19,311][DEBUG][river.cluster ] [es_node_4_2]
processing [reroute_rivers_node_changed]: execute
[2015-04-01 11:25:19,312][DEBUG][river.cluster ] [es_node_4_2]
processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-04-01 11:25:19,579][INFO ][gateway ] [es_node_4_2]
recovered [84] indices into cluster_state
[2015-04-01 11:25:19,579][DEBUG][cluster.service ] [es_node_4_2]
processing [local-gateway-elected-state]: done applying updated
cluster_state (version: 6)
[2015-04-01 11:25:26,440][DEBUG][cluster.service ] [es_node_4_2]
processing [routing-table-updater]: execute
[2015-04-01 11:25:26,441][DEBUG][cluster.service ] [es_node_4_2]
processing [routing-table-updater]: no change in cluster_state
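The relevant sequence in the log above: the master defers recovery ("not recovering from gateway, nodes_size (data+master) [1] < recover_after_nodes [4]"), the ClusterBlockException is raised by the master's own periodic ClusterInfoUpdateJob while that block is still set, and the errors stop once "recovered [84] indices into cluster_state" appears. For example, that sequence can be pulled out of the log file with something like this (log file name assumed from the default ${cluster.name}.log convention):

grep -E 'not recovering from gateway|recovered \[.+\] indices|ClusterBlockException' /home/nat/elasticsearch-1.1.2/logs/elasticsearch_log.log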

elasticsearch.yml
index.merge.policy.max_merged_segment: 1gb
index.merge.policy.segments_per_tier: 4
index.merge.policy.max_merge_at_once: 4
index.merge.policy.max_merge_at_once_explicit: 4
index.merge.scheduler.max_thread_count: 1
indices.memory.index_buffer_size: 33%
indices.store.throttle.type: none
threadpool.merge.type: fixed
threadpool.merge.size: 4
threadpool.merge.queue_size: 32
### bulk thread pool type: fixed
threadpool.bulk.type: fixed
#### maximum bulk thread count
threadpool.bulk.size: 20
#### maximum bulk queue size
threadpool.bulk.queue_size: 32
bootstrap.mlockall: true
node.max_local_storage_nodes: 2
cluster.name: elasticsearch_log

http.port: 9201
transport.tcp.port: 9301
node.name: es_node_4_2
path.data: /home/nat/esdata.d
node.rack_id: rack_node_4
cluster.routing.allocation.awareness.attributes: rack_id
#######################################
### zen: avoid split-brain
discovery.zen.minimum_master_nodes: 3
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts:
["10.60.9.4:9300","10.60.9.4:9301","10.60.9.5:9300","10.60.9.5:9301"]
### field data cache
indices.fielddata.cache.size: 25%
### shard allocation
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 97
cluster.routing.allocation.disk.watermark.high: 99
### minimum number of nodes before recovery starts
gateway.recover_after_nodes: 4
### index store type
index.store.type: niofs
### data compression
index.store.compress.stored: true
index.store.compress.tv: true
######################################
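With gateway.recover_after_nodes: 4 on this four-node cluster, the master keeps the state-not-recovered block until all four (data+master) nodes have joined. A sketch of how a restart script could wait for recovery before issuing requests (HTTP port 9201 as set above):

# block for up to 120s until all 4 nodes have joined and the cluster leaves red
curl -XGET 'http://10.60.9.4:9201/_cluster/health?wait_for_nodes=4&wait_for_status=yellow&timeout=120s&pretty'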

I have tried various things: opening TCP ports 9300-9400, turning off the firewall, double-checking the unicast discovery hosts, and so on, but none of them helped.
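For example, the port checks were along these lines (a sketch; hosts and ports from the unicast list in the config):

# verify the transport ports accept TCP connections between the two hosts
nc -z -v 10.60.9.4 9300
nc -z -v 10.60.9.4 9301
nc -z -v 10.60.9.5 9300
nc -z -v 10.60.9.5 9301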

Does anybody have any suggestions about this?
