Does shutting down the master break the whole cluster's service?

I have a cluster with two nodes. I used the following curl command to shut
down the master node:

$ curl -XPOST 'http://localhost:9200/_cluster/nodes/_master/_shutdown'

I thought ES would automatically elect a new master to replace the failed
one, but that doesn't seem to be what happens.

--

Why do you think that a new master has not been elected?
Can you share the other node's logs?
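
For example, a quick way to check is to ask the cluster state API which node
is currently elected. A sketch, assuming this 0.19-era build includes the
master_node field in the response (I believe it does):

$ curl -XGET 'http://localhost:9200/_cluster/state?pretty=true'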

On January 8, 2013 at 09:59, asoqa asoqa51@gmail.com wrote:


--
David Pilato
http://www.scrutmydocs.org/
http://dev.david.pilato.fr/
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

--

The remaining node's log:
[2013-01-09 09:39:30,517][INFO ][discovery.zen ] [Quasimodo] master_left [[Plague][_6-qEiuHQ76mjwFdTDF9eQ][inet[/10.232.42.205:9300]]], reason [shut_down]
[2013-01-09 09:39:30,518][WARN ][discovery.zen ] [Quasimodo] not enough master nodes after master left (reason = shut_down), current nodes: {[Quasimodo][ts0Kq_c2RoCFwa-UKFI8Tg][inet[/10.232.42.204:9300]],}
[2013-01-09 09:39:30,518][INFO ][cluster.service ] [Quasimodo] removed {[Plague][_6-qEiuHQ76mjwFdTDF9eQ][inet[/10.232.42.205:9300]],}, reason: zen-disco-master_failed ([Plague][_6-qEiuHQ76mjwFdTDF9eQ][inet[/10.232.42.205:9300]])

The master node's log:
[2013-01-09 09:39:27,365][INFO ][action.admin.cluster.node.shutdown] [Plague] [partial_cluster_shutdown]: requested, shutting down [[_6-qEiuHQ76mjwFdTDF9eQ]] in [1s]
[2013-01-09 09:39:28,370][INFO ][action.admin.cluster.node.shutdown] [Plague] shutting down in [200ms]
[2013-01-09 09:39:28,372][INFO ][action.admin.cluster.node.shutdown] [Plague] [partial_cluster_shutdown]: done shutting down [[_6-qEiuHQ76mjwFdTDF9eQ]]
[2013-01-09 09:39:28,572][INFO ][action.admin.cluster.node.shutdown] [Plague] initiating requested shutdown (using service)
[2013-01-09 09:39:30,345][INFO ][node ] [Plague] {0.19.11-SNAPSHOT}[16001]: stopping ...
[2013-01-09 09:39:30,531][INFO ][node ] [Plague] {0.19.11-SNAPSHOT}[16001]: stopped
[2013-01-09 09:39:30,531][INFO ][node ] [Plague] {0.19.11-SNAPSHOT}[16001]: closing ...
[2013-01-09 09:39:30,539][INFO ][node ] [Plague] {0.19.11-SNAPSHOT}[16001]: closed

And I cannot search the index, e.g.:

$ curl -XGET localhost:9200/twitter/_search

On Tuesday, January 8, 2013 at 5:15:19 PM UTC+8, David Pilato wrote:


--

You probably have the discovery.zen.minimum_master_nodes setting set to 2.
So, when the cluster loses the second master-eligible node, it doesn't
elect the only remaining node as master, to avoid a split brain. You can
either remove this setting or add a third master-eligible node to your
cluster.
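
As a rule of thumb, minimum_master_nodes should be a quorum of
master-eligible nodes: (master_eligible / 2) + 1. A minimal sketch of the
relevant line in each node's config/elasticsearch.yml, assuming a
three-node cluster where every node is master eligible:

# config/elasticsearch.yml on each node
# quorum for 3 master-eligible nodes: (3 / 2) + 1 = 2
discovery.zen.minimum_master_nodes: 2

With only two nodes there is no useful quorum: a value of 2 means no
election can happen after either node dies, while a value of 1 would allow
a split brain if the network partitions.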

On Tuesday, January 8, 2013 8:46:34 PM UTC-5, asoqa wrote:


--

I didn't change the default setting. After reading your post, I changed
discovery.zen.minimum_master_nodes to 1 in config/elasticsearch.yml and
restarted Elasticsearch.
Strangely, I found that ES automatically reset
discovery.zen.minimum_master_nodes back to 2. Here is the log:
[2013-01-09 10:31:06,107][INFO ][node ] [Wilson, Sam] {0.19.11-SNAPSHOT}[32214]: started
[2013-01-09 10:31:10,396][INFO ][cluster.service ] [Wilson, Sam] detected_master [Warhawk][h6xkYH-JTT2_g8pSpLKTkA][inet[/10.232.42.205:9300]], added {[Warhawk][h6xkYH-JTT2_g8pSpLKTkA][inet[/10.232.42.205:9300]],}, reason: zen-disco-receive(from master [[Warhawk][h6xkYH-JTT2_g8pSpLKTkA][inet[/10.232.42.205:9300]]])
[2013-01-09 10:31:10,425][INFO ][discovery.zen.elect ] [Wilson, Sam] updating [discovery.zen.minimum_master_nodes] from [1] to [2]
On Wednesday, January 9, 2013 at 9:55:50 AM UTC+8, Igor Motov wrote:


--

Did I miss something or do something wrong?

On Wednesday, January 9, 2013 at 10:35:43 AM UTC+8, asoqa wrote:


--

This setting was probably updated using the Cluster Update Settings API
(http://www.elasticsearch.org/guide/reference/api/admin-cluster-update-settings.html).
What do you get when you run this command:

curl localhost:9200/_cluster/settings

If it says something like:

{"persistent":{"discovery.zen.minimum_master_nodes":"2"},"transient":{}}

run the following command to set minimum_master_nodes back to 1:

curl -XPUT localhost:9200/_cluster/settings -d '{
  "persistent" : {
    "discovery.zen.minimum_master_nodes" : 1
  }
}'
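
To verify the change took effect, you can query the settings again; the
?pretty=true parameter (long supported by the REST layer) just formats the
JSON for readability:

curl localhost:9200/_cluster/settings?pretty=true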

On Wednesday, January 9, 2013 4:53:16 AM UTC-5, asoqa wrote:


--

Thank you for your patience. Problem solved!

On Wednesday, January 9, 2013 at 7:37:06 PM UTC+8, Igor Motov wrote:

--