Cannot update minimum_master_nodes

Hi there!

When I send an update settings request to /_cluster/settings with minimum_master_nodes, I get the following log entry on my master nodes, and the setting is not applied.

[2015-05-16 21:49:54,425][WARN ][action.admin.cluster.settings] [i-148083f2] ignoring persistent setting [discovery.zen.minimum_master_nodes], not dynamically updateable

I've searched the mailing lists and asked on IRC, but I really cannot tell why this doesn't work. I'm using 1.5.1 on Ubuntu with the EC2 discovery plugin, if that matters.

Kind regards,
Kristian

It's not something you can change persistently on the fly, which is exactly what the error says.

Try passing it as a transient setting.
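
For reference, the transient variant of the request would look something like this (2 is only an example value here):

$ curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "discovery.zen.minimum_master_nodes" : 2
    }
}'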

Hi Mark,

It also gives the same error if I try it as a transient setting. Furthermore, the docs under "Important Configuration Changes" [1] do indeed recommend applying this as a persistent setting, which I believe is the only approach that makes sense anyway, because you definitely want to keep this setting after a cluster restart. If you don't, you risk a split brain after a restart and would most likely lose data.

[1] https://www.elastic.co/guide/en/elasticsearch/guide/master/_important_configuration_changes.html#_minimum_master_nodes

Kind regards,

Kristian

If you apply that to your elasticsearch.yml, it will work.
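
For completeness, that is a single line on each master-eligible node; a minimal sketch, assuming 2 as the quorum for three master-eligible nodes:

# quorum = (master_eligible_nodes / 2) + 1 = (3 / 2) + 1 = 2 (integer division)
discovery.zen.minimum_master_nodes: 2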

Can you paste the call(s) to ES you are trying?

Certainly!

$ curl -XPUT localhost:9200/_cluster/settings -d '{
    "persistent" : {
        "discovery.zen.minimum_master_nodes" : 2
    }
}'

Which returns:

{"acknowledged":true,"persistent":{},"transient":{}}

For the record, this is my setup:

$ curl localhost:9200/_cat/nodes
localhost 127.0.0.1 2  8 0.00 - * i-c7849420
localhost 127.0.0.1 1  7 0.00 - m i-148083f2
localhost 127.0.0.1 5 10 0.00 - - i-57ddceb0
localhost 127.0.0.1 1  2 0.00 d - i-52898ab4
localhost 127.0.0.1 2  7 0.00 - m i-ce8a8928

As it shows, I have three dedicated master nodes, one client node, and one data node.
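
Roughly speaking, the roles in that output come from the usual 1.x settings in elasticsearch.yml; this is a sketch of the typical configuration rather than a paste of my actual files:

# dedicated master nodes (i-c7849420, i-148083f2, i-ce8a8928)
node.master: true
node.data: false

# client node (i-57ddceb0)
node.master: false
node.data: false

# data node (i-52898ab4)
node.master: false
node.data: true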

EDIT: Updating elasticsearch.yml on a single master node and restarting that node doesn't have any effect. I didn't try changing it on all nodes and restarting the entire cluster, though.

Turns out this was a bug in the AWS plugin.