Shards were deleted after adding/removing index.number_of_replicas : 2

Hi All

I have an ES cluster with 3 (data+master) nodes and 1 coordinating node. I wanted to set the default replica count to 2, so I followed the steps below.

The ES cluster had been fine, with more than 100 shards, and had been working well for 3-4 weeks.

  1. Logged into all 4 ES nodes and stopped the Elasticsearch daemon
  2. Added the below line to elasticsearch.yml on all 4 nodes:
    index.number_of_replicas : 2
  3. Restarted the ES nodes.
  4. The ES nodes didn't start. Looked into the logs and found the message specified below.
  5. Removed the index.number_of_replicas : 2 line from the yml file.
  6. Restarted all the ES nodes.
  7. The cluster is up and running.

The minimum_master_nodes setting:
discovery.zen.minimum_master_nodes: 3

My question here is:
When I issue /_cat/indices?v, I see only the .kibana index; the 100+ old ones are gone. Why were the shards deleted?

Error in the log file when ES was started with index.number_of_replicas : 2 in the config file:

[2017-03-31T14:38:54,798][WARN ][o.e.c.s.SettingsModule ] [dev_es_node_3a]


Found index level settings on node level configuration.

Since elasticsearch 5.x index level settings can NOT be set on the nodes
configuration like the elasticsearch.yaml, in system properties or command line
arguments.In order to upgrade all indices the settings must be updated via the
/${index}/_settings API. Unless all settings are dynamic all indices must be closed
in order to apply the upgradeIndices created in the future should use index templates
to set default values.

Please ensure all required values are updated on all indices by executing:

curl -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{
"index.number_of_replicas" : "2"
}'
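For future indices, the message above points at index templates rather than node-level config; a minimal 5.x sketch (the template name default_replicas and the catch-all pattern are just illustrative):

curl -XPUT 'http://localhost:9200/_template/default_replicas' -H 'Content-Type: application/json' -d '{
  "template": "*",
  "settings": {
    "index.number_of_replicas": 2
  }
}'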


Is there anything else that you changed in the config?

No, only one line:
index.number_of_replicas : 2

That's highly unusual.

What does _cat/indices show?
Can you see directories for the original indices on the filesystem?
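For example, something along these lines would show both (the host and the data path are assumptions, adjust them to your setup):

curl -XGET 'http://localhost:9200/_cat/indices?v'
# index directories are named by UUID under the node data path, e.g.:
ls /var/lib/elasticsearch/nodes/0/indices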

If you have 3 master-eligible nodes, this should be set to 2. If you have it set to 3, the cluster will not be able to elect a master once one of the master-eligible nodes goes offline, which will cause problems.
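A minimal sketch of that change in elasticsearch.yml, assuming 3 master-eligible nodes (quorum = 3 / 2 + 1 = 2):

discovery.zen.minimum_master_nodes: 2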

But with the same config, so many indices and shards were created. If one node goes down, then as per the documentation writes will fail, but why were the shards deleted once all the masters were back?

There is no split-brain scenario here...
