Forced shard awareness question


I am testing the forced shard awareness configuration following this section of the documentation

Please see below my setup:

  • 3 dedicated master nodes
  • 4 dedicated data nodes
  • X-Pack enabled
  • Elasticsearch version 6.1

Master configuration:

  ## Shard Awareness ##

  cluster.routing.allocation.awareness.attributes: "%{location}"
  cluster.routing.allocation.awareness.force.%{location}.values: "OVHSG,OVHRB"
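
Since %{location} is a Puppet variable (it resolves to "ovh" later in the thread), the resolved elasticsearch.yml on the masters would presumably read:

```yaml
# Resolved master configuration, assuming %{location} resolves to "ovh"
cluster.routing.allocation.awareness.attributes: ovh
cluster.routing.allocation.awareness.force.ovh.values: OVHSG,OVHRB
```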

Data configuration:

  • On 2 data nodes:
  node.attr.%{location}: "OVHSG"
  • On the remaining 2 data nodes:
  node.attr.%{location}: "OVHRB"
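
With the same Puppet interpolation, the data nodes would presumably end up with:

```yaml
# Resolved data-node configuration, assuming %{location} resolves to "ovh"
# On 2 data nodes:
node.attr.ovh: OVHSG
# On the remaining 2 data nodes:
node.attr.ovh: OVHRB
```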

When I set up this configuration, all the existing indexes are correctly reallocated to the zones 'OVHSG' and 'OVHRB', but new time-based indexes are not.
Please see below:

For the replica zone: (screenshot omitted)

For the primary zone: (screenshot omitted)

Maybe I misunderstand the meaning of forced awareness, but how can I force new indexes to be allocated to the correct zone?

This is the mechanism we use when implementing a hot/warm architecture, so the example in this blog post may help. I suspect you may have missed updating the index template used for new indices with the appropriate settings.
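
In the hot/warm pattern, the index template pins new indices to nodes carrying a given attribute value via a single-value `require` filter. A sketch of that kind of template, using this thread's attribute name and a `filebeat-*` pattern as assumptions:

```json
PUT _template/filebeat
{
  "order": 1,
  "index_patterns": ["filebeat-*"],
  "settings": {
    "index.routing.allocation.require.ovh": "OVHSG"
  }
}
```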

Hello Christian, thank you for your quick reply. I will work on it and give you my feedback.


Unfortunately I didn't succeed in making it work.
My case is not exactly like the hot/warm architecture, because I want to permanently split each index across 2 zones at index creation.

Why do you have this in your config? What is it you are trying to achieve?


%{location} is a Puppet variable; it resolves to the cluster's datacenter location, which in my case is "ovh".
What I want is to test forced shard allocation on my cluster; the objective is to control which nodes will hold a specific type of shard.

I changed my filebeat template to include the following lines:

"filebeat": {
  "order": 1,
  "index_patterns": ["filebeat-*"],
  "settings": {
    "index": {
      "routing": {
        "allocation": {
          "require": {
            "ovh": "OVHRB,OVHSG"
          }
        }
      }
    }
  }
}

The result is a cluster in red state, and all new logs are rejected:

[2018-02-06T11:08:14,095][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[filebeat-2018.02.06][2] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[filebeat-2018.02.06][2]] containing [index {[filebeat-2018.02.06][doc][WP-VamEBjUhj3xZVcHKp], source[{"@timestamp":"2018-02-06T10:00:09.951Z","offset":63643,"@version":"1","beat":{"name":"frovhlogstash01.talentsoft.local","hostname":"frovhlogstash01.talentsoft.local","version":"6.0.0"},"host":"frovhlogstash01.talentsoft.local","prospector":{"type":"log"},"source":"/var/log/auth.log","message":"Feb 6 11:00:01 frovhlogstash01 CRON[1830]: pam_unix(cron:session): session opened for user root by (uid=0)","fields":{"type":"authlog"},"tags":["beats_input_codec_plain_applied"]}]}]]"})
[2018-02-06T11:08:14,096][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[filebeat-2018.02.06][1] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[filebeat-2018.02.06][1]] containing [index {[filebeat-2018.02.06][doc][Wf-VamEBjUhj3xZVcHKp], source[{"@timestamp":"2018-02-06T10:00:09.951Z","offset":63740,"@version":"1","beat":{"name":"frovhlogstash01.talentsoft.local","hostname":"frovhlogstash01.talentsoft.local","version":"6.0.0"},"host":"frovhlogstash01.talentsoft.local","prospector":{"type":"log"},"source":"/var/log/auth.log","message":"Feb 6 11:00:03 frovhlogstash01 CRON[1830]: pam_unix(cron:session): session closed for user root","fields":{"type":"authlog"},"tags":["beats_input_codec_plain_applied"]}]}]]"})
[2018-02-06T11:08:14,096][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[filebeat-2018.02.06][3] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[filebeat-2018.02.06][3]] containing [index {[filebeat-2018.02.06][doc][Wv-VamEBjUhj3xZVcHKp], source[{"@timestamp":"2018-02-06T10:00:09.952Z","offset":160446,"@version":"1","beat":{"name":"frovhlogstash01.talentsoft.local","hostname":"frovhlogstash01.talentsoft.local","version":"6.0.0"},"host":"frovhlogstash01.talentsoft.local","prospector":{"type":"log"},"source":"/var/log/syslog","message":"Feb 6 11:00:01 frovhlogstash01 CRON[1831]: (root) CMD (puppet facts --render-as yaml |sed 's#!ruby/object:Puppet::Node::Facts##g' >/etc/puppetlabs/mcollective/facts.yaml
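
When primaries stay unassigned like this, the cluster allocation explain API (available in 6.x) reports which allocation rule is blocking them. A quick check, assuming access to the cluster's REST endpoint (index and shard number taken from the log above):

```json
GET _cluster/allocation/explain
{
  "index": "filebeat-2018.02.06",
  "shard": 2,
  "primary": true
}
```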

On the master, I deleted the index filebeat-2018.02.06 and recreated it with the new template settings:

[2018-02-06T11:07:06,784][INFO ][o.e.c.m.MetaDataDeleteIndexService] [] [filebeat-2018.02.06/_GPlBhMVRR6XHvfwMtQCXA] deleting index
[2018-02-06T11:07:09,085][INFO ][o.e.c.m.MetaDataCreateIndexService] [] [filebeat-2018.02.06] creating index, cause [api], templates [filebeat], shards [5]/[1], mappings [doc]
[2018-02-06T11:07:09,208][INFO ][o.e.c.r.a.AllocationService] [] Cluster health status changed from [YELLOW] to [RED] (reason: [index [filebeat-2018.02.06] created]).

When I change the config to the line below, it works, but only for one type of shard. I can't set 2 attribute values in the routing allocation, so how should the index template be configured?

"ovh": "OVHRB"
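
An editorial note for readers with the same question (not a reply from the original thread): `index.routing.allocation.require.{attr}` only matches nodes whose attribute carries *all* of the listed values, which is why a single value works and "OVHRB,OVHSG" does not. `include` matches *any* of the listed values, and with forced awareness already configured at the cluster level it is the awareness setting, not the filter, that spreads the copies of each shard across the two zones. A sketch of template settings that would let an index use both zones:

```json
{
  "settings": {
    "index.routing.allocation.include.ovh": "OVHRB,OVHSG"
  }
}
```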
