Cluster Health: Red, Unassigned Shards

Here is my setup.

All servers are running Ubuntu 14.04.

n1 - 12gb ram, 2 cpu, 1tb - (x) master - ( ) data
n2 - 12gb ram, 2 cpu, 1tb - ( ) master - (x) data
n3 - 12gb ram, 2 cpu, 500gb - ( ) master - (x) data
n4 - 12gb ram, 2 cpu, 500gb - ( ) master - (x) data
Logstash/Kibana - 16gb ram, 8 cpu, 6tb

New to setting up a cluster compared to just a single test machine. After setting up my cluster with node 1 as the master and nodes 2, 3, and 4 as master-eligible/data nodes, the cluster health went yellow; I tried restarting and it went to red.
I am fairly certain it has to do with replicas not being set correctly. The guide I used said to set the following in elasticsearch.yml:

index.number_of_replicas: 3

Where 3 was calculated by (# of nodes / 2 + 1)

Is this correct? When I do a cluster health I don't see any replicas at all.

{
"cluster_name" : "SASD",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 4,
"number_of_data_nodes" : 3,
"active_primary_shards" : 10,
"active_shards" : 29,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 13,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 69.04761904761905
}

My index is as follows:

health status index pri rep docs.count docs.deleted store.size pri.store.size
green open .kibana 1 1 7 1 83.1kb 41.5kb
red open firesight-2016.08.19 5 3 145 0 1.6mb 574.7kb
yellow open firesight-2016.08.18 5 3 478 0 2mb 691.2kb

I have just one firewall sending syslog to Logstash that is creating a new index every day called firesight-{date}.
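For context, a daily index name like firesight-{date} usually comes from the index pattern in the Logstash elasticsearch output; a minimal sketch (the host name here is an assumption, since the actual pipeline config isn't shown in this thread):

```
output {
  elasticsearch {
    # hypothetical host; point this at one of your data nodes
    hosts => ["n2:9200"]
    # the date pattern is what produces a new index each day
    index => "firesight-%{+YYYY.MM.dd}"
  }
}
```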

Question 1:
Now that I changed one node to a dedicated master, would this affect how many replicas I need? From what I read, it shouldn't matter at all.

Question 2:
Now that one node is set as the master, do I only need to make configuration changes on that node and they will get pushed to the others, or do I need to edit elasticsearch.yml on every node?

Thanks

It is usually the setting for minimum_master_nodes that is calculated this way. You should make sure this is set on all nodes in order to avoid split-brain scenarios. There is no such required setting for the number of replicas, as that typically varies depending on the use case. If you only have 3 data nodes, you should set the number of replicas no higher than 2: at 2 replicas, every data node already holds a copy of each shard, so any additional replicas can never be assigned and will sit unassigned.
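With four master-eligible nodes, a minimal sketch of the relevant elasticsearch.yml lines (using the 2.x-era Zen discovery setting name, which matches this setup) would be:

```yaml
# elasticsearch.yml -- set identically on every master-eligible node
# quorum of master-eligible nodes: (4 / 2) + 1 = 3, to avoid split brain
discovery.zen.minimum_master_nodes: 3
```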

Okay, thanks for clarifying the setting. I have set minimum_master_nodes to 3 across all 4 nodes in the .yml file. It is still showing red even after a full reboot of all four nodes.

Shouldn't

"number_of_replicas"

show when I do a cluster health?

No, cluster health doesn't report replicas at all.

Use the _cat API for that.
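For example, assuming the default host and port, the per-index and per-shard views look like this:

```shell
# list every index with its primary/replica counts and health
curl 'localhost:9200/_cat/indices?v'

# list every shard, including UNASSIGNED ones and which index they belong to
curl 'localhost:9200/_cat/shards?v'
```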

Thanks warkolm.

That API led me to the root cause. Something was misconfigured during my initial startup, so the indices from the first two days ended up with unassigned shards.

firesight-2016.08.19 0 r UNASSIGNED

After deleting the two indices that contained the unassigned shards, everything went back to green.

Thanks again.

You could have just removed the replicas :slight_smile:
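For the record, dropping the replicas instead of deleting the indices would have been something like this (index names taken from the thread; 2.x-style settings API assumed):

```shell
# set replica count to 0 on the affected indices so the unassigned
# replica shards are removed while the primary data is kept
curl -XPUT 'localhost:9200/firesight-2016.08.18,firesight-2016.08.19/_settings' -d '
{
  "index": { "number_of_replicas": 0 }
}'
```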