Red cluster status in Elasticsearch

For about a week I have had a problem with Elasticsearch. The cluster status is red and I don't know how to get it back to green.

curl -XGET 'localhost:9200/_cluster/health?pretty'
"cluster_name" : "clusterelastic,
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 10,
"number_of_data_nodes" : 7,
"active_primary_shards" : 541,
"active_shards" : 935,
"relocating_shards" : 2,
"initializing_shards" : 3,
"unassigned_shards" : 146,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 1,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 86.25461254612546

curl -XGET 'localhost:9200/_cat/nodes?v'
host heap.percent ram.percent load node.role master name
1 12 91 9.42 d - nod1
2 3 87 0.00 - - kibana
3 35 98 0.01 d - nod2
4 0 4 0.00 d m nod3
5 12 97 1.14 d - nod4
6 38 99 0.81 d - nod5
7 15 93 0.00 - m nod6
8 21 91 3.14 d - nod7
9 79 96 0.22 - * nod8
10 61 96 4.61 d m nod9

curl -XGET 'localhost:9200/_cat/allocation?v'
shards disk.indices disk.used disk.avail disk.total disk.percent node
98 298.7gb 597.9gb 164.2gb 762.1gb 78 nod1
148 375.8gb 456.2gb 862.4gb 1.2tb 34 nod2
6 22.4gb 431.5gb 330.6gb 762.1gb 56 nod3
195 453.7gb 493.3gb 251.3gb 744.7gb 66 nod4
144 347.1gb 386.5gb 383.2gb 769.8gb 50 nod5
152 366.3gb 406gb 363.8gb 769.8gb 52 nod9
195 466.9gb 521.7gb 522.4gb 1tb 49 nod7
146 UNASSIGNED
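Since a large number of shards is unassigned, it may also help to list only the indices that are currently red (this health filter works on the 2.x cat indices API):

curl -XGET 'localhost:9200/_cat/indices?v&health=red'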

Which version of Elasticsearch are you using? Are all nodes on exactly the same version of Elasticsearch? Do you have enough disk space available on all data nodes (<85% utilisation)?
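For reference, a quick way to check both across the cluster (assuming the same localhost:9200 endpoint as in your examples):

curl -XGET 'localhost:9200/_cat/nodes?v&h=name,version'   # Elasticsearch version per node, to confirm they all match
curl -XGET 'localhost:9200/_cat/allocation?v'             # disk.percent per data node; the default low watermark is 85%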

Version is
"number" : "2.4.2",
"lucene_version" : "5.5.2"

and I have available space on all data nodes.

I stopped the Elasticsearch service on the master node and restarted it. I don't know what happened, but the cluster now works normally.
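For what it's worth, when restarting a node it is common practice to disable shard allocation first and re-enable it once the node has rejoined, so the cluster does not start reallocating shards while the node is down. A rough sketch (the restart command depends on how Elasticsearch was installed):

curl -XPUT 'localhost:9200/_cluster/settings' -d '{"transient":{"cluster.routing.allocation.enable":"none"}}'
sudo service elasticsearch restart
# wait for the node to show up again in _cat/nodes, then:
curl -XPUT 'localhost:9200/_cluster/settings' -d '{"transient":{"cluster.routing.allocation.enable":"all"}}'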

How many master eligible nodes do you have?

nod3, nod6, nod8 and nod9 (see the _cat/nodes output above).

Do you have minimum_master_nodes set to 3?

curl -XGET localhost:9200/_cluster/settings
{"persistent":{},"transient":{"cluster":{"routing":{"allocation":{"enable":"all"}}}}}

I was referring to discovery.zen.minimum_master_nodes in elasticsearch.yml.

Yes, it is set to 3.
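For reference, with four master-eligible nodes the quorum is (4 / 2) + 1 = 3, so the setting on each master-eligible node looks like this:

# elasticsearch.yml on every master-eligible node
discovery.zen.minimum_master_nodes: 3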

Excellent. Just wanted to check, as that can have interesting side effects...
