I ran a health check on my cluster and it is showing RED, with plenty of unassigned shards. I tried restarting, but it still won't clear the unassigned shards. I am using ES version 2.4:
```
{
  "cluster_name": "test-clu",
  "status": "red",
  "timed_out": false,
  "number_of_nodes": 5,
  "number_of_data_nodes": 2,
  "active_primary_shards": 355,
  "active_shards": 610,
  "relocating_shards": 0,
  "initializing_shards": 2,
  "unassigned_shards": 110,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 2,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 20,
  "active_shards_percent_as_number": 84.48753462603878
}
```
Please suggest how I can get my cluster status back to "green".

dadoonet (David Pilato)
April 27, 2017, 9:40am (#2)
Please format your code using the </> icon as explained in this guide. It will make your post more readable.
Or use markdown style like:
```
CODE
```
It can take some time before everything gets recovered.
Check the pending tasks.
Probably you have to wait.
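For reference, a minimal sketch of checking this from the command line, assuming the cluster listens on localhost:9200 (adjust the host and port for your setup):

```shell
# Pending cluster-level tasks (the same data as "number_of_pending_tasks"
# in the health output above):
curl -s 'localhost:9200/_cluster/pending_tasks?pretty'

# List all shards and filter for the unassigned ones:
curl -s 'localhost:9200/_cat/shards?v' | grep UNASSIGNED
```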
That said, out of curiosity, why do you have 5 nodes but only 2 data nodes?

I am not sure why they used 5 nodes with only 2 data nodes; I started working on this recently. I have the pending tasks below:
```
{
  "tasks": [
    {
      "insert_order": 1395911,
      "priority": "HIGH",
      "source": "shard-failed ([nedi-170313][3], node[hkXE-ZVWQju-8SCwaZGEWw], [R], v[155530], s[INITIALIZING], a[id=bD2SUxXXSkuUScOmDVrkuw], unassigned_info[[reason=ALLOCATION_FAILED], at[2017-04-27T09:50:38.152Z], details[failed to create shard, failure ElasticsearchException[failed to create shard]; nested: NotSerializableExceptionWrapper[file_system_exception: /data/nedi-esdata-02/nedi-clu/nodes/0/indices/nedi-170313/3/index: Too many open files]; ]]), message [failed recovery]",
      "executing": true,
      "time_in_queue_millis": 57,
      "time_in_queue": "57ms"
    },
    {
      "insert_order": 1395912,
      "priority": "HIGH",
      "source": "cluster_reroute(async_shard_fetch)",
      "executing": false,
      "time_in_queue_millis": 52,
      "time_in_queue": "52ms"
    },
    {
      "insert_order": 1395913,
      "priority": "HIGH",
      "source": "shard-failed ([nedi-170309][0], node[hkXE-ZVWQju-8SCwaZGEWw], [R], v[147220], s[INITIALIZING], a[id=aTzdJcJXQrK5M3UR_-Svug], unassigned_info[[reason=ALLOCATION_FAILED], at[2017-04-27T09:50:38.198Z], details[failed to create shard, failure ElasticsearchException[failed to create shard]; nested: NotSerializableExceptionWrapper[file_system_exception: /data/nedi-esdata-02/nedi-clu/nodes/0/indices/nedi-170309/0/index: Too many open files]; ]]), message [failed to create shard]",
      "executing": false,
      "time_in_queue_millis": 15,
      "time_in_queue": "15ms"
    }
  ]
}
```
Please suggest how I can proceed, as I am new to Elasticsearch.

dadoonet (David Pilato)
April 27, 2017, 11:14am (#4)
            
You seem to be running low on resources ("Too many open files" in the failures above).
Either start new nodes, increase the open file descriptors limit, or remove indices you don't need.
Maybe you have too many shards per index.
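A rough sketch of checking and raising the file descriptor limit on a node; the values and paths below are common suggestions, not taken from this thread:

```shell
# Current soft limit for open files in this shell
# (Elasticsearch inherits it if started from here):
ulimit -n

# Raise it for the session before starting Elasticsearch
# (65536 is a commonly recommended minimum):
# ulimit -n 65536

# To persist it, add a line like this to /etc/security/limits.conf
# for the user that runs Elasticsearch:
# elasticsearch  -  nofile  65536

# Verify what the running nodes actually see (ES 2.x node stats API,
# assumes the cluster listens on localhost:9200):
# curl -s 'localhost:9200/_nodes/stats/process?pretty' | grep file_descriptors
```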
system (system) Closed
May 25, 2017, 11:16am (#5)
            
              This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.