[DEBUG][o.e.a.s.TransportSearchAction] [myhost] All shards failed for phase: [query]

ELK version 6.2.4
Filebeat version 6.2.4
My entire ELK stack was working fine until 1:35 AM IST on 06/26/2018.

Suddenly I can't see any logs in Kibana from June 26, 2018, 1:40 AM IST onwards.
When I open Kibana it shows this error:
Discover: Request to Elasticsearch failed: {"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[]},"status":503}

Elasticsearch logs:
[2018-06-26T03:53:14,851][DEBUG][o.e.a.s.TransportSearchAction] [my-host] All shards failed for phase: [query]

Logstash logs:
[2018-06-26T04:22:08,564][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[index_itg1][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[index_itg1][0]] containing [2] requests]"})
[2018-06-26T04:22:08,564][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>3}

curl -XGET 'localhost:9200/_cluster/health/balance_sheet?pretty'
{
"cluster_name" : "ngq-elk-6.2.4",
"status" : "red",
"timed_out" : true,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0

and 86% of the disk space is still available.

What is the cause?
How can I resolve the issue? Please help me, ASAP.

Could you share the Elasticsearch logs? Formatted, please.

[myhostname] All shards failed for phase: [query]

[2018-06-26T06:19:42,935][WARN ][o.e.g.DanglingIndicesState] [hc4t02634] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata

Could you share all the Elasticsearch logs? Formatted, please.

Please format your code, logs, or configuration files using the </> icon as explained in this guide, and not the citation button. It will make your post more readable.

Or use markdown style like:

```
CODE
```

This is the icon to use if you are not using markdown format: the </> button mentioned above.

There's a live preview panel for exactly this reason.

[2018-06-26T06:19:42,935][WARN ][o.e.g.DanglingIndicesState] [myshost] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
   
 [2018-06-26T06:36:34,468][DEBUG][o.e.a.s.TransportSearchAction] [myhost] All shards failed for phase: [query]

[2018-06-26T06:46:18,982][DEBUG][o.e.a.s.TransportSearchAction] [myhost] All shards failed for phase: [query]

These are not all the logs, I'm afraid, just part of them.

Maybe upload the whole log file to gist.github.com or to bintray?

[2018-06-26T03:53:14,851][DEBUG][o.e.a.s.TransportSearchAction] [my-host] All shards failed for phase: [query]
[2018-06-26T06:18:08,982][DEBUG][o.e.a.s.TransportSearchAction] [my-host] All shards failed for phase: [query]
[2018-06-26T06:18:57,272][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg1/kk65VvgQT5mWuq95J39x1Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:18:57,272][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg3/DZK6oxYvR9eOfOQOyZJiVQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:18:57,272][WARN ][o.e.g.DanglingIndicesState] [my-host] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:00,577][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg1/kk65VvgQT5mWuq95J39x1Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:00,578][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg3/DZK6oxYvR9eOfOQOyZJiVQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:00,578][WARN ][o.e.g.DanglingIndicesState] [my-host] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:21,582][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg1/kk65VvgQT5mWuq95J39x1Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:21,582][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg3/DZK6oxYvR9eOfOQOyZJiVQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:21,583][WARN ][o.e.g.DanglingIndicesState] [my-host] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:23,428][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg1/kk65VvgQT5mWuq95J39x1Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:23,428][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg3/DZK6oxYvR9eOfOQOyZJiVQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:23,428][WARN ][o.e.g.DanglingIndicesState] [my-host] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:37,276][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg1/kk65VvgQT5mWuq95J39x1Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:37,277][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg3/DZK6oxYvR9eOfOQOyZJiVQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:37,277][WARN ][o.e.g.DanglingIndicesState] [my-host] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:42,935][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg1/kk65VvgQT5mWuq95J39x1Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:42,935][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg3/DZK6oxYvR9eOfOQOyZJiVQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:42,935][WARN ][o.e.g.DanglingIndicesState] [my-host] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:36:34,468][DEBUG][o.e.a.s.TransportSearchAction] [my-host] All shards failed for phase: [query]
[2018-06-26T06:46:18,982][DEBUG][o.e.a.s.TransportSearchAction] [my-host] All shards failed for phase: [query]

Do you have logs from before 03:53:14?

The full log is pasted here.

Those are the logs after you restarted at 2018-06-25T06:53:38,270.

[2018-06-25T06:53:38,270][INFO ][o.e.n.Node               ] [my-host] initializing ...

I was asking more for the logs that were written when the problem started, i.e. before 2018-06-26T03:53:14,851, before this line:

[2018-06-26T03:53:14,851][DEBUG][o.e.a.s.TransportSearchAction] [my-host] All shards failed for phase: [query]

Do you have that?

As you wrote:

My entire ELK stack was working fine until 1:35 AM IST on 06/26/2018.
Suddenly I can't see any logs in Kibana from June 26, 2018, 1:40 AM IST onwards.

I'd like to see the logs of what happened between 1:35 AM IST and 1:40 AM IST. That could help.

Can you also run:

GET _cat/health?v
GET _cat/indices?v
GET _cat/nodes?v
GET /_cat/pending_tasks?v

And paste every output (formatted please) here.
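If you prefer the command line over the Kibana Dev Console, the equivalent curl calls would be something like this (assuming Elasticsearch is listening on localhost:9200, as in your earlier health check):

curl -XGET 'localhost:9200/_cat/health?v'
curl -XGET 'localhost:9200/_cat/indices?v'
curl -XGET 'localhost:9200/_cat/nodes?v'
curl -XGET 'localhost:9200/_cat/pending_tasks?v'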

Please find the gist for
GET _cat/health?v
GET _cat/indices?v
GET _cat/nodes?v
GET /_cat/pending_tasks?v
@ "https://gist.github.com/vinayvanga1/c14bbf97729fcd956bff2b0a02591763"
under https://gist.github.com/vinayvanga1/c14bbf97729fcd956bff2b0a02591763#file-helath-checks

I didn't find any logs between 2018-06-25T06:53:38,270 and 2018-06-26T03:53:14,851.

Is this the issue that is raising all of these errors?
[my-host] All shards failed for phase: [query]

So many problems here.

Number of shards

You have one node with around 2,800 shards. That is way too many, especially with a heap size of only 989.8mb.

May I suggest you look at the following resources about sizing:

https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing

Indices %{[@metadata][beat]}-

One of the other problems is the naming of your indices. It's not critical, but it clearly shows some misconfiguration of your Beats component. For example:

%{[@metadata][beat]}-2017.08.29

Not sure what you are collecting.
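For context, index names like %{[@metadata][beat]}-2017.08.29 usually mean the Logstash elasticsearch output refers to a field that is missing from the events, so the sprintf reference is written out literally. A minimal sketch of the kind of configuration that produces this (not necessarily your exact config):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # If an event has no [@metadata][beat] field (e.g. it did not come from a Beat),
    # the reference is not resolved and the literal text becomes the index name.
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}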

What you can do in the short term, as I believe you are not really using those metrics, is:

DELETE %{[@metadata][beat]}-*

Note: it will remove all the data contained in indices whose names start with the %{[@metadata][beat]}- prefix. If you don't want to remove all of those indices, or want to keep the latest ones, be more restrictive in the name pattern.

Indices logstash-*

You have some old indices named logstash-2017.*. They look like a test or something; I wonder if you are doing tests in production...
As those indices are quite old, I think you can probably remove them:

DELETE logstash-*

Other indices

You have some other indices. Not sure if they are useful or not...

health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
red    open   jmeter_index                    jtdVlLeJS1yGSo_Q17xiew   5   1                                                  
red    open   index                           VXIO46xETPSEFy74RSC0Xw   5   1                                                  
yellow open   ngqc-history-index              yJ9-jxNqTumRxn7RFoj19Q   5   1      78479            0     65.4mb         65.4mb
red    open   ngqc-itg-index                  z7T5TjVKS4mSCtD0WWdsgw   5   1                                                  
red    open   my_index                        oDTbekUsS4a6z7kCK4kbQA   5   1                                                  
red    open   my_index_itg1                   wj1ensRuSOmh8JG5qd3uRQ   5   1                                                  
red    open   ngqc-dev-index                  AXfGalkSSWqWSBV3kfDryA   5   1                                                  
red    open   my_index_6_2_4                  eOFRJq01TKWPuiYesPzKMQ   5   1      13799            0     36.3mb         36.3mb
red    open   index_itg1                      M_scwKgKRp6CEj-W8sbh-g   5   1                                                  
green  open   .kibana                         X4g_CEntRruH4a9Q0QTBBA   1   0         11            3     54.4kb         54.4kb
yellow open   index_itg2                      q-qoLyDkQbeqpdrqDGFG4Q   5   1       3030            0      5.7mb          5.7mb
red    open   index_itg3                      DuTW7PlsRMuledCCB1nuRQ   5   1                                                  

Some index names look weird to me. Again, are you doing tests in production?
I'd probably remove the following ones:

red    open   jmeter_index                    jtdVlLeJS1yGSo_Q17xiew   5   1                                                  
red    open   index                           VXIO46xETPSEFy74RSC0Xw   5   1                                                  
red    open   my_index                        oDTbekUsS4a6z7kCK4kbQA   5   1                                                  
red    open   my_index_itg1                   wj1ensRuSOmh8JG5qd3uRQ   5   1                                                  
red    open   ngqc-dev-index                  AXfGalkSSWqWSBV3kfDryA   5   1                                                  
red    open   my_index_6_2_4                  eOFRJq01TKWPuiYesPzKMQ   5   1      13799            0     36.3mb         36.3mb

Then what remains is:

health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   ngqc-history-index              yJ9-jxNqTumRxn7RFoj19Q   5   1      78479            0     65.4mb         65.4mb
red    open   ngqc-itg-index                  z7T5TjVKS4mSCtD0WWdsgw   5   1                                                  
red    open   index_itg1                      M_scwKgKRp6CEj-W8sbh-g   5   1                                                  
green  open   .kibana                         X4g_CEntRruH4a9Q0QTBBA   1   0         11            3     54.4kb         54.4kb
yellow open   index_itg2                      q-qoLyDkQbeqpdrqDGFG4Q   5   1       3030            0      5.7mb          5.7mb
red    open   index_itg3                      DuTW7PlsRMuledCCB1nuRQ   5   1                                                  

In case you don't need the index_itg* indices either, remove them as well.
If you still need them, do all the other removals I suggested and wait to see whether they can come back or not. I doubt they will, though.

If you can remove them, you will end up with:

health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   ngqc-history-index              yJ9-jxNqTumRxn7RFoj19Q   5   1      78479            0     65.4mb         65.4mb
red    open   ngqc-itg-index                  z7T5TjVKS4mSCtD0WWdsgw   5   1                                                  
green  open   .kibana                         X4g_CEntRruH4a9Q0QTBBA   1   0         11            3     54.4kb         54.4kb

The "only" remaining problem would be then ngqc-itg-index index. If it can not come back automatically, you will probably have to remove it and reindex from your source.

Default number of shards

If you follow the video/links I provided, you will see that you probably don't need 5 shards / 1 replica. So change the default value to 1 shard instead of 5.

One node only

As you have only one node, don't set number of replicas to 1 as they will never be allocated.
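One way to apply both of these suggestions to newly created indices on 6.x is an index template, for example (a sketch; the template name and pattern here are placeholders, so adjust them to your real index naming):

PUT _template/one_shard_no_replica
{
  "index_patterns": ["*"],
  "order": 0,
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}

For indices that already exist, the replica count can still be lowered afterwards with PUT /<index>/_settings and "number_of_replicas": 0; the shard count, however, is fixed at index creation time.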

Where can I go to delete these %{[@metadata][beat]}- and logstash-* indices?
All I can find under the Elasticsearch indices directory is:

drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 0a2_wvbiQxemqOzUZtjk-w
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 0g63Iud_QHKRB0OpDSP0ig
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 0nu6KC5vSK6TsN6hVBpMBA
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 0PzdipZKTjOYD9rwWVo5tQ
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 0-sIDITEQXqqoBPRdsiljQ
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 1bWyulvxRXabr9MddiH2vw
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 1huxhFQKT9GLkFuC8S49cA
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 1lMB5MUGSsqx72ZAmAhz2w
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 1pTEk5Q3RwKu6ij1dZjmDQ
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 1U-JvhW5QZWQp_nmNzd_4w
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 2AJhc1PFRjad1IGsYYOqdA
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 2fBy4jI_R6yZNO4mRdCJtQ
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 2LwQuOilThmHlb8oultLXA
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 2lxQVkSKRS2ET18NGRIPnw
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 2OtFRkvSS4S2TpZxIylbJg
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 2oTkvuuoTjul4YR_Jg5ULA
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 2Sz-evvxSEOm1K-mI5f0PA
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 2--w0h5ZTACUHiUiJ2fNvg
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 39Ihav4rS8-PIH8s16RMSw
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 3OXT07dUTkCWepQTIkxCeQ
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 3xpBm10XRTW6fe0aao2O8g
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 4CuJhGB2SLGdfZlokO0d7g
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 _4G3dRiVSrCBYPCnFO8_LA
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 4UQhO8ZTQ2KvzxJsLup8Kw
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 4wB8OXUiTlOzMcsbPOyT6A
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 5eGe2zqWSw2rpiyPLcFWKQ
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 5ntjnaF_TmKoAfcrXhtlHg
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 5pcAREQ8QyyQToWTIKPRng
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 _5yt14gCSeGLMjjmTRe-Zw
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 65qs7k3JRVuYzat9WGVOxw
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 _6eeMKvzS524EQMETSxPgA
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 6GOd65hxRgqTi_ZJl8Dt1Q
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 6mFRjh-XRC6D1QtNRiPK-A
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 6mhU_rk4RAiR3MgiMHZblw
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 6P4IyneVSFG5nTT5rehhFg
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 6PWw64J9QLqfaUE4_iMveQ
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 7a86dl13RxWlrzLYigccgQ
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 7P_h_AFeQ32MZhilF77BuA
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 7TVc7gy3TUeSWq-iifFZUg
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 8dvQIrP2SyqOmXMpCvI7yA
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 8QA3ZxmkRuim95gF1nJJvQ
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 8QRVtc-gQdy9H5CfRhMwIg
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 8ulvNPQDTdqAw1C71n6IPw
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 -a1rr3ADQ8usX40DQLoVOA
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 a3gllj99Siu_Fr2HRyiLRA
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 abvCBWIuRUG3tevtQUqhIg
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 AEMxRN-kQlKYzufGXCWeyg
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 aiLvv3_9RJaPdd75THkRFg
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 AqxENZOlSqehE0quqFKtXg
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 ARCoWhzdTJ2FhYc1OGr2Pg
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 asp3MeeJTfCz-tC-APeYXQ
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 atkdhgumQiubPwzBEgLWuw
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 AuBCozriQMOsdZNdYFOEjg
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 AXfGalkSSWqWSBV3kfDryA
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 az9c9pZCTL2hh4jkWcrf2w
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 AZXTae4TQseup9aS7NGNYw
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 B0EdiS6bS9-fbT-Eb3OHQw
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 B4RZJZMZTMSBJUK24jrSYA
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 bdF1o_P9S-ijHPidbkJU-g
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 BE1YXu2rQeS2ufq4QglQzQ
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 be_u-BiDTJCMmLPiuVXXaw
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 bI5BsOo6Qx--2DN5inJUlQ
drwxr-xr-x 3 ngq ngqgrp 4096 Jun 27 06:33 BkbnU3ZaRW6gYlJ3M2whaw

Please help me, David!

Use the Delete Index API, which you can easily call from the Kibana Dev Console by just copying and pasting the commands I shared.

curl -X DELETE "localhost:9200/{[@metadata][beat]}-2017.09*"
curl: (3) [globbing] nested braces not supported at pos 17

[ngq@xxx~]$ curl -X DELETE "localhost:9200/[@metadata][beat]-2017.09*"
curl: (3) [globbing] illegal character in range specification at pos 17

These indexes are hard to delete; I'm not able to delete them.
Can you please suggest something here?

You may need to URL-encode the special characters like { and [, or try escaping them.

I tried with { to delete those indexes, but no use.
Please advise.

Please share the exact commands you tried.
Did you try to encode the special characters?

Like " " is actually %20.


Hi David,
I encoded the URL and was able to delete those indices.
You saved me, sir.
Thank you.

Now the indices look like this:

health status index              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   my_index_6.2       sc9vywdMRVupyjvjspZmZQ   5   1      55813            0      133mb          133mb
yellow open   my_index           LP9TzaErQHGNetl2RCVeWg   5   1      31733            0     68.1mb         68.1mb
red    open   my_index_itg1      wj1ensRuSOmh8JG5qd3uRQ   5   1                                                  
yellow open   index_itg3_6.2     5LTcOrrHRH6Jlg3QEZyJ8Q   5   1       2671            0      6.5mb          6.5mb
yellow open   index_itg3         DZK6oxYvR9eOfOQOyZJiVQ   5   1        786            0        2mb            2mb
green  open   .kibana            X4g_CEntRruH4a9Q0QTBBA   1   0         15            5     74.3kb         74.3kb
yellow open   ngqc_index_6.2     KwBtbLFnRq-WnNGnS9qDCg   5   1      13106            0     11.8mb         11.8mb
yellow open   ngqc-history-index yJ9-jxNqTumRxn7RFoj19Q   5   1      78827            0     65.9mb         65.9mb
yellow open   index_itg1_6.2     OPafUSYWQJ6eUMM9_qWUnw   5   1         31            0    406.8kb        406.8kb
red    open   my_index_6_2_4     eOFRJq01TKWPuiYesPzKMQ   5   1      18889            0     47.4mb         47.4mb
yellow open   index_itg2_6.2     CCQXnMLMSQCn47mC5fGGlw   5   1        527            0      1.3mb          1.3mb

So you still have problems with:

red    open   my_index_itg1      wj1ensRuSOmh8JG5qd3uRQ   5   1                                                  
red    open   my_index_6_2_4     eOFRJq01TKWPuiYesPzKMQ   5   1      18889            0     47.4mb         47.4mb

If they are not useful (based on their names I guess that's the case), then remove them.
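If you do decide to drop them, the calls are just the following (make sure first that you really don't need the data):

DELETE my_index_itg1
DELETE my_index_6_2_4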
