ELK version 6.2.4
Filebeat version 6.2.4
My entire ELK stack was working fine until 1:35 AM IST on 06/26/2018.
Suddenly I can't see any logs in Kibana from June 26, 2018, 1:40 AM IST onwards.
When I open Kibana it shows an error: Discover: Request to Elasticsearch failed: {"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[]},"status":503}
Elasticsearch logs:
[2018-06-26T03:53:14,851][DEBUG][o.e.a.s.TransportSearchAction] [my-host] All shards failed for phase: [query]
Logstash logs:
[2018-06-26T04:22:08,564][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[index_itg1][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[index_itg1][0]] containing [2] requests]"})
[2018-06-26T04:22:08,564][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>3}
[2018-06-26T06:19:42,935][WARN ][o.e.g.DanglingIndicesState] [hc4t02634] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
Could you share all the Elasticsearch logs? Formatted, please.
Please format your code, logs, or configuration files using the </> icon as explained in this guide, and not the citation button. It will make your post more readable.
Or use markdown style like:
```
CODE
```
The </> icon is the one to use if you are not using markdown format.
[2018-06-26T06:19:42,935][WARN ][o.e.g.DanglingIndicesState] [myshost] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:36:34,468][DEBUG][o.e.a.s.TransportSearchAction] [myhost] All shards failed for phase: [query]
[2018-06-26T06:46:18,982][DEBUG][o.e.a.s.TransportSearchAction] [myhost] All shards failed for phase: [query]
[2018-06-26T03:53:14,851][DEBUG][o.e.a.s.TransportSearchAction] [my-host] All shards failed for phase: [query]
[2018-06-26T06:18:08,982][DEBUG][o.e.a.s.TransportSearchAction] [my-host] All shards failed for phase: [query]
[2018-06-26T06:18:57,272][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg1/kk65VvgQT5mWuq95J39x1Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:18:57,272][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg3/DZK6oxYvR9eOfOQOyZJiVQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:18:57,272][WARN ][o.e.g.DanglingIndicesState] [my-host] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:00,577][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg1/kk65VvgQT5mWuq95J39x1Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:00,578][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg3/DZK6oxYvR9eOfOQOyZJiVQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:00,578][WARN ][o.e.g.DanglingIndicesState] [my-host] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:21,582][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg1/kk65VvgQT5mWuq95J39x1Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:21,582][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg3/DZK6oxYvR9eOfOQOyZJiVQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:21,583][WARN ][o.e.g.DanglingIndicesState] [my-host] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:23,428][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg1/kk65VvgQT5mWuq95J39x1Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:23,428][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg3/DZK6oxYvR9eOfOQOyZJiVQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:23,428][WARN ][o.e.g.DanglingIndicesState] [my-host] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:37,276][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg1/kk65VvgQT5mWuq95J39x1Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:37,277][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg3/DZK6oxYvR9eOfOQOyZJiVQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:37,277][WARN ][o.e.g.DanglingIndicesState] [my-host] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:42,935][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg1/kk65VvgQT5mWuq95J39x1Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:42,935][WARN ][o.e.g.DanglingIndicesState] [my-host] [[index_itg3/DZK6oxYvR9eOfOQOyZJiVQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:19:42,935][WARN ][o.e.g.DanglingIndicesState] [my-host] [[my_index/LP9TzaErQHGNetl2RCVeWg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-06-26T06:36:34,468][DEBUG][o.e.a.s.TransportSearchAction] [my-host] All shards failed for phase: [query]
[2018-06-26T06:46:18,982][DEBUG][o.e.a.s.TransportSearchAction] [my-host] All shards failed for phase: [query]
One of the other problems is the naming of your indices. It's not critical, but it clearly shows some misconfiguration of your Beat component, like:
%{[@metadata][beat]}-2017.08.29
Not sure what you are collecting.
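A literal %{[@metadata][beat]} in an index name usually means the Logstash elasticsearch output references [@metadata][beat], but the events it receives don't carry that field, so the reference is left unresolved. As a sketch only (the hosts and date pattern below are illustrative, not your actual config), a typical Beats-to-Elasticsearch output looks like:
```
output {
  elasticsearch {
    # If [@metadata][beat] is missing from the incoming events, this index
    # name is written out literally, e.g. "%{[@metadata][beat]}-2017.08.29".
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
```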
What you can do in the short term, as I believe you are not really using those metrics:
DELETE %{[@metadata][beat]}-*
Note: it will remove all the data contained in indices starting with the %{[@metadata][beat]}- prefix. If you don't want to remove all of those indices, or want to keep the latest ones, then be more restrictive in the name.
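For example, a more restrictive sketch (Dev Tools syntax; the 2017.* pattern is only an illustration): list what matches first, then delete only the older indices:
```
GET _cat/indices/%{[@metadata][beat]}-*?v
DELETE %{[@metadata][beat]}-2017.*
```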
Indices logstash-*
You have some old indices named logstash-2017.*. They look like a test or something. I wonder if you are doing tests in production...
As those indices are super old, I think you can probably remove them:
DELETE logstash-*
Other indices
You have some other indices. Not sure if they are useful or not...
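For reference, a listing like the one below can be reproduced with the cat indices API (the ?v parameter adds the header row):
```
GET _cat/indices?v
```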
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
red open jmeter_index jtdVlLeJS1yGSo_Q17xiew 5 1
red open index VXIO46xETPSEFy74RSC0Xw 5 1
yellow open ngqc-history-index yJ9-jxNqTumRxn7RFoj19Q 5 1 78479 0 65.4mb 65.4mb
red open ngqc-itg-index z7T5TjVKS4mSCtD0WWdsgw 5 1
red open my_index oDTbekUsS4a6z7kCK4kbQA 5 1
red open my_index_itg1 wj1ensRuSOmh8JG5qd3uRQ 5 1
red open ngqc-dev-index AXfGalkSSWqWSBV3kfDryA 5 1
red open my_index_6_2_4 eOFRJq01TKWPuiYesPzKMQ 5 1 13799 0 36.3mb 36.3mb
red open index_itg1 M_scwKgKRp6CEj-W8sbh-g 5 1
green open .kibana X4g_CEntRruH4a9Q0QTBBA 1 0 11 3 54.4kb 54.4kb
yellow open index_itg2 q-qoLyDkQbeqpdrqDGFG4Q 5 1 3030 0 5.7mb 5.7mb
red open index_itg3 DuTW7PlsRMuledCCB1nuRQ 5 1
Some index names are looking weird to me. Again, are you doing tests in production?
I'd probably remove the following ones:
red open jmeter_index jtdVlLeJS1yGSo_Q17xiew 5 1
red open index VXIO46xETPSEFy74RSC0Xw 5 1
red open my_index oDTbekUsS4a6z7kCK4kbQA 5 1
red open my_index_itg1 wj1ensRuSOmh8JG5qd3uRQ 5 1
red open ngqc-dev-index AXfGalkSSWqWSBV3kfDryA 5 1
red open my_index_6_2_4 eOFRJq01TKWPuiYesPzKMQ 5 1 13799 0 36.3mb 36.3mb
Then it remains:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open ngqc-history-index yJ9-jxNqTumRxn7RFoj19Q 5 1 78479 0 65.4mb 65.4mb
red open ngqc-itg-index z7T5TjVKS4mSCtD0WWdsgw 5 1
red open index_itg1 M_scwKgKRp6CEj-W8sbh-g 5 1
green open .kibana X4g_CEntRruH4a9Q0QTBBA 1 0 11 3 54.4kb 54.4kb
yellow open index_itg2 q-qoLyDkQbeqpdrqDGFG4Q 5 1 3030 0 5.7mb 5.7mb
red open index_itg3 DuTW7PlsRMuledCCB1nuRQ 5 1
In case you don't need the index_itg* indices either, remove them as well.
If you still need them, then do all the other removals I suggested and wait to see whether they come back or not. I doubt they will, though.
If you can remove them, you will end up with:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open ngqc-history-index yJ9-jxNqTumRxn7RFoj19Q 5 1 78479 0 65.4mb 65.4mb
red open ngqc-itg-index z7T5TjVKS4mSCtD0WWdsgw 5 1
green open .kibana X4g_CEntRruH4a9Q0QTBBA 1 0 11 3 54.4kb 54.4kb
The "only" remaining problem would then be the ngqc-itg-index index. If it cannot come back automatically, you will probably have to remove it and reindex from your source.
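Before deleting anything, the cluster allocation explain API can help show why a red index's shards are not being assigned. A sketch (the index, shard, and primary values are just an example):
```
GET _cluster/allocation/explain
{
  "index": "ngqc-itg-index",
  "shard": 0,
  "primary": true
}
```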
Default number of shards
If you follow the video/links I provided, you will see that you probably don't need 5 shards / 1 replica. So change the default value to 1 shard instead of 5.
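One way to change that default for newly created indices is an index template. A sketch for Elasticsearch 6.x (the template name and catch-all pattern are illustrative; narrow the pattern to your real index names if you prefer):
```
PUT _template/one_shard_default
{
  "index_patterns": ["*"],
  "settings": {
    "index.number_of_shards": 1
  }
}
```
This only affects indices created after the template is in place; existing indices keep their current shard count.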
One node only
As you have only one node, don't set the number of replicas to 1, as the replicas will never be allocated.
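For indices that already exist, the replica count can be dropped to 0 so a single node can allocate everything. A sketch (this targets all indices; replace _all with specific names if you prefer):
```
PUT _all/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}
```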
curl -X DELETE "localhost:9200/{[@metadata][beat]}-2017.09*"
curl: (3) [globbing] nested braces not supported at pos 17
[ngq@xxx~]$ curl -X DELETE "localhost:9200/[@metadata][beat]-2017.09*"
curl: (3) [globbing] illegal character in range specification at pos 17
These indices are hard to delete; I'm not able to delete them.
Can you please suggest something here?
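Those curl errors come from curl treating { } and [ ] in the URL as glob patterns. One possible workaround (a sketch, assuming Elasticsearch on localhost:9200) is to disable curl's globbing with -g/--globoff and URL-encode the special characters in the index name (% becomes %25, { becomes %7B, [ becomes %5B, and so on):
```
curl -g -X DELETE "localhost:9200/%25%7B%5B%40metadata%5D%5Bbeat%5D%7D-2017.09*"
```
Running the same DELETE from Kibana Dev Tools, as in the earlier examples, may also avoid curl's globbing entirely.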