We are not receiving logs from Elasticsearch in Kibana

Hi

We are not receiving logs from Elasticsearch in Kibana. I have shared the Kibana and Elasticsearch logs below. Can you please help fix this issue?

Service versions:

  1. logstash-6.3.2
  2. kafka_2.12-2.2.0
  3. zookeeper-3.4.14
  4. elasticsearch-6.3.2
  5. kibana-6.3.2
  6. filebeat-6.3.2

Kibana Log:

May 13 14:08:53 ssc-kibana kibana: {"type":"error","@timestamp":"2019-05-13T18:08:53Z","tags":["warning","monitoring-ui","kibana-monitoring"],"pid":31226,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {"path":"/.reporting-*/_search","query":{"filter_path":"hits.total,aggregations.jobTypes.buckets,aggregations.objectTypes.buckets,aggregations.layoutTypes.buckets,aggregations.statusTypes.buckets"},"body":"{\"size\":0,\"aggs\":{\"jobTypes\":{\"terms\":{\"field\":\"jobtype\",\"size\":2}},\"objectTypes\":{\"terms\":{\"field\":\"meta.objectType.keyword\",\"size\":3}},\"layoutTypes\":{\"terms\":{\"field\":\"meta.layout.keyword\",\"size\":3}},\"statusTypes\":{\"terms\":{\"field\":\"status\",\"size\":4}}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":,\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":},\"status\":503}"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:307:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:266:7)\n at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:159:7)\n at IncomingMessage.bound (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/dist/lodash.js:729:21)\n at emitNone (events.js:111:20)\n at IncomingMessage.emit (events.js:208:7)\n at endReadableNT (_stream_readable.js:1064:12)\n at _combinedTickCallback (internal/process/next_tick.js:138:11)\n at process._tickDomainCallback (internal/process/next_tick.js:218:9)"},"message":"[search_phase_execution_exception] all shards failed"}

Elasticsearch Log:

at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:101) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$1(InitialSearchPhase.java:210) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:189) [elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:725) [elasticsearch-6.3.2.jar:6.3.2]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
[2019-05-13T14:39:04,349][WARN ][r.suppressed ] path: /.kibana/doc/config%3A6.3.2, params: {index=.kibana, id=config:6.3.2, type=doc}
org.elasticsearch.action.NoShardAvailableActionException: No shard available for [get [.kibana][doc][config:6.3.2]: routing [null]]
at org.elasticsearch.action.support.TransportAction.doExecute(TransportAction.java:143) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:128) ~[?:?]
at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:165) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:81) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:87) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:76) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:405) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:137) [transport-netty4-6.3.2.jar:6.3.2]

The error is that all the Elasticsearch shard requests failed. What is the result of _cluster/health and _cat/indices?

Hi @tylersmalley

Yes, you are right. How do I check the result of _cluster/health and _cat/indices? Can you please help me solve this issue?

You can make the request to the Elasticsearch API endpoints I mentioned. For example, if Elasticsearch were running locally, the URL would be http://localhost:9200/_cat/indices
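
As a rough sketch (assuming Elasticsearch is listening on localhost:9200; substitute your own host and port), the two requests would look like this from the command line:

curl http://localhost:9200/_cluster/health?pretty
curl http://localhost:9200/_cat/indices?v

The ?pretty and ?v parameters just make the JSON and the column output easier to read.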

Please find the output below; I hope it is helpful.

I ran the following command and got the result below:
curl http://139.99.112.202:9200/_cat/indices?v

health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
red open sscffmpeglogs-2018.08.12 GJUeQ-EhRja4pbYX4z5rAw 5 1
yellow open sscffmpeglogs-2019.01.22 1inlCD6sTxSzmVCAA3qbgg 5 1 4764 0 3.8mb 3.8mb
red open sscweblogs-2018.10.09 Arlpzy28Sv62iC0y-KjSzw 5 1
green open .monitoring-kibana-6-2019.05.09 R4bxx-GNT92di6v772LHJw 1 0 8307 0 1.8mb 1.8mb
yellow open sscweblogs-2019.03.05 oF0EjFarTXmux-w-6Pw0jg 5 1 521336 0 182.2mb 182.2mb
yellow open sscffmpeglogs-2019.05.12 vXmlsbqtT_GFGnzYEOT1Dg 5 1 21646 0 10.3mb 10.3mb
yellow open sscwowzalogs-2019.05.06 xmCdSS8dRB6LEhQPdJxGyw 5 1 843 0 1.1mb 1.1mb
yellow open sscweblogs-2019.03.20 ODFQzibyRbKsqbjRaL-2uQ 5 1 639592 0 220.3mb 220.3mb
red open sscweblogs-2018.07.04 0lD3l07TS9umLab-L3YrmA 5 1
red open sscweblogs-2018.11.02 xuTN05r_QWmovM7jMGpw8w 5 1
yellow open sscwowzalogs-2019.01.28 KeyovaIpSR6-nnlTE-weZA 5 1 349 0 1.1mb 1.1mb
red open sscwowzalogs-2018.06.28 YeEpr4LkQuymkXwjUyOTrg 5 1
red open sscwowzalogs-2018.09.25 jdJxqyhpSNeu9nlN5j5pGA 5 1
yellow open sscweblogs-2019.03.03 j95qY1aTTXKoLyNqmgOA2A 5 1 725336 0 244.3mb 244.3mb
yellow open cradweblogs-2019.05.07 xkiB7JCCR5GN5YmhOLXQ3Q 5 1 1842934 0 566.7mb 566.7mb
yellow open sscffmpeglogs-2019.03.21 pgC4HID4Sw-2m9RFBgDYJw 5 1 11507 0 6.7mb 6.7mb
red open sscwowzalogs-2018.09.22 WjmTFgk7TSq-Ksd-QmnCLQ 5 1
yellow open sscwowzalogs-2019.03.13 qSXfPcXLSeKLqUS5xsWkiQ 5 1 8782 0 5mb 5mb
red open sscweblogs-2018.12.20 9a8580ZcTui8oORoTq7jdw 5 1
yellow open sscwowzalogs-2019.05.03 CcExirpuRva_jkUo7FatlQ 5 1 750 0 1.1mb 1.1mb
yellow open sscwowzalogs-2019.05.11 3LFSayioRk6XbpR8L0KVjg 5 1 787 0 1mb 1mb
red open sscweblogs-2018.12.06 kPYqY-7_Q4CHWbYgYxRwkw 5 1
yellow open sscweblogs-2019.03.13 OyVzyDWVQpGt6fE3uIdZmA 5 1 707400 0 237mb 237mb
yellow open sscffmpeglogs-2019.03.26 5vJ8HxPlSz2f7ZaaUlbLdg 5 1 11110 0 6.7mb 6.7mb
red open sscweblogs-2018.07.16 ab4krfRJTXGcpLBzU2G4Ow 5 1
red open sscweblogs-2018.10.12 wsREbrxiQgqgrRRwiJaVZw 5 1
yellow open sscweblogs-2019.03.17 pfXIhVzOR3GBS_gsPE--sQ 5 1 694735 0 234.6mb 234.6mb
yellow open sscweblogs-2019.02.18 v_0ACfJaQ8S-CG8KtfSmQA 5 1 400459 0 135.8mb 135.8mb
yellow open sscffmpeglogs-2019.05.01 gDH4AMwdRsO0pXtMq29AFw 5 1 11164 0 6.5mb 6.5mb
red open sscwowzalogs-2018.06.27 NQuXcoWmRNWH0xuTUJcOUw 5 1
red open sscweblogs-2018.06.28 aTYrW_ZLQhCu9EgDnhP4cg 5 1
red open sscweblogs-2018.07.28 kUNu4wqTR1On0wx6FC7apg 5
yellow open sscweblogs-2019.04.22 JD4HmNT8QtSvLar368QpOA 5 1 596054 0 202.3mb 202.3mb
red open sscweblogs-2018.10.25 f4kh18wJQYu_ElQuvU_ztw 5 1
yellow open sscffmpeglogs-2019.05.11 4aMyQv4ETGW3ao3FVJVfWg 5 1 9655 0 5.8mb 5.8mb
yellow open sscwowzalogs-2019.02.09 n_bxCWuTSJWXmsV5uiH40g 5 1 9569 0 5.2mb 5.2mb


curl http://139.99.112.202:9200/_cluster/health

{"cluster_name":"elasticsearch","status":"red","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":4046,"active_shards":4046,"relocating_shards":0,"initializing_shards":4,"unassigned_shards":6876,"delayed_unassigned_shards":0,"number_of_pending_tasks":1,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":37.03093538348893}

OK, so your cluster is red due to unassigned_shards, and this needs to be resolved.

More information can be found in this post: https://www.elastic.co/blog/red-elasticsearch-cluster-panic-no-longer
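
As a sketch of where to start (using the same host as your earlier commands, so adjust if needed), the cluster allocation explain API will tell you why the first unassigned shard it finds cannot be allocated:

curl http://139.99.112.202:9200/_cluster/allocation/explain?pretty

The response names the index and shard number and explains the allocation decision, which usually points at the underlying cause (missing node, disk watermarks, and so on).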

You have far, far too many shards for a cluster that size and need to reduce that significantly. Please read this blog post for practical guidelines.
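
To get a sense of how those shards are distributed (same host as in your earlier commands; this is just an illustration), the cat APIs can help:

curl http://139.99.112.202:9200/_cat/allocation?v
curl http://139.99.112.202:9200/_cat/shards?v

_cat/allocation shows how many shards each node holds and its disk usage; _cat/shards lists every shard with its state (STARTED, UNASSIGNED, and so on).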

@Christian_Dahlqvist @tylersmalley

Yes, I can see there are too many shards. How do I reduce them?
Can you please help me solve this issue?

I would recommend reading the blog post Christian provided, as there is a section on managing shard size. You could also add more nodes to the cluster to better handle the number of shards.

I am not sure how best to bring a cluster with that many shards back up. I would probably recommend deleting all indices from 2018 and earlier in order to reduce the number of shards and make the cluster operational. Once it is up, you can start reducing the shard count by switching to monthly indices and reducing the number of primary shards of existing indices by reindexing them into monthly indices.
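
As a rough sketch only (the host and index names are taken from your earlier output; double-check what a pattern matches with _cat/indices before deleting anything, and note that wildcard deletes work unless action.destructive_requires_name has been set to true):

Delete the 2018 daily indices:

curl -XDELETE 'http://139.99.112.202:9200/sscweblogs-2018.*,sscffmpeglogs-2018.*,sscwowzalogs-2018.*'

Create a monthly index with a single primary shard (no replica, since a replica cannot be assigned on a single-node cluster anyway), then reindex the daily indices for that month into it:

curl -XPUT 'http://139.99.112.202:9200/sscweblogs-2019.03' -H 'Content-Type: application/json' -d '
{ "settings": { "number_of_shards": 1, "number_of_replicas": 0 } }'

curl -XPOST 'http://139.99.112.202:9200/_reindex' -H 'Content-Type: application/json' -d '
{
  "source": { "index": "sscweblogs-2019.03.*" },
  "dest":   { "index": "sscweblogs-2019.03" }
}'

If the wildcard is not accepted as a reindex source in your version, list the daily indices explicitly instead. Once the reindex completes and you have verified the document counts match, the old daily indices for that month can be deleted, and the same pattern repeated for the other index families.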
