ELK Elasticsearch issues

I recently moved from version 6.5.1 to 6.5.2 and then back to 6.5.1, and it seems the index got corrupted: I cannot get my previous dashboards back. I can see the error below from the Elasticsearch container, after which the Kibana container crashes.

[2018-12-12T07:48:05,964][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [lang-painless]
[2018-12-12T07:48:05,964][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [mapper-extras]
[2018-12-12T07:48:05,964][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [parent-join]
[2018-12-12T07:48:05,964][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [percolator]
[2018-12-12T07:48:05,964][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [rank-eval]
[2018-12-12T07:48:05,964][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [reindex]
[2018-12-12T07:48:05,965][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [repository-url]
[2018-12-12T07:48:05,965][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [transport-netty4]
[2018-12-12T07:48:05,965][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [tribe]
[2018-12-12T07:48:05,965][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [x-pack-ccr]
[2018-12-12T07:48:05,965][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [x-pack-core]
[2018-12-12T07:48:05,965][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [x-pack-deprecation]
[2018-12-12T07:48:05,966][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [x-pack-graph]
[2018-12-12T07:48:05,966][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [x-pack-logstash]
[2018-12-12T07:48:05,966][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [x-pack-ml]
[2018-12-12T07:48:05,966][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [x-pack-monitoring]
[2018-12-12T07:48:05,966][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [x-pack-rollup]
[2018-12-12T07:48:05,966][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [x-pack-security]
[2018-12-12T07:48:05,967][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [x-pack-sql]
[2018-12-12T07:48:05,967][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [x-pack-upgrade]
[2018-12-12T07:48:05,967][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded module [x-pack-watcher]
[2018-12-12T07:48:05,967][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded plugin [ingest-geoip]
[2018-12-12T07:48:05,967][INFO ][o.e.p.PluginsService ] [OdXwL6G] loaded plugin [ingest-user-agent]
[2018-12-12T07:48:20,516][INFO ][o.e.x.s.a.s.FileRolesStore] [OdXwL6G] parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]
[2018-12-12T07:48:21,833][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [OdXwL6G] [controller/72] [Main.cc@109] controller (64 bit): Version 6.5.1 (Build 1c5fe241dd9aea) Copyright (c) 2018 Elasticsearch BV
[2018-12-12T07:48:28,500][INFO ][o.e.d.DiscoveryModule ] [OdXwL6G] using discovery type [single-node] and host providers [settings]
[2018-12-12T07:48:30,725][INFO ][o.e.n.Node ] [OdXwL6G] initialized
[2018-12-12T07:48:30,727][INFO ][o.e.n.Node ] [OdXwL6G] starting ...
[2018-12-12T07:48:31,027][INFO ][o.e.t.TransportService ] [OdXwL6G] publish_address {192.168.176.2:9300}, bound_addresses {0.0.0.0:9300}
[2018-12-12T07:48:32,166][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [OdXwL6G] publish_address {192.168.176.2:9200}, bound_addresses {0.0.0.0:9200}
[2018-12-12T07:48:32,167][INFO ][o.e.n.Node ] [OdXwL6G] started
[2018-12-12T07:48:34,009][WARN ][r.suppressed ] [OdXwL6G] path: /.reporting-/esqueue/_search, params: {index=.reporting-, type=esqueue, version=true}
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:166) ~[elasticsearch-6.5.1.jar:6.5.1]
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(ClusterBlocks.java:152) ~[elasticsearch-6.5.1.jar:6.5.1]
at

If I point it at a new indices folder, everything works fine. Please help or suggest how to resolve the issue.

Once you have upgraded, you typically cannot go back without restoring a snapshot taken before the upgrade. I would therefore recommend upgrading to 6.5.3.

I hit the same issue even after upgrading to 6.5.3: I can see the same errors, and as a result the Kibana container goes down after the error appears.

Kibana container error:
{"type":"error","@timestamp":"2018-12-17T06:03:03Z","tags":["fatal","root"],"pid":1,"level":"fatal","error":{"message":"all shards failed: [search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {"path":"/.kibana/doc/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":,\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":},\"status\":503}"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:165:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4949:19)\n at emitNone (events.js:111:20)\n at IncomingMessage.emit (events.js:208:7)\n at endReadableNT (_stream_readable.js:1064:12)\n at _combinedTickCallback (internal/process/next_tick.js:139:11)\n at process._tickCallback (internal/process/next_tick.js:181:9)"},"message":"all shards failed: [search_phase_execution_exception] all shards failed"}
Unhandled rejection Error: No Living connections
at sendReqWithConnection (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:226:15)
at Object.utils.applyArgs (/usr/share/kibana/node_modules/elasticsearch/src/lib/utils.js:185:19)
at wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:5213:19)
at _combinedTickCallback (internal/process/next_tick.js:132:7)
at process._tickCallback (internal/process/next_tick.js:181:9)
Unhandled rejection Error: No Living connections
at sendReqWithConnection (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:226:15)
at Object.utils.applyArgs (/usr/share/kibana/node_modules/elasticsearch/src/lib/utils.js:185:19)
at wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:5213:19)
at _combinedTickCallback (internal/process/next_tick.js:132:7)
at process._tickCallback (internal/process/next_tick.js:181:9)

Elasticsearch container error:

[2018-12-17T06:09:50,258][INFO ][o.e.n.Node ] [OdXwL6G] started
[2018-12-17T06:10:00,022][WARN ][r.suppressed ] [OdXwL6G] path: /.reporting-/esqueue/_search, params: {index=.reporting-, type=esqueue, version=true}
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:166) ~[elasticsearch-6.5.3.jar:6.5.3]
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(ClusterBlocks.java:152) ~[elasticsearch-6.5.3.jar:6.5.3]
at org.elasticsearch.action.search.TransportSearchAction.executeSearch(TransportSearchAction.java:297) ~[elasticsearch-6.5.3.jar:6.5.3]
at org.elasticsearch.action.search.TransportSearchAction.lambda$doExecute$4(TransportSearchAction.java:193) ~[elasticsearch-6.5.3.jar:6.5.3]
at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) [elasticsearch-6.5.3.jar:6.5.3]
at org.elasticsearch.index.query.Rewriteable.rewriteAndFetch(Rewriteable.java:114) [elasticsearch-6.5.3.jar:6.5.3]
at org.elasticsearch.index.query.Rewriteable.rewriteAndFetch(Rewriteable.java:87) [elasticsearch-6.5.3.jar:6.5.3]
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:215) [elasticsearch-6.5.3.jar:6.5.3]

It seems I've figured out the issue: index recovery in the Elasticsearch container takes time, so the Kibana container waits and dies in the meantime, whereas if I run the Kibana container again afterwards, everything runs smoothly. Can I know which environment variables to set via Docker to increase Kibana's timeout? I tried the variables below, which I got from elastic.co posts on the internet.

    # elasticsearch.shardTimeout: "60000"
    # ELASTICSEARCH_SHARDTIMEOUT: "60000"
    # elasticsearch.requestTimeout: "60000"
    # ELASTICSEARCH_REQUESTTIMEOUT: "60000"
    # elasticsearch.tribe.requestTimeout: "60000"
    # ELASTICSEARCH_PINGTIMEOUT: "60000"

Nothing seems to help.
Note: I also do not want any extra health check on the Docker side. Please help with how I can implement this using environment variables to increase the timeout; a sketch of how I set them is below.
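
For reference, this is roughly how I passed them in the compose file (a sketch: the service name, image tag, and URL are examples from my setup; the official Kibana Docker image is documented to translate upper-case, underscore-separated environment variables into the corresponding kibana.yml settings):

    version: "2.1"
    services:
      kibana:
        image: docker.elastic.co/kibana/kibana:6.5.3   # example tag
        ports:
          - "5601:5601"
        environment:
          # Maps to elasticsearch.url in kibana.yml
          ELASTICSEARCH_URL: "http://elasticsearch:9200"
          # Maps to elasticsearch.requestTimeout (ms to wait for ES responses)
          ELASTICSEARCH_REQUESTTIMEOUT: "60000"
          # Maps to elasticsearch.shardTimeout (ms ES waits for shard responses)
          ELASTICSEARCH_SHARDTIMEOUT: "60000"
          # Maps to elasticsearch.pingTimeout (ms to wait for ES pings)
          ELASTICSEARCH_PINGTIMEOUT: "60000"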

Why do you need to increase these timeouts? Are you overloading the cluster? What is the specification of the hardware Elasticsearch is running on? What kind of storage are you using?

2 vCPUs, 4 GB RAM, an Ubuntu instance. We are using it for a demo project. Some team members recently added extra indices, which is why the timeout needs to be increased for Kibana. All the containers are on the same machine for now.

Anyway, I found an alternative way: I configured a health check for Elasticsearch and started the Kibana service only once that health check passes. It seems to be working fine; a sketch of the setup is below.
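
A minimal sketch of that setup, assuming Compose file format 2.1, which supports depends_on with condition: service_healthy (image tags and the health-check command are examples, not my exact files):

    version: "2.1"
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:6.5.3
        healthcheck:
          # Treat Elasticsearch as healthy once the cluster health API responds
          test: ["CMD-SHELL", "curl -sf http://localhost:9200/_cluster/health || exit 1"]
          interval: 10s
          timeout: 5s
          retries: 30
      kibana:
        image: docker.elastic.co/kibana/kibana:6.5.3
        environment:
          ELASTICSEARCH_URL: "http://elasticsearch:9200"
        depends_on:
          # Do not start Kibana until the Elasticsearch health check passes
          elasticsearch:
            condition: service_healthy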
