Kibana won't start after an accidental server restart

Hello,
I use elk-docker to run Elasticsearch and Kibana for development at my office. It worked well, but after the server was accidentally restarted yesterday, Kibana won't load anymore.

Even so, my Elasticsearch works fine and can be accessed on localhost:9200:
[screenshot: Elasticsearch responding on localhost:9200]

And here is my index status:

Can anyone help me?

Is this problem happening because the .kibana_1 index status is red?
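For reference, this is roughly how I check the cluster and index status. It's a minimal sketch, assuming Elasticsearch is reachable on localhost:9200 with security disabled (the default in my elk-docker setup); it just calls the standard _cluster/health and _cat/indices APIs:

```python
import json
from urllib.request import urlopen

ES = "http://localhost:9200"

# Overall cluster health (green / yellow / red)
with urlopen(f"{ES}/_cluster/health") as resp:
    print(json.dumps(json.load(resp), indent=2))

# Per-index health for the .kibana* system indices
with urlopen(f"{ES}/_cat/indices/.kibana*?v&h=health,status,index,pri,rep") as resp:
    print(resp.read().decode())
```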

Probably. What do your Kibana logs show?

{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:elasticsearch@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["license","info","xpack"],"pid":303,"message":"Imported license information from Elasticsearch for the [data] cluster: mode: basic | status: active"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:xpack_main@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:graph@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:searchprofiler@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:ml@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:tilemap@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:watcher@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:grokdebugger@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:logstash@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:beats_management@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:index_management@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:index_lifecycle_management@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:rollup@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:remote_clusters@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:cross_cluster_replication@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:file_upload@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:snapshot_restore@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["info","monitoring","kibana-monitoring"],"pid":303,"message":"Starting monitoring stats collection"}
{"type":"log","@timestamp":"2020-07-09T04:23:00Z","tags":["status","plugin:maps@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:01Z","tags":["reporting","warning"],"pid":303,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
{"type":"log","@timestamp":"2020-07-09T04:23:01Z","tags":["status","plugin:reporting@7.4.0","info"],"pid":303,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-07-09T04:23:01Z","tags":["reporting","warning"],"pid":303,"message":"The Reporting plugin encountered issues launching Chromium in a self-test. You may have trouble generating reports: [Error: Failed to launch chrome!\n[0709/042301.594487:WARNING:resource_bundle.cc(358)] locale_file_path.empty() for locale \n[0709/042301.595975:FATAL:zygote_host_impl_linux.cc(116)] No usable sandbox! Update your kernel or see https://chromium.googlesource.com/chromium/src/+/master/docs/linux_suid_sandbox_development.md for more information on developing with the SUID sandbox. If you want to live dangerously and need an immediate workaround, you can try using --no-sandbox.\n#0 0x561b63a69bf9 \n#1 0x561b639d6783 \n#2 0x561b639e9e23 \n#3 0x561b64ea82ee \n#4 0x561b639960f8 \n#5 0x561b64eae98c \n#6 0x561b6398fad1 \n#7 0x561b639d2c3d \n#8 0x561b639d29bb \n#9 0x7f84fcb70b97 __libc_start_main\n#10 0x561b61ed902a _start\n\nReceived signal 6\n#0 0x561b63a69bf9 \n#1 0x561b639d6783 \n#2 0x561b63a69781 \n#3 0x7f84fe2f9890 \n#4 0x7f84fcb8de97 gsignal\n#5 0x7f84fcb8f801 abort\n#6 0x561b63a685c5 \n#7 0x561b639ea107 \n#8 0x561b64ea82ee \n#9 0x561b639960f8 \n#10 0x561b64eae98c \n#11 0x561b6398fad1 \n#12 0x561b639d2c3d \n#13 0x561b639d29bb \n#14 0x7f84fcb70b97 __libc_start_main\n#15 0x561b61ed902a _start\n r8: 0000000000000000 r9: 00007ffcc0bd4ec0 r10: 0000000000000008 r11: 0000000000000246\n r12: 00007ffcc0bd6188 r13: 000027a2c7718b80 r14: 00007ffcc0bd6190 r15: 00007ffcc0bd6198\n di: 0000000000000002 si: 00007ffcc0bd4ec0 bp: 00007ffcc0bd5110 bx: 00007ffcc0bd5168\n dx: 0000000000000000 ax: 0000000000000000 cx: ffffffffffffffff sp: 00007ffcc0bd4ec0\n ip: 00007f84fcb8de97 efl: 0000000000000246 cgf: 0000000000000033 erf: 0000000000000000\n trp: 0000000000000000 msk: 0000000000000000 cr2: 0000000000000000\n[end of stack trace]\nCalling _exit(1). Core file will not be generated.\n\n\nTROUBLESHOOTING: https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md\n]"}
{"type":"log","@timestamp":"2020-07-09T04:23:01Z","tags":["reporting","warning"],"pid":303,"message":"See Chromium's log output at "/opt/kibana/data/headless_shell-linux/chrome_debug.log""}
{"type":"log","@timestamp":"2020-07-09T04:23:01Z","tags":["reporting","warning"],"pid":303,"message":"Reporting plugin self-check failed. Please check the Kibana Reporting settings. Error: Could not close browser client handle!"}
{"type":"log","@timestamp":"2020-07-09T04:23:11Z","tags":["status","plugin:spaces@7.4.0","error"],"pid":303,"state":"red","message":"Status changed from yellow to red - all shards failed: [search_phase_execution_exception] all shards failed","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-07-09T04:23:11Z","tags":["fatal","root"],"pid":303,"message":"{ [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"graph-workspace\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.graph-workspace\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"space\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.space\":\"6.6.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"map\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.map\":\"7.4.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"canvas-workpad\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.canvas-workpad\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"task\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.task\":\"7.4.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"visualization\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.visualization\":\"7.3.1\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"dashboard\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.dashboard\":\"7.3.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"search\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.search\":\"7.4.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":,\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":},\"status\":503}"}\n at respond (/opt/kibana/node_modules/elasticsearch/src/lib/transport.js:349:15)\n at checkRespForFailure (/opt/kibana/node_modules/elasticsearch/src/lib/transport.js:306:7)\n at HttpConnector. (/opt/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)\n at IncomingMessage.wrapper (/opt/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4929:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n status: 503,\n displayName: 'ServiceUnavailable',\n message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n path: '/.kibana/_count',\n query: {},\n body:\n { error:\n { root_cause: ,\n type: 'search_phase_execution_exception',\n reason: 'all shards failed',\n phase: 'query',\n grouped: true,\n failed_shards: },\n status: 503 },\n statusCode: 503,\n response:\n '{"error":{"root_cause":,"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":},"status":503}',\n toString: [Function],\n toJSON: [Function],\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n statusCode: 503,\n error: 'Service Unavailable' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }"}

Is it right for the logs to look like this, sir?

That is the cause. You would need to check your Elasticsearch logs to see why it's red.
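Besides the Elasticsearch logs, you can also ask the cluster directly why the .kibana_1 shard is unassigned. Here is a minimal sketch, assuming Elasticsearch is on localhost:9200 without authentication and that .kibana_1 is the red index from your screenshot; it calls the standard _cluster/allocation/explain API:

```python
import json
from urllib.request import Request, urlopen

ES = "http://localhost:9200"

# Ask Elasticsearch to explain why the .kibana_1 primary shard is not allocated
body = json.dumps({"index": ".kibana_1", "shard": 0, "primary": True}).encode()
req = Request(f"{ES}/_cluster/allocation/explain",
              data=body,
              headers={"Content-Type": "application/json"})
with urlopen(req) as resp:
    print(json.dumps(json.load(resp), indent=2))
```

The `unassigned_info` and `allocate_explanation` fields in the response should say why the shard cannot be assigned, which usually matches what you'll find in the Elasticsearch logs.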

Please format your code/logs/config using the </> button, or markdown-style backticks. It makes things easier to read, which helps us help you :slight_smile:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.