Kibana not working after Elasticsearch cluster turns red

Two of our three data nodes are down. The cluster has turned red, and Kibana doesn't work. How do I fix it?

Hi,

What do your Kibana logs say? Did you try restarting Kibana?

Also, which node is Kibana connected to? Is it connected to one of the two nodes that are down?
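
If you are not sure, the elasticsearch.url setting in kibana.yml shows which endpoint Kibana talks to, and you can quickly see which nodes are still in the cluster and the overall health. A rough sketch, assuming Elasticsearch is reachable on localhost:9200 without authentication (adjust host/credentials to your setup):

# list the nodes that are still part of the cluster
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,ip,node.role,master'
# overall cluster health (status, unassigned shards, ...)
curl -s 'http://localhost:9200/_cluster/health?pretty'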

Thanks.
Bhavya

I guess it is because the .kibana index is lost.

{"type":"error","@timestamp":"2018-10-28T20:44:27Z","tags":["warning","monitoring-ui","kibana-monitoring"],"pid":1260,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available f
or [get [.kibana][doc][config:6.3.0]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.3.0]: routing [null]] :: {"path":"/.kiba
na/doc/config%3A6.3.0","query":{},"statusCode":503,"response":"{\"error\":{\"root_cause\":[{\"type\":\"no_shard_available_action_exception\",\"reason\":\"No shard available for [get
[.kibana][doc][config:6.3.0]: routing [null]]\"}],\"type\":\"no_shard_available_action_exception\",\"reason\":\"No shard available for [get [.kibana][doc][config:6.3.0]: routing [null]]\"},\"s
tatus\":503}"}\n at respond (/home/pp_risk_rom_batch/kibana-6.3.0-linux-x86_64/node_modules/elasticsearch/src/lib/transport.js:307:15)\n at checkRespForFailure (/home/pp_risk_rom_batch/kibana-6.3.0-linux
-x86_64/node_modules/elasticsearch/src/lib/transport.js:266:7)\n at HttpConnector. (/home/pp_risk_rom_batch/kibana-6.3.0-linux-x86_64/node_modules/elasticsearch/src/lib/connectors/http.js:159:7)\n
at IncomingMessage.bound (/home/pp_risk_rom_batch/kibana-6.3.0-linux-x86_64/node_modules/elasticsearch/node_modules/lodash/dist/lodash.js:729:21)\n at emitNone (events.js:111:20)\n at IncomingMessage.emit
(events.js:208:7)\n at endReadableNT (_stream_readable.js:1064:12)\n at _combinedTickCallback (internal/process/next_tick.js:138:11)\n at process._tickDomainCallback (internal/process/next_tick.js:218:9
)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.3.0]: routing [null]]"}
{"type":"log","@timestamp":"2018-10-28T20:44:27Z","tags":["warning","monitoring-ui","kibana-monitoring"],"pid":1260,"message":"Unable to fetch data from kibana_settings collector"}
{"type":"error","@timestamp":"2018-10-28T20:44:37Z","tags":["warning","monitoring-ui","kibana-monitoring"],"pid":1260,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","nam
e":"Error","stack":"[search_phase_execution_exception] all shards failed :: {"path":"/.kibana/_search","query":{"ignore_unavailable":true,"filter_path":"aggregations.types.buckets"},"body":"{\"si
ze\":0,\"query\":{\"terms\":{\"type\":[\"dashboard\",\"visualization\",\"search\",\"index-pattern\",\"graph-workspace\",\"timelion-sheet\"]}},\"aggs\":{\"types\":{\
"terms\":{\"field\":\"type\",\"size\":6}}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all sha
rds failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}\n at respond (/home/pp_risk_rom_batch/kibana-6.3.0-linux-x86_64/node_modules/elasticsearch/src/l
ib/transport.js:307:15)\n at checkRespForFailure (/home/pp_risk_rom_batch/kibana-6.3.0-linux-x86_64/node_modules/elasticsearch/src/lib/transport.js:266:7)\n at HttpConnector. (/home/pp_risk_rom_b
atch/kibana-6.3.0-linux-x86_64/node_modules/elasticsearch/src/lib/connectors/http.js:159:7)\n at IncomingMessage.bound (/home/pp_risk_rom_batch/kibana-6.3.0-linux-x86_64/node_modules/elasticsearch/node_modules
/lodash/dist/lodash.js:729:21)\n at emitNone (events.js:111:20)\n at IncomingMessage.emit (events.js:208:7)\n
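
For reference, the no_shard_available_action_exception above means no copy (primary or replica) of the .kibana shard is currently allocated. One way to confirm that on the Elasticsearch side, as a sketch assuming the cluster is reachable on localhost:9200:

# health of just the .kibana index
curl -s 'http://localhost:9200/_cluster/health/.kibana?pretty'
# shard-by-shard allocation state for .kibana
curl -s 'http://localhost:9200/_cat/shards/.kibana?v'
# ask Elasticsearch why the primary shard is unassigned
curl -s -H 'Content-Type: application/json' \
  'http://localhost:9200/_cluster/allocation/explain?pretty' \
  -d '{"index":".kibana","shard":0,"primary":true}'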

I deleted the .kibana index and ran Kibana again. It is back to normal now. However, the charts and index patterns are all lost. Does that mean all Kibana-related settings, configs, charts, and index patterns are stored in the .kibana index?
Thanks

Yes, they are. We usually ask our users to take a snapshot of the data or export all saved objects before doing any delete operations. Hope this helps! :slight_smile:
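
As a sketch of that recommendation (the repository name and path below are placeholders, and assume a shared path.repo location is configured on every node), backing up just the .kibana index with the snapshot API looks roughly like this:

# register a filesystem snapshot repository
curl -s -X PUT -H 'Content-Type: application/json' \
  'http://localhost:9200/_snapshot/my_backup?pretty' \
  -d '{"type":"fs","settings":{"location":"/mount/backups/my_backup"}}'
# take a snapshot containing only the .kibana index
curl -s -X PUT -H 'Content-Type: application/json' \
  'http://localhost:9200/_snapshot/my_backup/kibana-1?wait_for_completion=true&pretty' \
  -d '{"indices":".kibana"}'
# restore it later if needed
curl -s -X POST 'http://localhost:9200/_snapshot/my_backup/kibana-1/_restore?pretty'

Saved objects (dashboards, visualizations, searches, index patterns) can also be exported from the Kibana UI under Management > Saved Objects.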

Thanks,
Bhavya

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.