Kibana server is not ready yet, all shards failed

Hi all,

After upgrading my ELK stack from 6.4.3 to 6.5.3, I am having some trouble with Kibana...

I have updated all the components, including the plugins, and they are all on the same version. I have only one Elasticsearch node, and all the components run on the same server. The Logstash and Elasticsearch logs aren't reporting any errors.
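
To double-check that the versions really match, something like this should work (assuming the same host/port as the cluster health call below, and a standard package install):

curl -X GET "10.17.89.150:9200"           # the "number" field reports the Elasticsearch version
/usr/share/kibana/bin/kibana --version    # reports the Kibana version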

  kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2018-12-14 11:59:07 CET; 19min ago
 Main PID: 587 (node)
    Tasks: 10 (limit: 4915)
   CGroup: /system.slice/kibana.service
       └─587 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

Dec 14 12:01:29 pfesvielk001 kibana[587]: {"type":"error","@timestamp":"2018-12-14T11:01:29Z","tags":["warning","stats-collection"],"pid":587,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n    at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:165:7)\n    at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4949:19)\n    at emitNone (events.js:111:20)\n    at IncomingMessage.emit (events.js:208:7)\n    at endReadableNT (_stream_readable.js:1064:12)\n    at _combinedTickCallback (internal/process/next_tick.js:139:11)\n    at process._tickCallback (internal/process/next_tick.js:181:9)"},"message":"[search_phase_execution_exception] all shards failed"}
Dec 14 12:01:29 pfesvielk001 kibana[587]: {"type":"log","@timestamp":"2018-12-14T11:01:29Z","tags":["warning","stats-collection"],"pid":587,"message":"Unable to fetch data from kibana collector"}
Dec 14 12:01:29 pfesvielk001 kibana[587]: {"type":"error","@timestamp":"2018-12-14T11:01:29Z","tags":["warning","stats-collection"],"pid":587,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n    at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:165:7)\n    at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4949:19)\n    at emitNone (events.js:111:20)\n    at IncomingMessage.emit (events.js:208:7)\n    at endReadableNT (_stream_readable.js:1064:12)\n    at _combinedTickCallback (internal/process/next_tick.js:139:11)\n    at process._tickCallback (internal/process/next_tick.js:181:9)"},"message":"[search_phase_execution_exception] all shards failed"}
Dec 14 12:01:29 pfesvielk001 kibana[587]: {"type":"log","@timestamp":"2018-12-14T11:01:29Z","tags":["warning","stats-collection"],"pid":587,"message":"Unable to fetch data from canvas collector"}
Dec 14 12:01:29 pfesvielk001 kibana[587]: {"type":"error","@timestamp":"2018-12-14T11:01:29Z","tags":
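
Both errors are against /.kibana/_search, so the state of the .kibana index itself is probably worth checking, e.g. (same endpoint as below):

curl -X GET "10.17.89.150:9200/_cat/indices/.kibana*?v"    # health and status of the .kibana index
curl -X GET "10.17.89.150:9200/_cat/shards/.kibana*?v"     # per-shard state, shows any UNASSIGNED shards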

Cluster Health:

curl -X GET "10.17.89.150:9200/_cluster/health?pretty"
{
  "cluster_name" : "ESVITESC",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 1139,
  "active_shards" : 1139,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1122,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.37593984962406
}
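
As I understand it, the yellow status on a single node is expected (replica shards can never be assigned without a second node), but that by itself shouldn't make searches against .kibana fail. To see why a given shard is unassigned:

curl -X GET "10.17.89.150:9200/_cluster/allocation/explain?pretty"

And, if the replicas aren't needed on a single-node cluster, they can be dropped so the cluster goes green (just a sketch; this sets replicas to 0 on all indices):

curl -X PUT "10.17.89.150:9200/_all/_settings" -H 'Content-Type: application/json' -d'
{ "index": { "number_of_replicas": 0 } }
'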

Any leads? I have tried various procedures posted in the FAQs and blogs.

Thank you in advance.

When you say there is nothing in the logs, do you mean nothing at all or nothing of note? Nothing at all suggests the logs aren't writing to where you think they are.
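
For example, assuming systemd and a default package layout, you can confirm where the logs actually go:

grep -E '^path\.logs' /etc/elasticsearch/elasticsearch.yml    # where Elasticsearch is configured to write its logs
journalctl -u elasticsearch -u kibana --since "1 hour ago"    # what both services logged to the systemd journal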
