Search phase execution exception in Kibana Canvas

Hi,
Using Canvas in Kibana, I tried to display metrics for a server, but it shows the error "[essql] > Unexpected error from Elasticsearch: search phase execution exception".
Below is the message from kibana.stdout:
{"type":"log","@timestamp":"2019-06-20T08:02:02Z","tags":["error","task_manager"],"pid":22254,"message":"Failed to poll for work: [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)]; :: {"path":"/.kibana_task_manager/_doc/Maps-maps_telemetry/_update","query":{"if_seq_no":40,"if_primary_term":4,"refresh":"true"},"body":"{\"doc\":{\"type\":\"task\",\"task\":{\"taskType\":\"maps_telemetry\",\"state\":\"{\\\"runs\\\":1,\\\"stats\\\":{}}\",\"params\":\"{}\",\"attempts\":0,\"scheduledAt\":\"2019-05-27T04:27:32.931Z\",\"runAt\":\"2019-06-20T08:03:02.997Z\",\"status\":\"running\"},\"kibana\":{\"uuid\":\"979cbc12-fc31-443f-9583-0071fb272f4b\",\"version\":6070299,\"apiVersion\":1}}}","statusCode":403,"response":"{\"error\":{\"root_cause\":[{\"type\":\"cluster_block_exception\",\"reason\":\"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];\"}],\"type\":\"cluster_block_exception\",\"reason\":\"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];\"},\"status\":403}"}"}
{"type":"log","@timestamp":"2019-06-20T08:02:06Z","tags":["error","task_manager"],"pid":22254,"message":"Failed to poll for work: [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)]; :: {"path":"/.kibana_task_manager/_doc/Maps-maps_telemetry/_update","query":{"if_seq_no":40,"if_primary_term":4,"refresh":"true"},"body":"{\"doc\":{\"type\":\"task\",\"task\":{\"taskType\":\"maps_telemetry\",\"state\":\"{\\\"runs\\\":1,\\\"stats\\\":{}}\",\"params\":\"{}\",\"attempts\":0,\"scheduledAt\":\"2019-05-27T04:27:32.931Z\",\"runAt\":\"2019-06-20T08:03:06.022Z\",\"status\":\"running\"},\"kibana\":{\"uuid\":\"979cbc12-fc31-443f-9583-0071fb272f4b\",\"version\":6070299,\"apiVersion\":1}}}","statusCode":403,"response":"{\"error\":{\"root_cause\":[{\"type\":\"cluster_block_exception\",\"reason\":\"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];\"}],\"type\":\"cluster_block_exception\",\"reason\":\"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];\"},\"status\":403}"}"}

Looks like the user doesn't have enough permissions to do these queries. What user are you using to login and what roles do you have assigned?

Hi Marius,
We have not defined any users or roles; the users, roles.yml, and users_roles files are all empty.

Did you enable security in Kibana or Elasticsearch?

Hi Marius,
We have not enabled security in Kibana or Elasticsearch.

Then maybe your ES cluster ran out of disk space and marked your indices as read-only as a result. Can you check this?
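For context, in 6.x Elasticsearch applies the read-only/allow-delete block automatically when a node crosses the flood-stage disk watermark (95% of disk by default). Disk usage per node can be checked from the Dev Tools console (output columns may vary slightly by version):

```
GET _cat/allocation?v
```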

Yes, on a couple of occasions the filesystems were full, and we deleted index files to free up space.
Since then we have been facing this issue. It first happened in May, and the most recent occurrence was two weeks ago.

Please help fix this issue, as we do not see any data in the Kibana dashboards after deleting indices to free up space.

You need to mark the index as writable again.

PUT /index-name/_settings
{
  "index.blocks.read_only_allow_delete": null
}

Alternatively, you can find the index in question (.kibana_task_manager) in Index Management and change the setting there using the UI.
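If several indices were blocked at once, which is typical when the flood-stage watermark trips, the same setting can be cleared on every index in a single request instead of one index at a time:

```
PUT /_all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
```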

Hi Marius,

I ran the above command to make the index writable and got the 404 error shown below:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index",
        "resource.type" : "index_or_alias",
        "resource.id" : "index-name",
        "index_uuid" : "na",
        "index" : "index-name"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index",
    "resource.type" : "index_or_alias",
    "resource.id" : "index-name",
    "index_uuid" : "na",
    "index" : "index-name"
  },
  "status" : 404
}
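The 404 suggests index-name was used literally; it is a placeholder for whichever index carries the block. For the .kibana_task_manager index from the earlier log, the request would be:

```
PUT /.kibana_task_manager/_settings
{
  "index.blocks.read_only_allow_delete": null
}
```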

I also could not find .kibana_task_manager in the Index Management UI.

I also noticed that the health of a few indices is yellow, and they show the message below:
Index lifecycle error

illegal_argument_exception: index.lifecycle.rollover_alias [oldpacketbeat] does not point to index [packetbeat-6.6.1-2019.06.23]
We have an index lifecycle policy called "datastream_policy" where hot-phase rollover is enabled at 7 days or 10 GB, and the delete phase removes indices 5 days after the rollover date.
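One way to resolve that ILM error, assuming oldpacketbeat really is the intended write alias for that index (both names here are taken from the error message, so verify them first), is to point the alias at the index and then retry the failed ILM step:

```
POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "packetbeat-6.6.1-2019.06.23",
        "alias": "oldpacketbeat",
        "is_write_index": true
      }
    }
  ]
}

POST /packetbeat-6.6.1-2019.06.23/_ilm/retry
```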

I found the "edit settings" tab for the particular index in the Index Management section, where I changed "index.blocks.read_only_allow_delete": "true" to "index.blocks.read_only_allow_delete": "false".

Can you confirm whether what I did above is correct?
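Worth noting for this thread: in 6.x the block is not removed automatically once disk space is freed (automatic release only arrived in 7.4), so clearing it by hand is expected. If the nodes keep hovering near the default watermarks, the flood-stage threshold can be raised temporarily, though freeing disk is the real fix:

```
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}
```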

Hi Marius Dragomir,

Can I have an update on this? I need to fix it ASAP.

Thanks

@Marius_Dragomir,

Could you please let me know how to housekeep the indices? I have created an index rollover policy with hot, warm, and cold phases of 1 day each, and after 7 days the index is deleted.

Here I am facing an "index alias does not match rollover policy" error.

I need urgent help.

Thanks

Hi,

As part of housekeeping, all indices in Elasticsearch were deleted, and I tried restarting Elasticsearch and Kibana. However, Kibana is not running, and the Kibana logs show an HTTP 503 error:

{"type":"log","@timestamp":"2019-07-31T07:36:51Z","tags":["fatal","root"],"pid":28159,"message":"{ [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/doc/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"visualization\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.visualization\":\"6.7.2\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":,\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":},\"status\":503}"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n status: 503,\n displayName: 'ServiceUnavailable',\n message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n path: '/.kibana/doc/_count',\n query: {},\n body:\n { error:\n { root_cause: ,\n type: 'search_phase_execution_exception',\n reason: 'all shards failed',\n phase: 'query',\n grouped: true,\n failed_shards: },\n status: 503 },\n statusCode: 503,\n response:\n '{"error":{"root_cause":,"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":},"status":503}',\n toString: [Function],\n toJSON: [Function],\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n statusCode: 503,\n error: 'Service Unavailable' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }"}
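For anyone landing in this state: deleting the .kibana index removes all saved objects (index patterns, visualizations, dashboards), and the 503 above means the shards behind the /.kibana/doc/_count query are not available. Before restarting Kibana, which recreates the index on startup, it is worth confirming the cluster is healthy enough to allocate shards at all:

```
GET /_cluster/health
GET /_cat/shards/.kibana?v
```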