Intermittent exclamation triangle errors on a Canvas display

I have a Canvas display running in Chrome in kiosk mode. It runs 24/7 and is refreshed every 15 seconds. The Canvas has about 60 expressions that it displays: some text, a few image reveals, and a few lines.
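For reference, Chrome is launched with something like this (the workpad URL below is just a placeholder for my actual one):

google-chrome --kiosk "http://localhost:5601/app/canvas#/workpad/<workpad-id>"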

The display rarely renders with no exclamation point triangles, but which elements show the triangles varies quite a lot. It seems that pretty much any of the elements may show as a triangle at some point.

The desktop computer driving the display also runs Elasticsearch and Kibana locally. The machine is not showing any signs of overwork; there is plenty of memory and CPU headroom.

How can I diagnose and fix the issue so that the display is consistently clean and error-free?

Hey @kevinpd, since you are running Kibana and Elasticsearch locally, can you see any errors in the log files when an expression fails to execute? If so, can you post the log here and we can take a look.

Does this help?
I ran journalctl -u kibana to get the following:

Nov 26 08:27:54 kevinpd-linux-02 kibana[4019]: {"type":"error","@timestamp":"2019-11-26T16:27:54Z","tags":["warning","stats-collection"],"pid":4019,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {"path":"/.kibana/_search","query":{"size":10000,"ignore_unavailable":true,"filter_path":"hits.hits._source.canvas-workpad,-hits.hits._source.canvas-workpad.assets"},"body":"{\"query\":{\"bool\":{\"filter\":{\"term\":{\"type\":\"canvas-workpad\"}}}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":,\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":},\"status\":503}"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
Nov 26 08:27:54 kevinpd-linux-02 kibana[4019]: {"type":"log","@timestamp":"2019-11-26T16:27:54Z","tags":["warning","stats-collection"],"pid":4019,"message":"Unable to fetch data from kibana collector"}
Nov 26 08:27:54 kevinpd-linux-02 kibana[4019]: {"type":"error","@timestamp":"2019-11-26T16:27:54Z","tags":["warning","stats-collection"],"pid":4019,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {"path":"/.kibana/_search","query":{"ignore_unavailable":true,"filter_path":"aggregations.types.buckets"},"body":"{\"size\":0,\"query\":{\"terms\":{\"type\":[\"dashboard\",\"visualization\",\"search\",\"index-pattern\",\"graph-workspace\",\"timelion-sheet\"]}},\"aggs\":{\"types\":{\"terms\":{\"field\":\"type\",\"size\":6}}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":,\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":},\"status\":503}"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
Nov 26 08:27:54 kevinpd-linux-02 kibana[4019]: {"type":"log","@timestamp":"2019-11-26T16:27:54Z","tags":["warning","stats-collection"],"pid":4019,"message":"Unable to fetch data from visualization_types collector"}
Nov 26 08:27:54 kevinpd-linux-02 kibana[4019]: {"type":"error","@timestamp":"2019-11-26T16:27:54Z","tags":["warning","stats-collection"],"pid":4019,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"sort\":[{\"task.runAt\":\"asc\"},{\"_id\":\"desc\"}],\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"filter\":{\"term\":{\"_id\":\"oss_telemetry-vis_telemetry\"}}}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":,\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":},\"status\":503}"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
Nov 26 08:27:54 kevinpd-linux-02 kibana[4019]: {"type":"log","@timestamp":"2019-11-26T16:27:54Z","tags":["warning","stats-collection"],"pid":4019,"message":"Unable to fetch data from maps collector"}
Nov 26 08:27:54 kevinpd-linux-02 kibana[4019]: {"type":"error","@timestamp":"2019-11-26T16:27:54Z","tags":["warning","stats-collection"],"pid":4019,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"sort\":[{\"task.runAt\":\"asc\"},{\"_id\":\"desc\"}],\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"filter\":{\"term\":{\"_id\":\"Maps-maps_telemetry\"}}}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":,\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":},\"status\":503}"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n

This looks like some sort of configuration or failure issue with Elasticsearch. Here is a Stack Overflow question with some troubleshooting tips that may be worth trying:
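Independent of that link, a couple of quick checks against the local Elasticsearch instance might show whether any shards are unassigned and whether Elasticsearch itself is logging errors around the time a triangle appears (these are just the standard cluster/cat APIs and the systemd journal for the Elasticsearch service, nothing Canvas specific):

http localhost:9200/_cluster/health
http "localhost:9200/_cat/shards?v"
http "localhost:9200/_cat/indices?v"
journalctl -u elasticsearch --since "1 hour ago"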

Thanks Tim.

I don't think that Stack Overflow post was helpful. I don't have any red shards showing, but Canvas still has a few exclamation triangles.

One idea in that Stack Overflow post was to set number_of_replicas to 0. I tried that, but it doesn't appear to make any difference; still exclamation triangles.
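For reference, I applied it to all indices with something like this (httpie syntax; _all is just the broadest index pattern):

http PUT localhost:9200/_all/_settings index.number_of_replicas:=0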

@kevinpd, how many primary shards do you have? What version of Elasticsearch/Kibana are you running? Can you post the output of the cluster health API?

Hi Tim,

Cluster Info:

http localhost:9200/_cluster/health
HTTP/1.1 200 OK
content-encoding: gzip
content-length: 229
content-type: application/json; charset=UTF-8

{
  "active_primary_shards": 11,
  "active_shards": 11,
  "active_shards_percent_as_number": 100.0,
  "cluster_name": "elasticsearch",
  "delayed_unassigned_shards": 0,
  "initializing_shards": 0,
  "number_of_data_nodes": 1,
  "number_of_in_flight_fetch": 0,
  "number_of_nodes": 1,
  "number_of_pending_tasks": 0,
  "relocating_shards": 0,
  "status": "green",
  "task_max_waiting_in_queue_millis": 0,
  "timed_out": false,
  "unassigned_shards": 0
}

Version Info:
http localhost:9200
HTTP/1.1 200 OK
content-encoding: gzip
content-length: 311
content-type: application/json; charset=UTF-8

{
  "cluster_name": "elasticsearch",
  "cluster_uuid": "Ogd1cZYKTzaAWLCQX4Qitw",
  "name": "******",
  "tagline": "You Know, for Search",
  "version": {
    "build_date": "2019-04-05T22:55:32.697037Z",
    "build_flavor": "default",
    "build_hash": "b7e28a7",
    "build_snapshot": false,
    "build_type": "deb",
    "lucene_version": "8.0.0",
    "minimum_index_compatibility_version": "6.0.0-beta1",
    "minimum_wire_compatibility_version": "6.7.0",
    "number": "7.0.0"
  }
}

Kibana 7.0.0

Thanks @kevinpd, nothing in your configuration looks obviously wrong to me, at least. Have you tried recreating the issue outside of Canvas? Do you notice any Elasticsearch queries failing intermittently elsewhere? You may get better answers if you post this in the Elasticsearch forum; I don't think it is Canvas specific.
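In the meantime, something along these lines would re-run the canvas-workpad search from your kibana log directly against Elasticsearch and show whether it fails intermittently outside the browser (the iteration count and one-second pause are arbitrary):

for i in $(seq 1 200); do
  # the canvas-workpad search taken from the kibana log above
  http POST localhost:9200/.kibana/_search \
    query:='{"bool":{"filter":{"term":{"type":"canvas-workpad"}}}}' \
    | grep -q '"failed":0' || echo "request $i failed"
  sleep 1
done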

Thanks Tim.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.