Kibana Red Status


(None) #1

Hi, I'm running Kibana 6.1.1.

I see the following errors in the Kibana logs. The cluster is healthy and running, and I can run searches against it with no problem:

Unhandled rejection Error: Request Timeout after 30000ms
    at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:342:15
    at Timeout.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:371:7)
Unhandled rejection Error: Request Timeout after 30000ms
    at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:342:15
    at Timeout.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:371:7)
Unhandled rejection TypeError: Cannot read property 'result' of undefined
    at /usr/share/kibana/plugins/x-pack/plugins/monitoring/server/kibana_monitoring/lib/type_collector.js:90:50
    at Array.filter (native)
    at /usr/share/kibana/plugins/x-pack/plugins/monitoring/server/kibana_monitoring/lib/type_collector.js:90:29
    at next (native)
    at step (/usr/share/kibana/plugins/x-pack/plugins/monitoring/server/kibana_monitoring/lib/type_collector.js:18:202)
    at /usr/share/kibana/plugins/x-pack/plugins/monitoring/server/kibana_monitoring/lib/type_collector.js:18:383
    at tryCatcher (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/util.js:11:23)
    at Promise._settlePromiseFromHandler (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise.js:489:31)
    at Promise._settlePromise (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise.js:546:18)
    at Promise._settlePromise0 (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise.js:591:10)
    at Promise._settlePromises (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise.js:674:18)
    at Promise._fulfill (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise.js:615:18)
    at MappingPromiseArray.PromiseArray._resolve (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise_array.js:125:19)
    at MappingPromiseArray._promiseFulfilled (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/map.js:97:18)
    at Promise._settlePromise (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise.js:551:26)
    at Promise._settlePromise0 (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise.js:591:10)
    at Promise._settlePromises (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise.js:674:18)
    at Promise._fulfill (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise.js:615:18)
    at Promise._settlePromise (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise.js:559:21)
    at Promise._settlePromise0 (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise.js:591:10)
    at Promise._settlePromises (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise.js:674:18)
    at Promise._fulfill (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise.js:615:18)
    at PropertiesPromiseArray.PromiseArray._resolve (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise_array.js:125:19)
    at PropertiesPromiseArray._promiseFulfilled (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/props.js:78:14)
    at Promise._settlePromise (/usr/share/kibana/plugins/x-pack/node_modules/bluebird/js/release/promise.js:551:26)
Unhandled rejection Error: Request Timeout after 30000ms
    at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:342:15
    at Timeout.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:371:7)
    at ontimeout (timers.js:386:11)
    at tryOnTimeout (timers.js:250:5)
    at Timer.listOnTimeout (timers.js:214:5)

(None) #2

I'm also seeing this. At the time I was doing some performance testing with Kafka/Logstash, and the cluster logs indicate that I exceeded the bulk queue capacity of 200 (it hit 204). But Kibana still hasn't recovered. I was still able to search in Kibana on the Discover tab until I hit refresh in the browser and got the Red Status screen. Now it's just stuck, even though the cluster looks OK:

Unhandled rejection [export_exception] Exception when closing export bulk :: {"path":"/_xpack/monitoring/_bulk","query":{"system_id":"kibana","system_api_version":"6","interval":"10000ms"},"body":"{\"index\":{\"_type\":\"kibana_stats\"}}\n{\"concurrent_connections\":757,\"os\":{\"load\":{\"1m\":1.1865234375,\"5m\":1.77880859375,\"15m\":1.41650390625},\"memory\":{\"total_in_bytes\":67387469824,\"free_in_bytes\":50199109632,\"used_in_bytes\":17188360192},\"uptime_in_millis\":948341000},\"process\":{\"event_loop_delay\":67140.04897022247,\"memory\":{\"heap\":{\"total_in_bytes\":139718656,\"used_in_bytes\":122202456,\"size_limit\":1107296256},\"resident_set_size_in_bytes\":188874752},\"uptime_in_millis\":278774188},\"requests\":{\"disconnects\":0,\"total\":5997,\"status_codes\":{\"200\":4032,\"302\":3,\"304\":1952,\"401\":2,\"404\":6}},\"response_times\":{\"average\":30014,\"max\":30014},\"timestamp\":\"2018-01-20T04:34:11.219Z\",\"kibana\":{\"uuid\":\"a08adce5-461d-4219-bd2e-591c3793bbe5\",\"name\":\"kibana\",\"index\":\".kibana\",\"host\":\"5085bfa639b6\",\"transport_address\":\"0:5601\",\"version\":\"6.1.1\",\"snapshot\":false,\"status\":\"green\"},\"usage\":{\"dashboard\":{\"total\":0},\"visualization\":{\"total\":0},\"search\":{\"total\":0},\"index_pattern\":{\"total\":5},\"index\":\".kibana\",\"graph_workspace\":{\"total\":0},\"timelion_sheet\":{\"total\":0},\"xpack\":{\"reporting\":{\"available\":true,\"enabled\":true,\"_all\":0,\"csv\":{\"available\":true,\"total\":0},\"printable_pdf\":{\"available\":true,\"total\":0}}}}}\n","statusCode":500,"response":"{\"took\":68,\"errors\":true,\"error\":{\"type\":\"export_exception\",\"reason\":\"Exception when closing export bulk\",\"caused_by\":{\"type\":\"export_exception\",\"reason\":\"failed to flush export bulks\",\"caused_by\":{\"type\":\"export_exception\",\"reason\":\"bulk [default_local] reports failures when exporting documents\",\"exceptions\":[{\"type\":\"export_exception\",\"reason\":\"RemoteTransportException[[data][9.0.13.131:9300][indices:data/write/bulk[s]]]; nested: RemoteTransportException[[data][9.0.13.131:9300][indices:data/write/bulk[s][p]]]; nested: EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$7@5d14faab on EsThreadPoolExecutor[bulk, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@27dbae0d[Running, pool size = 16, active threads = 16, queued tasks = 209, completed tasks = 9808083]]];\",\"caused_by\":{\"type\":\"es_rejected_execution_exception\",\"reason\":\"rejected execution of org.elasticsearch.transport.TransportService$7@5d14faab on EsThreadPoolExecutor[bulk, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@27dbae0d[Running, pool size = 16, active threads = 16, queued tasks = 209, completed tasks = 9808083]]\"}}]}}}}"}
    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:295:15)
    at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:254:7)
    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:159:7)
    at IncomingMessage.bound (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/dist/lodash.js:729:21)
    at emitNone (events.js:91:20)
    at IncomingMessage.emit (events.js:185:7)
    at endReadableNT (_stream_readable.js:974:12)
    at _combinedTickCallback (internal/process/next_tick.js:80:11)
    at process._tickDomainCallback (internal/process/next_tick.js:128:9)
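
The root cause is at the bottom of that response: an es_rejected_execution_exception on the bulk thread pool (queue capacity = 200, queued tasks = 209), meaning the data node was rejecting the monitoring exporter's bulk writes. Assuming Elasticsearch is reachable on localhost:9200, a quick way to watch this is the _cat thread pool API:

    # Per-node bulk thread pool pressure; "rejected" is cumulative
    # since the node started, so watch whether it keeps climbing.
    curl -s 'http://localhost:9200/_cat/thread_pool/bulk?v&h=node_name,active,queue,queue_size,rejected'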

(Marius Dragomir) #3

Hello,
Is the queue still in the same state right now?


(None) #4

No. I have Kibana connecting through a coordinating node. I restarted the coordinator and Kibana started working again. I'll have to check when/if it happens again, but I'm pretty sure the cluster was healthy and green.
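
If it recurs, it may help to capture state before restarting anything. A couple of hedged examples, assuming you can curl the coordinator directly on port 9200:

    # Overall cluster health (note: green here would not rule out a stuck coordinator)
    curl -s 'http://localhost:9200/_cluster/health?pretty'
    # What the coordinator's threads are busy with at that moment
    curl -s 'http://localhost:9200/_nodes/_local/hot_threads'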


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.