Kibana times out [5.5.x]

Kibana [5.5.x] times out most of the time with the following error:

Error: Request Timeout after 30000ms
at http://10.10.7.173:5601/bundles/kibana.bundle.js?v=15405:12:4431
at http://10.10.7.173:5601/bundles/kibana.bundle.js?v=15405:12:4852

Setup:

I have a cluster of three Elasticsearch data nodes, three master nodes, and one client node.
My Kibana instance points to the client node, which is part of the Elasticsearch cluster.
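
For reference, the wiring is just the standard kibana.yml setting; the grep below is a quick way to confirm it (the file path assumes a package install, and the expected value is based on the endpoint that appears in the logs further down):

# Confirm which Elasticsearch endpoint Kibana is configured against
# (path assumes a package install of Kibana 5.x)
grep '^elasticsearch\.url' /etc/kibana/kibana.yml
# expected, based on the request URLs in the logs below:
# elasticsearch.url: "http://10.10.7.173:9200"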

I have a dashboard that plots line graphs based on a few integer metrics. It works well when the number of samples or the time window is small. But if I set a relative time window of, say, the last 7 days and try to load the dashboard, I almost always get the error above in the UI, with a red warning bar at the top.

I don't see any of the data nodes or the client node under heavy load during this time. There are about 500 million records in the last 7 days that it has to work through to build/refresh the dashboard.
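
To rule Kibana out, a rough check is to run an aggregation comparable to the dashboard's directly against the client node and time it (the index pattern, timestamp field, and metric field below are placeholders for my actual ones):

# Time a 7-day date_histogram + avg aggregation against the client node,
# bypassing Kibana entirely (index and field names are placeholders;
# add -u user:pass if X-Pack Security requires it)
time curl -s -XPOST 'http://10.10.7.173:9200/my-index-*/_search?size=0' \
  -H 'Content-Type: application/json' -d '{
  "query": { "range": { "@timestamp": { "gte": "now-7d" } } },
  "aggs": {
    "per_hour": {
      "date_histogram": { "field": "@timestamp", "interval": "1h" },
      "aggs": { "avg_metric": { "avg": { "field": "my_metric" } } }
    }
  }
}'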

This is what I see in the Kibana log files:

"10.10.7.216","referer":"http://10.10.7.171:5601/app/kibana"},"res":{"statusCode":200,"responseTime":16,"contentLength":9},"message":"POST /es_admin/.kibana/dashboard/_search?size=1000 200 16ms - 9.0B"}
{"type":"log","@timestamp":"2017-08-08T23:20:39Z","tags":["error","elasticsearch","admin"],"pid":2262,"message":"Request error, retrying\nPOST http://10.10.7.173:9200/.kibana/config/_search => socket hang up"}
{"type":"log","@timestamp":"2017-08-08T23:21:09Z","tags":["status","plugin:xpack_main@5.5.1","error"],"pid":2262,"state":"red","message":"Status changed from green to red - Request Timeout after 60000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2017-08-08T23:21:09Z","tags":["status","plugin:graph@5.5.1","error"],"pid":2262,"state":"red","message":"Status changed from green to red - Request Timeout after 60000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2017-08-08T23:21:09Z","tags":["status","plugin:reporting@5.5.1","error"],"pid":2262,"state":"red","message":"Status changed from green to red - Request Timeout after 60000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2017-08-08T23:21:09Z","tags":["status","plugin:security@5.5.1","error"],"pid":2262,"state":"red","message":"Status changed from green to red - Request Timeout after 60000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2017-08-08T23:21:09Z","tags":["status","plugin:searchprofiler@5.5.1","error"],"pid":2262,"state":"red","message":"Status changed from green to red - Request Timeout after 60000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2017-08-08T23:21:09Z","tags":["status","plugin:ml@5.5.1","error"],"pid":2262,"state":"red","message":"Status changed from green to red - Request Timeout after 60000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2017-08-08T23:21:09Z","tags":["status","plugin:tilemap@5.5.1","error"],"pid":2262,"state":"red","message":"Status changed from green to red - Request Timeout after 60000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2017-08-08T23:21:09Z","tags":["status","plugin:watcher@5.5.1","error"],"pid":2262,"state":"red","message":"Status changed from green to red - Request Timeout after 60000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2017-08-08T23:21:09Z","tags":["status","plugin:elasticsearch@5.5.1","error"],"pid":2262,"state":"red","message":"Status changed from green to red - Request Timeout after 60000ms","prevState":"green","prevMsg":"Kibana index ready"}
{"type":"log","@timestamp":"2017-08-08T23:21:09Z","tags":["status","ui settings","error"],"pid":2262,"state":"red","message":"Status changed from green to red - Elasticsearch plugin is red","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2017-08-08T23:21:42Z","tags":["error","elasticsearch","admin"],"pid":2262,"message":"Request error, retrying\nPOST http://10.10.7.173:9200/.kibana/config/_search => socket hang up"}
{"type":"log","@timestamp":"2017-08-08T23:21:42Z","tags":["status","plugin:elasticsearch@5.5.1","info"],"pid":2262,"state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Request Timeout after 60000ms"}
{"type":"log","@timestamp":"2017-08-08T23:21:42Z","tags":["status","ui settings","info"],"pid":2262,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Elasticsearch plugin is red"}
{"type":"log","@timestamp":"2017-08-08T23:21:42Z","tags":["status","plugin:xpack_main@5.5.1","info"],"pid":2262,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 60000ms"}
{"type":"log","@timestamp":"2017-08-08T23:21:42Z","tags":["status","plugin:graph@5.5.1","info"],"pid":2262,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 60000ms"}
{"type":"log","@timestamp":"2017-08-08T23:21:42Z","tags":["status","plugin:reporting@5.5.1","info"],"pid":2262,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 60000ms"}
{"type":"log","@timestamp":"2017-08-08T23:21:42Z","tags":["status","plugin:security@5.5.1","info"],"pid":2262,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 60000ms"}
{"type":"log","@timestamp":"2017-08-08T23:21:42Z","tags":["status","plugin:searchprofiler@5.5.1","info"],"pid":2262,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 60000ms"}
{"type":"log","@timestamp":"2017-08-08T23:21:42Z","tags":["status","plugin:ml@5.5.1","info"],"pid":2262,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 60000ms"}
{"type":"log","@timestamp":"2017-08-08T23:21:42Z","tags":["status","plugin:tilemap@5.5.1","info"],"pid":2262,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 60000ms"}
{"type":"log","@timestamp":"2017-08-08T23:21:42Z","tags":["status","plugin:watcher@5.5.1","info"],"pid":2262,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 60000ms"}
{"type":"log","@timestamp":"2017-08-08T23:35:32Z","tags":["error","elasticsearch","admin"],"pid":2262,"message":"Request error, retrying\nPOST http://10.10.7.173:9200/.kibana/config/_search => socket hang up"}

The underlying error seems to be "socket hang up".

- All cluster nodes are healthy (green).
- 7 TB of the 8 TB disk capacity is free.
- I don't see CPU or JVM heap usage spiking drastically.
- There are no proxies in between; all Elasticsearch nodes are on the same L2 subnet as the Kibana host.

Any suggestions? I tried raising the timeout value to 60s, but it did not help.
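
For completeness, this is roughly how I raised the timeout (path and service name assume a package install with systemd; elasticsearch.requestTimeout is in milliseconds and defaults to 30000, which matches the 30000ms in the browser error):

# Roughly what I changed in kibana.yml, then restarted Kibana
sudo tee -a /etc/kibana/kibana.yml <<'EOF'
elasticsearch.requestTimeout: 60000
EOF
sudo systemctl restart kibana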

Sorry, I'm not going to be super helpful here, but this is some of the stuff I would try:

  • Restart the Kibana server. If Elasticsearch went down at some point, Kibana might have stale sockets lying around that could be triggering the socket hang up errors you're getting.
  • Ensure that there aren't any cgroup limits enforced on Elasticsearch; a throttled cgroup can cap Elasticsearch without the host ever looking like it's maxing out capacity.
  • Inspect the open sockets on the Kibana and Elasticsearch hosts. It's possible that Kibana is trying to reuse old sockets and timing out because they are stuck in some way: lsof -U | fgrep node (see the sketch after this list).
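
A quick sketch of those checks, assuming a systemd service and a cgroup v1 layout (adjust paths and service names to your install):

# Restart Kibana to drop any stale connections
sudo systemctl restart kibana

# Check for a CPU quota on the Elasticsearch cgroup (-1 means no quota)
cat /sys/fs/cgroup/cpu/system.slice/elasticsearch.service/cpu.cfs_quota_us 2>/dev/null

# Inspect sockets held by the Kibana (node) process
lsof -U | fgrep node                # Unix domain sockets, as above
sudo lsof -i -P -n | fgrep node     # TCP sockets, e.g. connections to :9200
ss -tanp | fgrep ':9200'            # connection states toward Elasticsearch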

Logs from Elasticsearch could be helpful, but I trust you've checked them. Is the "socket hang up" error you show here the first sign that something is wrong? Are you sure this isn't just a symptom of some other issue upstream?
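
If it helps, these are the Elasticsearch-side things I'd capture around the time of a hang (log path assumes a package install with the default cluster name; add credentials if Security requires them):

# Elasticsearch log around the time of a timeout
sudo tail -n 200 /var/log/elasticsearch/elasticsearch.log

# Cluster health and pending cluster-state tasks
curl -s 'http://10.10.7.173:9200/_cluster/health?pretty'
curl -s 'http://10.10.7.173:9200/_cat/pending_tasks?v'

# Search thread pool rejections and what the nodes are busy with
curl -s 'http://10.10.7.173:9200/_cat/thread_pool/search?v&h=node_name,active,queue,rejected'
curl -s 'http://10.10.7.173:9200/_nodes/hot_threads'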
