Cannot refresh field list

Updated my template and can no longer refresh the field list in Kibana (we have ~610 fields). I get "An internal server error occurred" with status 500. I've restarted Kibana. I also upgraded all nodes to 6.4.0 about 14 days ago. Everything else in Kibana appears to be working fine.

Not sure what is going on. Any help would be appreciated.

An internal server error occurred

Error: An internal server error occurred
at https://kibana:5601/bundles/commons.bundle.js:3:1225960
at processQueue (https://kibana:5601/bundles/vendors.bundle.js:197:199684)
at https://kibana:5601/bundles/vendors.bundle.js:197:200647
at Scope.$digest (https://kibana:5601/bundles/vendors.bundle.js:197:210409)
at Scope.$apply (https://kibana:5601/bundles/vendors.bundle.js:197:213216)
at done (https://kibana:5601/bundles/vendors.bundle.js:197:132715)
at completeRequest (https://kibana:5601/bundles/vendors.bundle.js:197:136327)
at XMLHttpRequest.requestLoaded (https://kibana:5601/bundles/vendors.bundle.js:197:135223)

Failed to load resource: the server responded with a status of 500 (Internal Server Error) :5601/api/index_patterns/_fields_for_wildcard?pattern=logstash-*&meta_fields=%5B%22_source%22%2C%22_id%22%2C%22_type%22%2C%22_index%22%2C%22_score%22%5D:1
:5601/bundles/commons.bundle.js:3 Detected an unhandled Promise rejection.
Error: An internal server error occurred

I've restarted Kibana and Elasticsearch with no luck.
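For what it's worth, Kibana's _fields_for_wildcard endpoint is, to my understanding, backed by Elasticsearch's field capabilities API, so roughly the same lookup can be tried directly against Elasticsearch in the Dev Console to see whether it is slow on its own:

GET /logstash-*/_field_caps?fields=*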

On Kibana restart I see the log entries below. They say another instance of Kibana may be running, but that is not the case. Not sure why that log is showing up.

{"type":"error","@timestamp":"2018-09-14T07:04:54Z","tags":["warning","process"],"pid":25153,"level":"error","error":{"message":"Error: listen EADDRINUSE 0.0.0.0:5601\n at Object._errnoException (util.js:992:11)\n at _exceptionWithHostPort (util.js:1014:20)\n at Server.setupListenHandle [as _listen2] (net.js:1355:14)\n at listenInCluster (net.js:1396:12)\n at doListen (net.js:1505:7)\n at _combinedTickCallback (internal/process/next_tick.js:141:11)\n at process._tickCallback (internal/process/next_tick.js:180:9)","name":"UnhandledPromiseRejectionWarning","stack":"UnhandledPromiseRejectionWarning: Error: listen EADDRINUSE 0.0.0.0:5601\n at Object._errnoException (util.js:992:11)\n at _exceptionWithHostPort (util.js:1014:20)\n at Server.setupListenHandle [as _listen2] (net.js:1355:14)\n at listenInCluster (net.js:1396:12)\n at doListen (net.js:1505:7)\n at _combinedTickCallback (internal/process/next_tick.js:141:11)\n at process._tickCallback (internal/process/next_tick.js:180:9)\n at emitWarning (internal/process/promises.js:65:17)\n at emitPendingUnhandledRejections (internal/process/promises.js:109:11)\n at process._tickCallback (internal/process/next_tick.js:189:7)"},"message":"Error: listen EADDRINUSE 0.0.0.0:5601\n at Object._errnoException (util.js:992:11)\n at _exceptionWithHostPort (util.js:1014:20)\n at Server.setupListenHandle [as _listen2] (net.js:1355:14)\n at listenInCluster (net.js:1396:12)\n at doListen (net.js:1505:7)\n at _combinedTickCallback (internal/process/next_tick.js:141:11)\n at process._tickCallback (internal/process/next_tick.js:180:9)"}

{"type":"error","@timestamp":"2018-09-14T07:04:54Z","tags":["warning","process"],"pid":25153,"level":"error","error":{"message":"Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)","name":"UnhandledPromiseRejectionWarning","stack":"Error: listen EADDRINUSE 0.0.0.0:5601\n at Object._errnoException (util.js:992:11)\n at _exceptionWithHostPort (util.js:1014:20)\n at Server.setupListenHandle [as _listen2] (net.js:1355:14)\n at listenInCluster (net.js:1396:12)\n at doListen (net.js:1505:7)\n at _combinedTickCallback (internal/process/next_tick.js:141:11)\n at process._tickCallback (internal/process/next_tick.js:180:9)"},"message":"Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)"}

{"type":"log","@timestamp":"2018-09-14T07:04:54Z","tags":["fatal"],"pid":25153,"message":"Port 5601 is already in use. Another instance of Kibana may be running!"}

Found a workaround for now: I extended the timeout in Kibana from 30 seconds to 60 seconds. Any suggestions for how to find the root cause of the slowness? I'll keep poking around and learning.
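For reference, the timeout in question is presumably elasticsearch.requestTimeout in kibana.yml, which defaults to 30000 ms:

# kibana.yml -- raise the Elasticsearch request timeout from the 30 s default
elasticsearch.requestTimeout: 60000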

610 fields should not cause any major issues. Maybe the systems you are running ES on are in general too undersized for your data? The question is how long the request against Elasticsearch itself takes. Maybe you can open the Dev Console and execute:

GET /your-index-pattern/_mapping

and check how long that actually takes?
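If you want an actual number rather than eyeballing the Dev Console, a rough sketch with curl (assuming direct, unsecured access to an Elasticsearch node on port 9200, using the logstash-* pattern from the error above) would be:

curl -s -o /dev/null -w 'total: %{time_total}s\n' 'http://localhost:9200/logstash-*/_mapping'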

Takes about the same length of time, maybe a tad quicker, but still over 30 seconds. We have ~720 indices, so I'm not sure if it's the number of indices in combination with the number of fields that is the problem. Undersized boxes, but they do have a lot of RAM; 32GB at the minimum.
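For context, the number of concrete indices behind the pattern (and their sizes) can be listed in the Dev Console with something like:

GET /_cat/indices/logstash-*?v&h=index,pri,docs.count,store.size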

Learning ELK as I go, so I have no doubt there is still a lot I don't know that I don't know.

Hi Brian,

I will move this to the Elasticsearch forum instead; maybe someone there will be able to provide better help, since it seems it's not Kibana causing the delay but something within the indices/setup that is making this run so long.

Cheers,
Tim


Sorry, I didn't get much sleep and left out a key part of the second-to-last sentence: I do not believe the boxes are undersized. I'm not finding any performance issues yet. All data nodes have 32GB RAM presently. Disk use seems to be well within performance constraints.
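For reference, heap, CPU and disk pressure per node can be eyeballed quickly in the Dev Console, e.g.:

GET /_cat/nodes?v&h=name,heap.percent,ram.percent,cpu,load_1m
GET /_cat/allocation?v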

Not sure if this is good or bad, but looking at the cluster state, this is its size:

"compressed_size_in_bytes": 640078

We are also seeing the issue discussed here: Unable to write index pattern

Is your Kibana instance talking to Elasticsearch through a load balancer or a proxy (Nginx, HAProxy, etc.)?

It isn't.
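For reference, in Kibana 6.x the endpoint is whatever elasticsearch.url points at in kibana.yml; a direct, proxy-free connection looks like:

# kibana.yml -- Kibana talking straight to a node, no proxy in between
elasticsearch.url: "http://localhost:9200"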

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.