Kibana latency display bug or real?

With a Kibana 7.9.2 instance running Ingest Manager, I have about 100 clients connected.

Looking in Stack Monitoring, everything appears fine: CPU usage is low, RAM is not under any pressure, and disk I/O at both the storage and VM level is well within reason (sub 3 ms).

Under Stack Monitoring / Kibana / Overview,
Client Response Time (ms) shows a max of ~60000 and an avg of ~45000.

Is this due to the elastic-endpoint check-in times being set to time out after 1:30, or is it just a display bug? There are zero issues with queries, even running a 30-day one on Packetbeat data from 8 machines, which is rather unrealistic for day-to-day use for us.

Under the hood, agents connected to Ingest Manager make long-polling requests to Kibana.

These requests are made with a 60 s timeout to check for policy changes, which explains the values you are seeing in Stack Monitoring.
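
For context, the check-in pattern looks roughly like the sketch below (the endpoint path, payload shape, and agent ID are placeholders, not the real Fleet/Ingest Manager API): the agent holds each request open for up to 60 seconds waiting for a policy change, then immediately polls again, so every such request is recorded in Kibana's Client Response Time as tens of seconds even though nothing is actually slow.

```ts
// Minimal long-polling sketch (hypothetical endpoint and payload; Node 18+).
const KIBANA_URL = "https://kibana.example.com:5601"; // placeholder
const CHECKIN_TIMEOUT_MS = 60_000; // matches the 60 s policy-change window

async function checkInOnce(agentId: string): Promise<void> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), CHECKIN_TIMEOUT_MS);
  try {
    const res = await fetch(
      `${KIBANA_URL}/api/hypothetical/agents/${agentId}/checkin`, // not the real route
      { method: "POST", signal: controller.signal },
    );
    if (res.ok) {
      // A response before the timeout means a policy change or action arrived.
      console.log("policy actions:", await res.json());
    }
  } catch (err) {
    // An AbortError here just means the 60 s window elapsed with no change.
    console.log("check-in window closed:", (err as Error).name);
  } finally {
    clearTimeout(timer);
  }
}

// Each loop iteration is one long-held request, so monitoring sees a stream
// of ~60000 ms "responses" per connected agent.
async function run(agentId: string): Promise<void> {
  for (;;) {
    await checkInOnce(agentId);
  }
}

run("agent-001");
```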

Perfect, better to ask than assume.

So this leads me to another question. Any chance of adding "Client delay" as a separate latency section on the main page? This would really help if you are forced to track down delays at a glance. I'll admit I was pretty stupid and chased that rabbit for a minute, until I looked at the other Kibana instance on the same cluster and saw it didn't display the same numbers. It took a few seconds to remember that Ingest Manager isn't using load balancing at the moment. I know this question will come up often from other people that use it, at least it did for us....
