When you go to Stack Monitoring, you can click on the Kibana tab to get additional metrics.
There will be a "Requests" and a "Connections" tally in the Overview tab.
1a) I am wondering how these are officially counted.
Is a request a query? Something else?
Is a connection some number of users logged in to Kibana? Something else?
1b) How can you limit client requests?
Does Kibana continue to count client requests (and/or connections) if users do not properly log out of the system?
These stats are exposed, and actually collected by monitoring, through the /api/stats endpoint. The metrics come from the web server that serves requests for Kibana, including static assets.
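If you want to look at the raw counters outside the monitoring UI, something like this minimal sketch works; it assumes Kibana is reachable at localhost:5601 with basic auth, and the exact response keys vary by Kibana version:

```python
# Minimal sketch: query Kibana's /api/stats endpoint directly.
# Host and credentials are assumptions for the example; adjust to your setup.
import requests

resp = requests.get(
    "http://localhost:5601/api/stats",
    params={"extended": "true"},   # ask for the extra usage/cluster detail
    auth=("elastic", "changeme"),  # hypothetical credentials
    timeout=10,
)
resp.raise_for_status()
stats = resp.json()

# Print whichever request/connection counters this Kibana version exposes;
# the key names differ across versions.
for key in ("requests", "concurrent_connections"):
    if key in stats:
        print(key, "=>", stats[key])
```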
Requests are counted for anything served by Kibana, including static assets. If the browser is left open and anything on that page is polling, Kibana will continue to serve those assets and count those requests.
There is no way to "limit" requests at the Kibana level. What are you trying to achieve, or why are you looking into limiting requests?
Thanks so much. The "request" clarification helps a lot. Do you happen to have any more insight into how "Connections" are counted? Would that be a user who is logged in but doesn't have a polling event open?
We have two clusters at two separate facilities. One never breaks; the other breaks all the time.
We've tried a bunch of fixes related to Internal 500 errors ("Unable to revive connection"), but recently we noticed that the cluster that breaks a lot has far more connections and requests. We were wondering if this could be causing Kibana to be overworked.
At this point, even if this isn't a direct cause of the Internal 500 errors, we are curious how to clean it up (if we even can) and want to understand these metrics better for general knowledge.
The one thing that comes to mind would be to ensure that caching headers aren't being stripped and that assets can still be cached by the browser.
I believe connections are the currently open connections, which should be fewer than the requests due to keep-alive. Is there anything hitting this one instance that does not support keep-alive?
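To illustrate the keep-alive point with a rough sketch (the URL is illustrative and this ignores authentication):

```python
# Sketch of why keep-alive makes open connections fewer than requests.
# The URL assumes an unauthenticated local Kibana; adjust for your setup.
import requests

# A Session reuses one pooled TCP connection across many HTTP requests:
# 5 requests, typically 1 connection on the server side.
with requests.Session() as session:
    for _ in range(5):
        session.get("http://localhost:5601/api/status", timeout=10)

# A client without keep-alive behaves more like this: each call builds a
# fresh connection, so 5 requests can mean 5 separate connections.
for _ in range(5):
    requests.get("http://localhost:5601/api/status", timeout=10)
```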
You stated: "I believe connections are in terms of the currently open connections which should be less than the requests due to keep-alive."
However, on our system the connections are much higher than the requests; connections are ~5x higher. Is this indicative of anything in particular?
I was thinking of it more in the context that a single connection would carry multiple requests, but when looking at a slice in time, that might not always be the case. Nothing of concern.
To validate what I was suggesting earlier regarding the browser cache, open "Developer Tools > Network" in your browser and navigate to one of the Kibana instances. Ensure that "Disable Cache" is UNCHECKED. After the page fully loads, refresh it. You should notice that most of the assets show as "cached" in the "Transferred" column. "core.entry.js" is one such asset that should be cached. Then, check your other Kibana instance to ensure this is also the case there.
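If it's easier than clicking through DevTools on that machine, a quick script can check the same thing; the asset path below is hypothetical and will differ by Kibana version, so substitute any bundle you see in the Network tab:

```python
# Sketch: fetch one static Kibana asset and print its caching headers.
# If one instance is missing these headers, a proxy in between may be
# stripping them, which would defeat browser caching.
import requests

asset_url = "http://localhost:5601/bundles/core/core.entry.js"  # hypothetical path
resp = requests.get(asset_url, timeout=10)

for header in ("Cache-Control", "ETag", "Last-Modified", "Expires"):
    print(f"{header}: {resp.headers.get(header, '<missing>')}")
```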
Regarding these two instances: how is traffic routed to them? Do they both receive the same amount of traffic, or does it vary? Do you have any metrics to understand where the traffic is coming from and whether the two are equivalent?
Where the Kibana instance works, they use Firefox. I went to Kibana and refreshed the page, and it shows things are properly being cached. However, I do not see the core.entry.js asset (not sure if that's a "problem").
Where the Kibana instance does not work, they use Chromium. When I go to Dev Tools -> Network, there is no specific "Transferred" column. However, I can refresh the page and look at the Headers for each asset. Looking at the Response Headers section, about 90% showed:
Does this information help or give any additional insight?
As far as the clusters go, they are identical in configuration. Traffic is routed the same way via Filebeat. The config and YAML files are structured the same. They have identical indices and index mappings. They were designed to mirror each other as much as possible.
The main difference is that the site where Kibana DOES work is the location that receives ~10x more data. This is why seeing Kibana requests and connections at a much higher rate at the "smaller" location is confusing, especially given its much smaller data volume and the constant issues/errors.