The previous thread is closed, hence I'm opening a new one.
I see 7.15.2 is out, so I decided to test it.
In this setup I have Elasticsearch 7.15.1 and Kibana 7.15.2.
Still very, very slow: my dashboard, which normally loads in 15 seconds, takes more than 1.5 minutes here, even with a smaller amount of data.
I will upgrade Elasticsearch to 7.15.2 and test it out, but I think something has gone wrong somewhere after the 7.15 release.
Is the slow loading also an issue with other dashboards? We'd really need to see exactly what's causing the slow response to help you out here.
For starters, how many references does the dashboard have? You can use Kibana's Saved Objects UI to get the list: in Stack Management > Saved Objects, filter for type: Dashboard, select the dashboard that's loading slowly, and click the references option in the context menu.
You could then get a quick count of the number of saved object types in the dashboard.
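One way to get that count, as a sketch: export the dashboard (Stack Management > Saved Objects > Export) and tally the types in the resulting .ndjson, where each line is one saved object. The file name and sample contents below are hypothetical:

```python
import json
from collections import Counter

# Tally saved-object types in a Kibana Saved Objects export (.ndjson).
# Each line of the export is one saved object; the trailing summary
# line ("exportedCount") has no "type" key and is skipped.
def count_types(path):
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                obj = json.loads(line)
                if "type" in obj:
                    counts[obj["type"]] += 1
    return counts

# Hypothetical miniature export, just for illustration:
sample = (
    '{"type":"dashboard","id":"d1"}\n'
    '{"type":"visualization","id":"v1"}\n'
    '{"type":"visualization","id":"v2"}\n'
    '{"type":"index-pattern","id":"ip1"}\n'
    '{"exportedCount":4,"missingRefCount":0,"missingReferences":[]}\n'
)
with open("export.ndjson", "w", encoding="utf-8") as f:
    f.write(sample)

print(count_types("export.ndjson"))
# e.g. Counter({'visualization': 2, 'dashboard': 1, 'index-pattern': 1})
```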
The other thing we might want to check is the dashboard's refresh rate and the default time interval set for it.
You could try reducing the number of documents for those indices and/or increasing the refresh interval so the indices are refreshed less often.
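A minimal sketch of the second suggestion, assuming Metricbeat indices are the ones backing the dashboard (the index pattern and interval here are illustrative): lengthening `index.refresh_interval` (default 1s) via Dev Tools makes Elasticsearch refresh less often:

```
PUT metricbeat-*/_settings
{
  "index": {
    "refresh_interval": "30s"
  }
}
```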
Hi @cheiligers ,
this is a duplicate of what I run at a larger scale, which works fine (Elasticsearch 7.15.1, Kibana 7.12.0).
In dev I have Elasticsearch 7.15.2 and Kibana on the same version. The data size is 1/4 of what I have in production.
The dashboard has no linked visualizations, i.e. all are unlinked. It is a test dashboard and has about 60 graphs (visualizations).
In production it takes only 20 seconds;
with 7.15.1 or 7.15.2 it takes more than 1.5 minutes.
The refresh rate is unchanged, i.e. the default, which I believe is 1 second.
I have spent countless hours finding out that this combination works (Elasticsearch 7.15.1, Kibana 7.12.0).
In the process I have become an expert in ELK installation.
@elasticforme The difference between 20 seconds and more than 1.5 minutes is worrying. Do you have the same hardware for both prod and dev?
60 graphs in a single dashboard is a lot, but it doesn't seem to be a problem on your prod instance where Kibana is on 7.12.0. A few questions come to mind:
Could we compare the load times for a smaller dashboard with fewer visualizations and no refresh rate set (stop refreshing the dashboard query if it is enabled)? Depending on the type of visualizations in the dashboard, there could be a huge payload being transmitted every refresh interval.
Are there a lot of fields in the index/indices being used for the visualizations?
If possible, could you provide debug logs for loading the dashboard in both prod and dev?
The logging config (in kibana.yml) is:

```yaml
logging:
  loggers:
    - name: elasticsearch.query
      level: debug
    - name: http.server
      level: debug
    - name: metrics.ops
      level: debug
```
OK, give me a day or two. I will change the config, generate a log, and send it out.
These are Metricbeat data from hundreds of systems; I am also doing ETL for process metrics.
I had this running on a five-system cluster on 7.12.0 (Elasticsearch and Kibana), working fine. Then we bought slightly better hardware in larger quantity, and I decided to install the new 7.15.1 at that time. The results were very unpredictable, hence I had to try many combinations:
7.12.0 / 7.12.0 (this worked),
and hence I started testing more, as I don't want to get stuck on an old version,
and it turns out that on the new hardware
7.15.1 / 7.12.0 worked better than on the older hardware. The new cluster has NVMe storage, so I was expecting everything to double in speed.
The index pattern is the default that comes with Metricbeat, collecting CPU/memory/network/filesystem etc. metrics per minute.
A couple of additional thoughts.
Perhaps open up Chrome Dev Tools, load the dashboard, and observe the loading timings to see whether all the visualizations are slow or just some. It's a pretty easy way to get a sense of what is going on and might shed a little light.
The other thing is to open one of the visualizations, run Inspect, look at the actual query response times and the round-trip time, and capture the request query.
Then go to the Query Profiler and see where the time is being spent for Elasticsearch 7.12 vs 7.15.
You should be able to get more information and compare what is different. Pick one of the visualizations that is taking longer to load.
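A captured request can also be profiled directly by re-running it with profiling enabled from Dev Tools on both clusters; the index pattern and query below are just placeholders for whatever the Inspect panel shows:

```
GET metricbeat-*/_search
{
  "profile": true,
  "query": {
    "range": {
      "@timestamp": { "gte": "now-15m" }
    }
  }
}
```

The `profile` section of the response breaks down where query and collection time is spent per shard, which makes the 7.12-vs-7.15 comparison concrete.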
I will go through this as well.
Please do not forget that there was a bug in Elasticsearch 7.12.0: especially when someone used Spaces in Kibana, searching was just a nightmare. Instead of seconds, everything took minutes. The bug was fixed in 7.12.1, but TBH performance in 7.15.2 also seems very poor.
Glad someone has seen the same problem. I was searching everywhere, couldn't find any problem, and was starting to think I might have a problem in my setup. I tried all kinds of Java adjustments as well, without any luck.
Elasticsearch 7.15.1 Platinum
We also have similar problems. Elasticsearch is very slow after the upgrade.
Dashboards became very slow, and in some of them a "This page is slowing down the browser..." warning pops up.
I have tried several browsers but see the same behavior.
In addition, some dashboards are very laggy when you try to rearrange the visualizations in edit mode.
I hope someone at Elastic is looking at this and does not carry it forward into new versions. I am very overloaded with work, hence I haven't had time yet to debug this further.
I am going to try out 7.16.0 to see whether it works better.
Please let us know if the problem is solved with 7.16.
I also had problems with the 7.15.2 version; after updating to 7.16.1, Kibana started working better. But we have some browser freezing when working with documents in JSON format (selecting some words or symbols has a 1-2 second delay).
Before 7.15.2 we had 7.5.2 and everything worked fast.
We upgraded to 7.16.1 and it is faster than 7.15.1/2.
Isn't 7.16.2 already out?
Yes, but we haven't tried yet.
I upgraded the complete stack to 7.16.2, and
my Kibana is as slow as it was in 7.15.x. Man, something is not right somewhere; some setting they changed after 7.12.x.
It is going to be tough to find.
For us, we went directly from 7.10.1 to 7.15.1.
So 7.16.1 is similar to 7.10.1 in terms of speed and faster than 7.15.1.
But you are right: I tested 7.12 locally and it was the fastest of all!
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.