Recently we have experienced some performance issues with our Elastic Cloud Kibana deployment. Pages take upwards of 30 seconds to load, whereas they previously loaded in 5 seconds or less.
Previously we had a number of `logstash-*` indexes and a set of dashboards that queried them. Recently we added `filebeat-*` indexes, aggregated under a `*beat*` index pattern, and a dashboard that queries this pattern. After this, performance began degrading across all dashboards, not just the new one. A few details:
- we have recently upgraded to
- the Elasticsearch instance is `AWS.DATA.HIGHIO.I3` and the Kibana instance is
- the Elasticsearch deployment is "healthy", with normal memory pressure and disk allocation
- under the performance metrics, the CPU load is close to 100% all the time
- the `filebeat-*` indexes are segmented by day
- all the `*beat*` indexes have yellow health; I've not been able to find the root cause
- the `filebeat-*` indexes ingest mainly Docker logs, where each entry has a huge number of fields
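For the yellow health, the only thing I've thought to try so far is the cluster allocation explain API from Kibana Dev Tools (the index name and shard number below are just placeholders for one of the yellow `filebeat-*` indexes; on a single-zone deployment I'd expect it to report unassigned replicas):

```
GET _cluster/allocation/explain
{
  "index": "filebeat-2019.01.01",
  "shard": 0,
  "primary": false
}
```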
Could I have some guidance on how to troubleshoot this?
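In case the Docker-log field count is relevant: here is a rough sketch of how I've been estimating the number of mapped fields, by walking the JSON from `GET filebeat-*/_mapping` saved to a file (the helper name and the tiny inline mapping are just illustrative, not my real mapping):

```python
def count_fields(properties: dict) -> int:
    """Recursively count mapped fields in an Elasticsearch mapping's
    'properties' tree, including multi-fields declared under 'fields'."""
    total = 0
    for field_def in properties.values():
        total += 1
        # Object/nested fields carry their own 'properties' sub-tree.
        total += count_fields(field_def.get("properties", {}))
        # Multi-fields (e.g. a .keyword sub-field) live under 'fields'.
        total += count_fields(field_def.get("fields", {}))
    return total

# Tiny illustrative fragment; a real run would load the exported
# mapping JSON and pass mapping[index]["mappings"]["properties"].
mapping = {
    "container": {
        "properties": {
            "id": {"type": "keyword"},
            "labels": {"properties": {"app": {"type": "keyword"}}},
        }
    },
    "message": {"type": "text", "fields": {"keyword": {"type": "keyword"}}},
}
print(count_fields(mapping))  # counts container, id, labels, app, message, keyword
```

On our real `filebeat-*` mappings this number is what I'd compare against the default 1000-field mapping limit to see how bad the Docker-label field explosion actually is.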