I'm working with a 3-node cluster in a production environment. I've upgraded the cluster from version 6.2.4 to 6.7.1. I've noticed that loading data in the dashboard and in Discover for the same time range has become slower than before, and sometimes the request times out. I've now raised the request timeout in the Kibana config to 60s so I can visualize the data, but I want to understand the reason. The only thing I have changed in my cluster config is that I now have 3 data nodes instead of 2. Is this a normal issue for indices created in a previous version? Is there any configuration I should look at to tune the loading?
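For reference, this is roughly the change I made in kibana.yml (assuming I have the 6.x setting name right, elasticsearch.requestTimeout, which takes milliseconds):

```yaml
# kibana.yml
# Raise the timeout for requests Kibana sends to Elasticsearch
# from the default 30000 ms to 60 seconds.
elasticsearch.requestTimeout: 60000
```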
Thanks for the answer. The dashboard matches all the records in a time range, split by terms in a histogram. I can't see the difference in latency now, but the current one is 50ms for the quoted query.
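To give an idea, the underlying request is roughly equivalent to this (a simplified sketch; the index pattern, @timestamp, interval and the status.keyword split field are placeholders for my actual names):

```json
GET my-index-*/_search
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        { "range": { "@timestamp": { "gte": "now-14d", "lte": "now" } } }
      ]
    }
  },
  "aggs": {
    "over_time": {
      "date_histogram": { "field": "@timestamp", "interval": "1h" },
      "aggs": {
        "split_by_term": { "terms": { "field": "status.keyword", "size": 10 } }
      }
    }
  }
}
```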
50ms latency does not seem high. We would need to know how low it was on 6.2.4 to be able to tell whether it's related to the stack upgrade or whether it comes from your dynamic index growing.
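In the meantime, you can check how big the index has become and what the current search timings look like with something like this (the index pattern is a placeholder for yours):

```
GET _cat/indices/my-index-*?v&h=index,pri,docs.count,store.size

GET my-index-*/_stats/search
```

The search stats report query_total and query_time_in_millis, so dividing the two gives a rough average query time you can compare against the 50ms you see in the dashboard.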
Unfortunately, the old monitoring data has been deleted from ES, so I can't see the old search latency. The search is on a time-based index from two weeks ago.