We have a cluster with four Elasticsearch nodes (v2.4), each with 4 CPUs and 8 GB of RAM. At the moment we handle a load of about 500 messages per second.
Every time we load a dashboard or run a query, CPU usage gets really heavy, especially when we increase the timeframe. That makes sense, but I was wondering whether this is normal for this cluster setup? If so, how much more capacity would we need to avoid the heavy CPU load and the lag in the web interface?
Also, since Elasticsearch can close indices that are several months old and reopen them for long-timeframe analysis (one year, for example), I was wondering whether doing so would be dangerous for the cluster, since increasing the timeframe to just 7 days already puts all the CPUs at 100%.
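For reference, closing and reopening indices is done through the indices open/close API. A minimal sketch, assuming a local node on port 9200 and hypothetical `logstash-*` monthly index names (adjust to your own naming scheme):

```shell
# Close a cold month's indices so they stop consuming heap
# (closed indices are invisible to searches until reopened).
curl -XPOST 'http://localhost:9200/logstash-2016.01.*/_close'

# Later, reopen them before running a long-timeframe analysis.
curl -XPOST 'http://localhost:9200/logstash-2016.01.*/_open'
```

Reopening by itself is cheap, but any dashboard that then spans the reopened range will aggregate over all that data, so the CPU cost you see at 7 days would scale up accordingly.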
Edit: I forgot to mention that this happens with only one user (me) using the Kibana web interface.
Kibana won't prevent you from making requests that will cause strain. Are you noticing high CPU on Elasticsearch and Kibana, or on your local machine? "Normal" depends on how much data you're going through and how complex your queries are. If you're running into performance issues, it might be worth taking a close look at which visualizations/searches are causing problems and seeing if they can be simplified.
It's actually only the dashboards (no queries), and one in particular. It contains only pie charts with no or small queries. However, it's true that this dashboard deals with fairly high input volume (11 million logs a day), so I guess the heaviness is normal?
On another subject, is there a way to keep some searches in a cache so we don't need to recompute them after the first run?
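One option in Elasticsearch 2.x is the shard request cache, which caches the results of `size=0` search requests (aggregations, hit counts) per shard until the next refresh. A hedged sketch, assuming a local node and a hypothetical `logstash-2016.09.01` index; in 2.x the cache is not enabled per index by default:

```shell
# Enable the shard request cache on an existing index
# (index name here is hypothetical).
curl -XPUT 'http://localhost:9200/logstash-2016.09.01/_settings' -d '
{ "index.requests.cache.enable": true }'

# Per-request opt-in: only size=0 requests are cacheable in 2.x,
# which matches aggregation-only dashboards like pie charts.
curl -XGET 'http://localhost:9200/logstash-*/_search?request_cache=true&size=0' -d '
{ "aggs": { "by_status": { "terms": { "field": "status" } } } }'
```

Note the cache is invalidated on each index refresh, so it helps most on older, no-longer-written indices; a dashboard over today's actively indexed data will keep recomputing regardless.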
Edit: I forgot to answer your question. The high CPU usage is on the Elasticsearch nodes only.