Does the size of a separate Marvel cluster affect performance of the original cluster?

If I have a very heavily used cluster, let's say a 128GB cluster, and I point the monitoring to a separate cluster, and that separate monitoring cluster is very small, say 2GB: if the monitoring cluster starts slowing down and its CPU is way high (over 100%, etc.), could that affect the original cluster in any way? Does the 128GB cluster constantly try to write to the monitoring cluster, or is the monitoring cluster constantly trying to READ from the bigger cluster?

Hi Brian,

I've moved this to the Marvel forum.

I'm pretty sure it's the Marvel agent on the main cluster pushing data to the monitoring cluster, but there may be backoffs in place to avoid bogging down the monitored cluster if the destination is having trouble.
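For reference, that push direction is configured on the monitored cluster itself via an exporter. A minimal sketch, assuming Marvel 2.x (the exporter id and hostname here are hypothetical):

```yaml
# elasticsearch.yml on the production (monitored) cluster:
# an HTTP exporter pushes monitoring data out to the separate cluster.
marvel.agent.exporters:
  my_monitoring_cluster:
    type: http
    host: ["http://monitoring-cluster.example.com:9200"]
```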

Either way I suspect folks in the Marvel forum will know for sure.

(Thanks @matschaffer for moving)

Hi @Brian_G,

By default, we ship the monitoring data at a 10-second interval, which is controlled by this setting:

> Controls how often data samples are collected. Defaults to `10s`. If you modify the collection interval, set the `marvel.min_interval_seconds` option in `kibana.yml` to the same value. Set to `-1` to temporarily disable data collection. You can update this setting through the Cluster Update Settings API.

You can tweak this collection interval to be less frequent if you're concerned about performance. (If you're using 5.x, use the `xpack.monitoring.collection.interval` setting instead.)
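As a sketch, on a pre-5.x cluster the change can be made dynamically through the Cluster Update Settings API; the setting name `marvel.agent.interval` and the `30s` value here are assumptions for illustration:

```json
PUT _cluster/settings
{
  "transient": {
    "marvel.agent.interval": "30s"
  }
}
```

If you do change it, remember to set `marvel.min_interval_seconds` in `kibana.yml` to the same value, as the quoted docs note.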

Once your monitoring cluster has the monitoring data from the production cluster, point your Kibana instance at the monitoring cluster to read it. Viewing the dashboards then has no impact on your production cluster, since Kibana queries the monitoring cluster directly.
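Pointing Kibana at the monitoring cluster is just a `kibana.yml` change; a minimal sketch (the hostname is hypothetical):

```yaml
# kibana.yml -- read monitoring data from the dedicated monitoring
# cluster instead of the production cluster.
elasticsearch.url: "http://monitoring-cluster.example.com:9200"
```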

Hope this helps,

Note that editing kibana.yml is not possible on Elastic Cloud. Also, if you make the interval less frequent, the monitoring dashboards may sometimes show spotty or empty charts depending on the zoom level. This is because the charts expect a data granularity of 10 seconds (the default, without modifying kibana.yml); with a larger interval, the charts' default precision cannot be maintained.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.