Resolution problems with Kibana monitoring

Hi there,
we have noticed that unless we use the "Last 4 hours" time range in Kibana's monitoring dashboards, the charts show a saw-tooth pattern.
At the same time, some other plots just show zeros when the time range is too small ("Last 1 hour").

Is there any way to increase the monitoring sampling rate to avoid these problems? Or is something else wrong?

Thanks for the help,

Jordi

Hi @jmartori
The monitoring collection interval can be controlled via the xpack.monitoring.collection.interval setting on each cluster node in elasticsearch.yml (for reference see: https://www.elastic.co/guide/en/elasticsearch/reference/7.4/es-monitoring-collectors.html).
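Since xpack.monitoring.collection.interval is a dynamic cluster setting, it can also be changed without restarting the nodes; a minimal sketch (assuming the legacy internal collection is in use, not Metricbeat):

PUT _cluster/settings
{
    "persistent" : {
        "xpack.monitoring.collection.interval" : "60s"
    }
}

Using "persistent" rather than "transient" keeps the value across full cluster restarts.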

You should also set the xpack.monitoring.min_interval_seconds option in kibana.yml to the same value (https://www.elastic.co/guide/en/kibana/current/monitoring-settings-kb.html).
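In kibana.yml that would look like the following (note that, unlike the Elasticsearch setting, min_interval_seconds takes a plain number of seconds, not a duration string like "60s"):

xpack.monitoring.min_interval_seconds: 60

Kibana needs a restart to pick this up, since kibana.yml is only read at startup.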

If you are using Metricbeat to monitor your cluster, you should check your Metricbeat configuration instead.

Hi @markov00,
So apparently I hadn't changed min_interval_seconds, but I had increased collection.interval to "60s".

I tried to update the value with:

PUT /_cluster/settings 
{
    "transient" : {
        "xpack.monitoring.min_interval_seconds" : "60s"
    }
}

But I get an error with the reason being: "persistent setting [xpack.monitoring.min_interval_seconds], not dynamically updateable".

I guess that means I can only change this configuration value by adding it to elasticsearch.yml and restarting the nodes. Right?

So, I think there is a bit of confusion here because of the naming; I'm sorry for that.

If you have changed xpack.monitoring.collection.interval (which is a dynamic setting in Elasticsearch, and can also be set in elasticsearch.yml), you should set the xpack.monitoring.min_interval_seconds option in kibana.yml to the matching value: xpack.monitoring.min_interval_seconds: 60. It is a Kibana setting, not an Elasticsearch one, which is why the cluster settings API rejects it.
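To double-check which interval the cluster is actually using, it can be read back from the cluster settings; a sketch (include_defaults also shows values that were never set explicitly, and filter_path just trims the response):

GET _cluster/settings?include_defaults=true&filter_path=*.xpack.monitoring.collection.interval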


That worked beautifully.
Thanks for the help,

Jordi
