Slow Metricbeat dashboard

Hello,
I'm currently using the Metricbeat system dashboards to monitor machine hardware usage (CPU, memory, etc.). At first it worked fine when we had 3-4 machines sending metrics, but once we installed Metricbeat on 20+ machines the dashboards became really slow; sometimes it takes over 2 minutes to see the data for one machine, and sometimes it does not load at all.
What can I do to speed this up?
I was going to increase the number of shards per index (though I'm fairly new to shards and index lifecycle management), but this is a data stream ("metricbeat-8.13.2").
Also, the index created is about 1.2 GB.
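For reference, this is roughly how I looked at the data stream and its backing indices, in case it helps (just a sketch run from Kibana Dev Tools; the names are the ones I see in my setup and may differ):

```
# List the backing indices behind the Metricbeat data stream
GET _data_stream/metricbeat-8.13.2

# Show primary/replica shard counts and size per backing index
GET _cat/indices/.ds-metricbeat-8.13.2-*?v&h=index,pri,rep,docs.count,store.size
```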

Hi @SamehSaeed,

Could you share the following details:

  1. What is the Elastic version you are using? I assume it's 8.13.2, as you mentioned.
  2. What is the hardware: nodes, CPU, RAM?

Hello @ashishtiwari1993,
1 - Version 8.13.2, as you mentioned.
2 - Hardware specs (Elasticsearch, Kibana, Logstash and Metricbeat are all installed on the same machine, running Windows):
RAM: 32 GB
CPU: Intel(R) Xeon(R) Platinum 8270 CPU @ 2.70GHz (16 processors)

JVM heap options ==>
Elasticsearch: 8 GB
Kibana: 4 GB (assigned, but it only takes around 600 MB in Task Manager)
Logstash: 6 GB (we have around 200 pipelines scheduled to run every 5 minutes)

All 3 components are being monitored by Metricbeat (I can see a 45-50 GB index created every 2 days from monitoring).

These are the indices created from monitoring the machines ==>

Stack monitoring indices ==>
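For context, the index sizes above come from listing the monitoring indices, roughly like this (a sketch in Dev Tools; the pattern is just what matches the Metricbeat-written monitoring indices in my setup):

```
# List stack-monitoring indices, largest first
# (expand_wildcards=all so hidden monitoring indices are included)
GET _cat/indices/.monitoring-*?v&h=index,docs.count,store.size&s=store.size:desc&expand_wildcards=all
```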

Ingesting data on a single-node cluster and reading from it at the same time could add some latency.

Can you stop indexing and check whether you get the same latency? I think all your cores are busy writing data to the primary shards.
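Before stopping anything, you could also check whether the write thread pool is actually saturated (a quick sketch to run in Dev Tools; non-zero queue or rejected counts would point to indexing pressure):

```
# Show how busy the write threads are on each node
GET _cat/thread_pool/write?v&h=node_name,active,queue,rejected,completed
```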

Does this mean I need to stop Metricbeat on all machines? We have around 35 right now, so that's not really possible.
Is there another way to stop indexing?
Also, CPU usage is less than 30%.

You can try to increase refresh_interval, or disable it temporarily, and see whether your search query latency goes down.

You can also check some common causes of slow queries.
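Something along these lines, for example (a sketch; adjust the data stream name to whatever yours is called):

```
# Raise refresh_interval on the existing backing indices of the data stream
# (it is a dynamic setting, so it takes effect immediately)
PUT metricbeat-8.13.2/_settings
{
  "index": {
    "refresh_interval": "30s"
  }
}

# Reset it to the default later
PUT metricbeat-8.13.2/_settings
{
  "index": {
    "refresh_interval": null
  }
}
```

Note that new backing indices created on rollover will pick up whatever the index template says, so to keep the change you would also update the template.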

Thank you @ashishtiwari1993, your comments have helped me a lot.
It seems to be as you suggested: RAM usage spikes when we have too many large indices (from the Elasticsearch/Kibana monitoring), so it is related to shards. I have disabled the monitoring modules in Metricbeat for Elasticsearch, Kibana and Logstash, but I can still see Elasticsearch and Kibana in the Stack Monitoring tab, so I keep deleting the [.monitoring-es-8-mb] data stream every day.
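For reference, on the host where I disabled the modules, this is roughly what I did (a sketch; the module names are the ones I see under modules.d, and the data stream name is the one I mentioned above):

```
# On the Windows host, from the Metricbeat install folder:
.\metricbeat.exe modules list
.\metricbeat.exe modules disable elasticsearch-xpack kibana-xpack logstash-xpack

# Then, in Kibana Dev Tools, remove the leftover monitoring data stream:
DELETE _data_stream/.monitoring-es-8-mb
```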
Could Elasticsearch be monitored through Metricbeat running on another machine? I did not disable the ELK stack modules (elasticsearch-xpack) in Metricbeat before rolling it out to the 35 machines.

Hi @SamehSaeed, are your production cluster and your monitoring cluster the same? Or are you using a completely separate ES and Kibana setup to monitor your production cluster?

Hello @ashishtiwari1993,
Sorry for the late response. Yes, I'm using one machine for Elasticsearch and Elasticsearch monitoring (.94). Then I installed Metricbeat on 35 other hosts to monitor hardware metrics. The problem is that I took the same folder from machine .94 and uploaded it to the other hosts, which is why they keep monitoring the ELK stack.
Is there any way to disable this monitoring on all those machines at once, or should I log on to every single one to disable it?

By folder I mean the Metricbeat folder.