Huge monitoring-es indexes

Today my ES cluster on Elastic Cloud froze due to high CPU consumption. There wasn't much request pressure, so I took a look at the indexes to check whether something was wrong.
I found a lot of old APM indexes that were almost empty, but of course each of them still took one shard.
After a bit of cleaning I arrived at this point:

  • 213 indexes (only 10 are mine; the others are Kibana and hidden indexes)
  • 21,135,526 documents
  • all indexes together take 6GB of disk space
  • 213 primary shards
  • 1 node with 59.60GB of free disk space and 2GB of RAM

This is a screenshot of the first 100 indexes.
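For reference, a similar listing can be produced in Kibana Dev Tools (a sketch; the column selection and sort are just an example):

  # list all indices, largest first
  GET _cat/indices?v&s=store.size:desc&h=index,health,pri,docs.count,store.size
  # shard count and disk use per node
  GET _cat/allocation?v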

It seems to me that the monitoring-es* indexes are becoming huge: in 3 days they have grown to 3GB.

  1. Do you think the size of monitoring-es* is normal? (see the snippet after these questions)
  2. I don't see any retention or rollover policy on monitoring-es*. Should I create one?
  3. Could the sudden growth of those indexes be one of the causes of the CPU spike?
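Regarding questions 1 and 2, the sizes and the retention currently in effect can be checked like this (a sketch; with legacy internal collection the history is kept by a cluster setting rather than by ILM, and the Elastic Cloud default may differ):

  # size of just the monitoring-es indices
  GET _cat/indices/.monitoring-es-*?v&h=index,docs.count,store.size&s=index
  # effective retention for legacy monitoring collection, if set
  GET _cluster/settings?include_defaults=true&filter_path=*.xpack.monitoring.history.duration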

Any advice is appreciated, thanks.

That sounds like a pretty reasonable size, yes. I don't think those indices use ILM as yet, but they roll over daily, which is why they are date-named.

As for whether they'd cause an issue, that's hard to say. What is the output from the _cluster/stats?pretty&human API?
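That call can be run as-is from Kibana Dev Tools; a narrower variant focusing on the parts relevant here is also possible (a sketch, the filter_path selection is just an example):

  # full output
  GET _cluster/stats?human&pretty
  # only the index, shard and heap summary
  GET _cluster/stats?human&filter_path=indices.count,indices.shards,indices.docs,indices.store,nodes.jvm.mem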

Thanks for your reply. You are right, those indexes roll over daily; I see it keeps the last 3 days.
This is the result of cluster stats (shared on Pastebin): { "_nodes" : { "total" : 1, "successful" : 1, "failed" : 0, ...

Thanks

Hi @Daniele_Renda, welcome to the community.

A couple of thoughts: it is generally not best practice to send your cluster monitoring data to the same cluster as your search data / workload; this is the architectural principle of separation of concerns. See Here.

I see from the Pastebin that you are running a single 2GB RAM / 60GB SSD node. That is a very small but functional cluster.

BUT I also noticed you have ~213 indices with 213 shards. That is a very high number of shards for such a small node. Generally we suggest fewer than 20 shards per 1GB of JVM heap; you have 1GB of JVM heap, so you are roughly 10x over the best-practice shard count.
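A quick way to check those two numbers side by side (a sketch to run in Kibana Dev Tools):

  # total and primary shard counts
  GET _cluster/health?filter_path=status,active_primary_shards,active_shards
  # heap per node; the ~20-shards-per-GB guidance applies to heap.max
  GET _cat/nodes?v&h=name,heap.max,heap.percent,ram.max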

You will most likely run into performance issues unless you reduce the number of indices / shards or increase the size of your nodes.

Thanks Stephen, your suggestions are really appreciated. I know about the monitoring data and I was already planning to buy another cluster to send it to.
About the 213 shards, I see what you mean, but the point is that only 10 of those indexes (10 shards) are mine. The remaining 203 are indexes created by Kibana and ES, and they are almost empty/not used.
Do you suggest deleting them? Could you suggest a safe index pattern so that I remove only the indexes that are not needed?

Apart from node monitoring I don't need much else; I only use Kibana to manage the cluster and view the monitoring data.

Thanks

If you aren't using them just delete them all.
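A cautious way to do that (a sketch; the pattern shown is only an example, not a definitive list of what is safe to remove) is to preview what a wildcard matches before deleting anything:

  # preview the matches, including hidden indices
  GET _cat/indices/.monitoring-*?v&h=index,docs.count,store.size&expand_wildcards=all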

Thanks. I'm definitely not using them, but I'm not sure whether they are needed by Kibana itself. I'm on Elastic Cloud.
Where can I find a matrix of all ELK index patterns? My fear is that I'll delete an index that this Elastic Cloud configuration relies on and break something (more than 200 indexes are hidden, i.e. system indexes, and when I try to delete them I get a scary warning) :grimacing:

Thanks

There's not, no.

You can safely delete .monitoring* though; your history will be gone, but it will be recreated. The same goes for that Kibana log one.
Also, if you're on 7.15, you can look at deleting anything with an earlier version in its name.
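For example (a sketch; whether wildcard deletes are allowed depends on the action.destructive_requires_name setting, so the indices may need to be listed explicitly):

  # remove the monitoring history; new .monitoring-* indices will be created again
  DELETE .monitoring-*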

Thanks, very clear and useful!!

@warkolm do you think it's safe to delete these indexes:

  • .siem-signals-default-000008 (I don't use siem BTW)

  • .kibana_1, .kibana_2, .kibana_3, ... keeping only .kibana_7.15.0_001 (the current Kibana version; see the check below)

  • .ds-.slm-history-5-2021.06.16-000005
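Before removing the older .kibana_N generations, the .kibana alias can be checked to confirm which index Kibana is actually using (a sketch to run in Dev Tools):

  # the index listed here is the live one; older generations can go
  GET _cat/aliases/.kibana?v&h=alias,index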

Thanks

Yep!
