Elastic Cloud - Too many shards - Help to fix it

Hello,

We recently subscribed to Elastic Cloud and have started building IIS integrations that collect access and error logs, as well as IIS metrics.

When we had added about five of these integrations, Elastic started crashing: JVM memory pressure was high and the nodes were constantly restarting. On the recommendation of Elastic's sales team, and given how little use we make of the platform, we started with the most basic instances, with 1 GB of RAM. We were always told to focus on the GB of storage we would use. We subscribed to Elastic Cloud precisely so that we wouldn't need people specialized in performance tuning, since we have neither the staff nor the time for that learning curve.

After upgrading the instances to 2 GB yesterday, Elastic support is now telling us that we still need to improve performance. Right now we have more than 400 shards and about 120 indices, and the deployment has only been running for a week. We have read a bit about how to fix this, but it isn't clear to us what we should actually do.

One of the recommendations is to apply an ILM policy, but some of these integrations have only been running for two or three days. What policy can we apply to something like that?
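For context, the kind of policy we have seen in the documentation looks roughly like the sketch below (the policy name, rollover thresholds, retention period, URL, and credentials are placeholders we made up, not values anyone has recommended to us), but we don't know whether something like this makes sense for integrations this new:

```python
# Sketch: creating a basic ILM policy via the _ilm/policy API.
# All names and values below are placeholders, not recommendations.
import requests

ES_URL = "https://your-deployment.es.io:9243"   # placeholder deployment URL
AUTH = ("elastic", "<password>")                # placeholder credentials

policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    # Roll over to a new backing index when either limit is hit,
                    # so small data streams don't pile up many tiny indices.
                    "rollover": {"max_primary_shard_size": "50gb", "max_age": "30d"}
                }
            },
            "delete": {
                # Delete data 90 days after rollover (adjust to your retention needs).
                "min_age": "90d",
                "actions": {"delete": {}}
            }
        }
    }
}

resp = requests.put(f"{ES_URL}/_ilm/policy/iis-logs-policy", json=policy, auth=AUTH)
resp.raise_for_status()
print(resp.json())
```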

Thanks!

Here is our list of shards and indices.

What is the nature of the performance problem you are trying to address?


We have a lot of shards (462 at the moment), and JVM memory pressure on our machines is too high. I don't know whether it's possible to reduce the number of shards (as I said, we deployed Elastic Cloud only a week ago). Thanks

The screenshot you shared above shows all memory usage as "Normal", so that seems fine? If it is sometimes high, that is probably because the cluster is working hard rather than anything to do with the shard count. You're on 8.1.2, which is recent enough to cope with more shards than older versions could.
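If you want to double-check outside the Cloud console, the _cat APIs give a quick per-node view of heap usage and shard distribution. A minimal sketch (the deployment URL and credentials below are placeholders):

```python
# Sketch: checking per-node heap usage and shard counts with the _cat APIs.
import requests

ES_URL = "https://your-deployment.es.io:9243"   # placeholder deployment URL
AUTH = ("elastic", "<password>")                # placeholder credentials

# Heap and RAM usage per node.
nodes = requests.get(
    f"{ES_URL}/_cat/nodes?v&h=name,heap.percent,ram.percent,node.role",
    auth=AUTH,
)
print(nodes.text)

# Shard count and disk usage per node.
allocation = requests.get(f"{ES_URL}/_cat/allocation?v", auth=AUTH)
print(allocation.text)

# Index-level view: which indices have the most primary shards and how big they are.
indices = requests.get(
    f"{ES_URL}/_cat/indices?v&h=index,pri,rep,docs.count,store.size&s=pri:desc",
    auth=AUTH,
)
print(indices.text)
```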

If you need help understanding high heap usage and cannot pin down the reason for it, best to open a support ticket and ask the support engineer to look at a heap dump.

