Good morning,
I have several Elasticsearch clusters that are giving me performance problems. I think I have located the issues, but I would like you to confirm whether I am on the right track.
Basic configuration I normally use:
As many shards as there are ingest nodes, 1 replica of the data, 40 GB indices with rotation by size, and a single index set.
1st One index set for all data: whether it is 30 TB or 200 TB, everything goes to the same index set.
2nd Total number of shards in the cluster: I normally set indices to about 40 GB with 4, 6 or 8 shards depending on the number of ingest nodes, and with the retention I need I sometimes end up with 9,000 or 12,000 shards (see the back-of-the-envelope sketch after this list).
3rd I normally set as many shards as ingest nodes (theoretically to optimize ingestion), but I do not reduce the shards once I close the index. I have seen that it can be done with shrink or merge, but that normally requires moving the data to another index set, and that is a problem for me right now. Is there a way to reduce the shards once the index is closed, leaving the full X shards only in the deflector? (See the shrink sketch below for what I mean.)
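To show where those shard counts come from, here is a rough back-of-the-envelope calculation. The values are just my usual assumptions (about 40 GB per index, 6 primaries, 1 replica, roughly 30 TB retained), not exact figures from any one cluster:

```python
# Rough back-of-the-envelope for the shard counts mentioned above.
# All values are assumptions based on my usual setup, not exact figures.
index_size_gb = 40         # rotation by size at ~40 GB per index
primaries = 6              # one primary shard per ingest node
replicas = 1               # one replica of the data
retained_tb = 30           # total data kept in the index set

indices = retained_tb * 1024 // index_size_gb          # ~768 retained indices
total_shards = indices * primaries * (1 + replicas)    # ~9,200 shards in the cluster
print(indices, total_shards)
```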
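For reference, this is roughly the shrink workflow I have seen described (a minimal sketch against the REST API with hypothetical index and node names, not something I have applied in production), which is why I ask whether something like it can be done without moving the data to another index set:

```python
# Minimal sketch of the _shrink workflow on an already-rotated index.
# Index and node names are hypothetical; the endpoint assumes a local node.
import requests

ES = "http://localhost:9200"
SOURCE = "graylog_1234"            # hypothetical rotated index (no longer behind the deflector)
TARGET = SOURCE + "_shrunk"

# 1) Block writes and force a copy of every shard onto a single node,
#    which is a prerequisite for shrinking.
requests.put(f"{ES}/{SOURCE}/_settings", json={
    "settings": {
        "index.blocks.write": True,
        "index.routing.allocation.require._name": "es-node-1"  # hypothetical node name
    }
})

# 2) Shrink to a single primary shard (the target count must be a factor of
#    the source shard count) and clear the temporary allocation filter.
requests.post(f"{ES}/{SOURCE}/_shrink/{TARGET}", json={
    "settings": {
        "index.number_of_shards": 1,
        "index.number_of_replicas": 1,
        "index.routing.allocation.require._name": None,
        "index.blocks.write": None
    }
})
```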
Each Elasticsearch node normally has 8 CPUs and 60 GB of dedicated RAM, and the mappings hold around 800-1,000 fields in total. The environments currently run on Kubernetes, but I also have environments without Kubernetes. The version is 7.17.3.
I appreciate any help. I have the clusters closely monitored with Grafana and other tools, so any data you need is at your disposal.
Thanks and best regards!