Optimal configuration of the ES cluster


My ES 7.8.0 cluster contains 5 nodes with 120 GB of heap memory and 32 CPUs. I also have a fairly large index with 6.6 million docs; according to the _cat/indices API, its store.size is 59.8gb and pri.store.size is 30.1gb. The number of primary shards is 3, with 1 replica. Is this an optimal number of shards? I have read that every shard should hold roughly 30 GB of data.
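For reference, with pri.store.size at 30.1gb spread over 3 primaries, each primary holds roughly 10 GB, which is comfortably below the ~30 GB guideline. Per-shard sizes can be confirmed with the standard _cat/shards API; this is just a sketch, with `my-index` standing in for the real index name:

```
GET _cat/shards/my-index?v&h=index,shard,prirep,store
```

The `h` parameter limits the output to the columns of interest; `prirep` distinguishes primaries (`p`) from replicas (`r`).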

At the same time, every node of my cluster is master, data and ingest. Is that ok, or should I configure it as 3 master nodes and 2 data nodes? Currently I'm fine with the search speed and availability of the cluster. On top of that, I periodically catch a warning like this in the log file:
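For context, in ES 7.8 node roles are still set per node in elasticsearch.yml with the legacy boolean flags (the `node.roles` list syntax only arrived in 7.9). A dedicated master-eligible node would be sketched like this:

```yaml
# elasticsearch.yml on a dedicated master-eligible node (ES 7.x boolean syntax)
node.master: true
node.data: false
node.ingest: false
```

A dedicated data node would invert the first two flags.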

took [17.6s], which is over [10s], to compute cluster state update for [put-mapping

I find this strange given the fairly generous resources I have allocated to the cluster.
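One way to see whether cluster state updates such as put-mapping are queueing up is the pending tasks API; a minimal check would be:

```
GET _cluster/pending_tasks
```

Tasks waiting on the cluster-state update thread show up here with their source and time in queue.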

Thank you!

It looks like you either have very large or constantly expanding mappings, which is causing updates to be slow. How many fields does your index have? Have you overridden any of the default settings?

Are you using parent-child or nested mappings?

I have 2 dynamic mappings with path_match and no nested or parent-child mappings.
My _settings endpoint returns:

"search": {
      "max_buckets": "100000",
      "default_keep_alive": "1m",
      "max_keep_alive": "5m"
}

My _mappings endpoint also returns quite a big JSON, due to the dynamic mappings.
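For illustration, they follow the usual path_match shape. This is a generic example of that shape in 7.x, not my actual templates; the index and field names are made up:

```json
PUT my-index
{
  "mappings": {
    "dynamic_templates": [
      {
        "strings_under_labels": {
          "path_match": "labels.*",
          "mapping": { "type": "keyword" }
        }
      }
    ]
  }
}
```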

How many fields do you have in your mappings? Have you overridden the default limit?

"index.mapping.total_fields.limit": 200000

Yes, we have this index setting

That does not sound like a good value for this setting and will most likely cause problems. No wonder mapping changes result in slow cluster state updates, as these are performed in a single thread.
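For reference, the default for this limit is 1000 fields. The effective value for an index can be inspected like this (a sketch; `my-index` is a placeholder):

```
GET my-index/_settings?include_defaults=true&filter_path=**.total_fields
```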

I can think of no easy fix, so I suspect you may either need to live with this or reconsider how you handle mappings to avoid it.
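On the "reconsider how you handle mappings" side, one common approach, assuming most of those fields come from arbitrary key-value data: the 7.x `flattened` field type maps a whole object as a single field, so new keys do not grow the mappings. A sketch, with placeholder index and field names:

```json
PUT my-index-v2
{
  "mappings": {
    "properties": {
      "labels": { "type": "flattened" }
    }
  }
}
```

The trade-off is that all leaf values under a flattened field are indexed as keywords, so numeric range queries and full-text analysis on them are limited.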


Alright, got it, thank you for the answer!
Could you also please advise on node roles and shard sizes for data like mine?

Your problem seems to be with the mappings and not necessarily with shard size or distribution. I have never seen a use case with anywhere near that number of mapped fields, so I have no advice to give. This is uncharted territory as far as I know.
