Elasticsearch performance tuning doubts

Hi,

I have a 5 node cluster running elasticsearch 7.0.0.

Indices: 700
Total size: 350GB (Primary storage excluding replicas)

Some of the biggest indices are around 2.5-3GB in size.

Right now I have 1 primary shard and 1 replica for each index, and in Kibana it takes a lot of time to load those big indices.

Does increasing the number of primary shards improve search performance (from Kibana Discover)?

One big shard vs. many small shards: which performs better? (I'm talking about search performance here.)

Thanks.


Yes.

May I suggest you look at the following resources about sizing:

And https://www.elastic.co/webinars/using-rally-to-get-your-elasticsearch-cluster-size-right

I have a 5 node cluster running elasticsearch 7.0.0.
Indices: 700
Total size: 350GB (Primary storage excluding replicas)

After watching the videos, you will probably understand that in general you can have around 50GB per shard (it depends, as usual) and at most 20 shards per GB of heap.

Here you have 1400 shards on 5 nodes, i.e. 280 shards per node, which means that you need at least 14GB of heap per node.
But with 50GB per shard, most likely only 7 primaries are needed, so 14 shards including replicas, which is around 3 shards per node.

That would probably require less memory. I'd say that 8GB of heap could be enough.

Again, it depends, so you need to test that against your own scenarios. Look at the resources I linked to.
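If it helps, here is a quick way to check where the cluster currently stands against those numbers. This is just a rough sketch over the _cat APIs using Python; the host and any security settings are placeholders for your setup:

```python
# Rough sketch: compare the cluster against the ~50GB-per-shard and
# 20-shards-per-GB-of-heap guidelines. Host/credentials are placeholders.
import requests

ES = "http://localhost:9200"

# Shard count and index data per node.
print(requests.get(f"{ES}/_cat/allocation", params={
    "v": "true", "h": "node,shards,disk.indices"}).text)

# Biggest shards first, to compare against the ~50GB guideline.
print(requests.get(f"{ES}/_cat/shards", params={
    "v": "true", "h": "index,shard,prirep,store",
    "s": "store:desc", "bytes": "gb"}).text)

# Heap actually configured on each node.
print(requests.get(f"{ES}/_cat/nodes", params={
    "v": "true", "h": "name,heap.max,heap.percent"}).text)
```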


Wonderful! Thank you so much for the detailed info @dadoonet

I would like to add a few clarifications. As described in this webinar, the amount of heap used will depend on how you have indexed and mapped your data as well as how large and optimized your shards are. Larger shards often use less heap per document than smaller ones, which is why using large shards is generally recommended for efficiency.

The rule of thumb of 20 shards per GB of heap comes from users often having far too many small shards; at that limit the system is generally still working well. This recommendation is, however, a maximum number of shards and not a level you should necessarily expect to be able to reach. If you are using large shards, I would expect the number of shards per node to be lower than the prescribed limit.

I sometimes hear the recommendation interpreted as "I should be able to have 20 50GB shards per GB of heap", which is not correct. If this were the case, a node could hold 1TB of data (20 × 50GB) per GB of heap, which is generally very hard to achieve, at least without using frozen indices or extensive optimizations.

Each query is executed single-threaded against each shard, although multiple shard queries are run in parallel. The optimal number of shards for query performance therefore depends on the number of concurrent queries that compete for resources as well as whether it is CPU, heap or disk I/O that is limiting performance. Having multiple shards can often be faster than having a single one, but if you have too many shards, performance is likely to start deteriorating. The best way to find out is to benchmark with as realistic data and queries as possible.
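If you want a very rough starting point before setting up Rally, something like the sketch below can compare the same query against differently sharded copies of the data. The host, index names and query are placeholders, and bear in mind that caching will flatter repeated identical queries:

```python
# Rough comparison of server-side query time across differently sharded
# indices. Host, index names and the query itself are placeholders.
import statistics
import requests

ES = "http://localhost:9200"
QUERY = {"query": {"match": {"message": "error"}}, "size": 10}

def median_took_ms(index, runs=20):
    """Run the same query repeatedly and return the median 'took' in ms."""
    samples = []
    for _ in range(runs):
        resp = requests.post(f"{ES}/{index}/_search", json=QUERY)
        samples.append(resp.json()["took"])
    return statistics.median(samples)

# e.g. the same data reindexed with 1 primary shard vs 5 primary shards.
for index in ("data-1-shard", "data-5-shards"):
    print(index, median_took_ms(index), "ms")
```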

@dadoonet @Christian_Dahlqvist I tried to implement some of the suggestions, but I haven't noticed a considerable performance improvement. Can you please tell me if I am doing something wrong? The new configuration, compared to the old one described above, is:

Indices: 200 (525 primary shards, 525 replica shards)

Primary store size = 140GB (decreased from 220GB after reindexing from the 5.x cluster)

I have 5 nodes each with 16GB of JVM heap configured.

So, around 1,000 shards (including replicas) in the whole cluster ---> ~200 shards/node ---> ~12 shards/GB of heap.

So I was expecting very fast query results, but some queries take around 40-50 seconds to complete.

I know I haven't considered doc counts, but each of my indices has only around 50k-60k docs.

I even ran 'forcemerge' on my old indices. That didn't help either.
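(For reference, the force merge was roughly the call below; the index pattern is a placeholder.)

```python
# Roughly the force merge that was run; the index pattern is a placeholder.
import requests

requests.post(
    "http://localhost:9200/old-index-*/_forcemerge",
    params={"max_num_segments": 1},
)
```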

That still sounds like far too many shards given the size of the data. If you had a total of 10 indices with a single primary shard each, the average shard size would be 14GB, which is quite reasonable.
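For example, consolidating the small time-based indices could look roughly like the sketch below; all index names are placeholders, and you can keep whatever time granularity you need (e.g. monthly instead of daily):

```python
# Sketch: merge several small daily indices into one larger index with a
# single primary shard. All index names are placeholders.
import requests

ES = "http://localhost:9200"

# Create the target index with 1 primary shard and 1 replica.
requests.put(f"{ES}/logs-2019-04", json={
    "settings": {"number_of_shards": 1, "number_of_replicas": 1}
})

# Copy the small source indices into it (add the rest of the month here),
# then delete the originals once the reindex has completed.
requests.post(f"{ES}/_reindex", params={"wait_for_completion": "false"}, json={
    "source": {"index": ["logs-2019.04.01", "logs-2019.04.02"]},
    "dest": {"index": "logs-2019-04"}
})
```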

Thanks for the suggestion @Christian_Dahlqvist

I reduced the number of primary shards to 70 (I can still reduce further, but I have time-based indices and I would like some granularity). But I'm not seeing any performance improvement while querying. Instead, the query time increased a bit.

Then I increased replicas from 1 to 2. The query time improved a bit, but increasing replicas is not the preferred choice for me as the store size increases.

I have 8 CPU cores on each node, which I think is enough.

I ran 'docker stats' and got:

CONTAINER ID        NAME                        CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
345313a89d89        kibana                      0.73%               132.8MiB / 43.07GiB   0.30%               1.63GB / 210MB      210MB / 8.19kB      11
123410886856        elasticsearch               16.03%              19.02GiB / 43.07GiB   44.16%              1.44TB / 1.16TB     218GB / 1.52TB      125
345bc40c4c31        bbbbbbbbb                   0.04%               102.2MiB / 43.07GiB   0.23%               2.45MB / 656B       161MB / 0B          22
5678069c72f7        aaaaaaa                     0.66%               1.663GiB / 43.07GiB   3.86%               8GB / 1.51GB        586MB / 938MB       84
364067897a56        some_other_container        0.63%               315.6MiB / 43.07GiB   0.72%               102GB / 8.61GB      2.84GB / 164GB      44
456778b25f30        some_container              0.61%               481.8MiB / 43.07GiB   1.09%               307GB / 4.5GB       1.79GB / 457GB      46

What am I missing?

@Christian_Dahlqvist @dadoonet, I think the performance problem I was having was because I had some large text fields (large enough that they exceeded the maximum analyzed text length for highlighting! https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking-changes-7.0.html#_limiting_the_length_of_an_analyzed_text_during_highlighting)

I solved that length issue by indexing those large fields with term vectors.

Now the question is: how do I tune query performance from Kibana with these term_vector fields?

Is there a way I can exclude these large text fields when they are unnecessary? (They should still remain searchable.)
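For context, this is roughly what I have in mind (the index and field names below are placeholders): keep the big field mapped with term vectors so highlighting does not have to re-analyze the whole text, and exclude it from _source at search time so it is still searchable but not returned.

```python
# Sketch: map the big text field with term vectors, and exclude it from
# _source in search responses while it stays searchable.
# Index and field names are placeholders.
import requests

ES = "http://localhost:9200"

# Mapping for a new index.
requests.put(f"{ES}/my-index", json={
    "mappings": {
        "properties": {
            "big_text": {
                "type": "text",
                "term_vector": "with_positions_offsets"
            }
        }
    }
})

# Query the field, but leave it out of the returned hits.
resp = requests.post(f"{ES}/my-index/_search", json={
    "query": {"match": {"big_text": "something"}},
    "_source": {"excludes": ["big_text"]}
})
print(resp.json()["hits"]["total"])
```

In Kibana itself, I believe the index pattern's "Source filters" setting can hide the field from Discover in the same way.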

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.