How do I check what is causing high JVM load?

Hi,

We have been having some heavy load on our system, but I am not exactly sure what is causing it. Is there a way to check what has caused the high JVM load? I know high JVM load can be caused by a high number of shards, the number of requests, queries, etc., but without tangible figures it is hard to understand what exactly is causing it.

We have 5 data nodes, each with 64GB RAM and 12 CPU cores.
We have 114 indices and 988 shards spread across the 5 data nodes.

We are thinking of adding 2 more data nodes and then reallocating some of the indices onto them.

Hi there

Having JVM memory pressure above 75% is not necessarily a problem in itself, and it is often caused by having too many shards (each shard is a Lucene index and allocates resources such as memory).
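To put a rough number on how that adds up with the figures from your post (just back-of-the-envelope arithmetic, nothing cluster-specific):

```python
# Figures from the thread: 988 shards spread across 5 data nodes.
shards = 988
data_nodes = 5

shards_per_node = shards / data_nodes
print(f"~{shards_per_node:.0f} shards per node")  # -> ~198 shards per node
```

Each of those ~198 shards per node carries its own fixed Lucene overhead on the heap, which is why the shard count alone can drive memory pressure.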

You can always add more nodes, but maybe the first question is: why do you need this many shards? Do you run a lot of queries and need the parallelism?

Another common source of heavy memory use is expensive queries.
You can also check how the garbage collector is behaving.
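In case it helps: per-node heap usage and GC activity are exposed by `GET _nodes/stats/jvm`. A minimal sketch of pulling out the headline numbers, using a truncated, made-up response (the field names follow the nodes stats API; the node name and values are hypothetical):

```python
import json

# Truncated, hypothetical response from `GET _nodes/stats/jvm`.
raw = """
{
  "nodes": {
    "abc123": {
      "name": "data-node-1",
      "jvm": {
        "mem": {"heap_used_percent": 82},
        "gc": {
          "collectors": {
            "old": {"collection_count": 311, "collection_time_in_millis": 95000}
          }
        }
      }
    }
  }
}
"""

stats = json.loads(raw)
for node in stats["nodes"].values():
    jvm = node["jvm"]
    heap = jvm["mem"]["heap_used_percent"]
    old_gc = jvm["gc"]["collectors"]["old"]
    print(f'{node["name"]}: heap {heap}%, '
          f'old GC {old_gc["collection_count"]} runs, '
          f'{old_gc["collection_time_in_millis"]}ms total')
```

A heap that stays high between old-generation collections, or old GC runs that become frequent and long, is the pattern described in the memory pressure blog post below.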

Let me share a link to an Elastic blog post about understanding memory pressure: https://www.elastic.co/es/blog/found-understanding-memory-pressure-indicator/

And... most people know the JVM heap limit for Elasticsearch is 32GB (as you have), but in my personal experience I don't configure more than 30GB, because the garbage collector can be temperamental (that is just my own opinion).

Regards,
Ale

I would agree with Alejandro here. Your average shard size is pretty small, I would look to increase that to reduce some of your heap use.

Hi,

Thanks for all the replies!
I noticed the number of shards per index is a little high, and I'm wondering how many shards would be best for our indices. I took a snapshot of the top 25 largest indices we have: the biggest index has 53m documents at 170.4G, the 2nd largest has 38.7m, and the third largest has 13.8m. The top 20 indices all have more than 100k documents. We are aiming to continuously break the big indices down into smaller ones, but I have some doubts about what a reasonable size is for each index. Please see my questions below.

  1. What is a reasonable size for each index to get good performance?
  2. How many shards are good for each index? For example, some of the indices have over 10 million documents, some over 1 million, and some over 100k.
  3. How do I check where a replica shard is located? If I set my index to have only one primary shard and the node that holds that primary goes down, will the replica still be available?

/Kenneth

20-50GB is a good shard size.
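A quick way to estimate a primary shard count from that target, using the one index size given earlier in the thread (the index name is made up, and 50GB is just the upper end of the rule of thumb above):

```python
import math

TARGET_SHARD_GB = 50  # upper end of the 20-50GB rule of thumb

# Only the largest index in the thread has a known size on disk.
index_sizes_gb = {"biggest-index": 170.4}  # hypothetical name

for name, size_gb in index_sizes_gb.items():
    primaries = max(1, math.ceil(size_gb / TARGET_SHARD_GB))
    print(f"{name}: {size_gb}GB -> {primaries} primary shards "
          f"(~{size_gb / primaries:.1f}GB each)")
```

On question 3: `GET _cat/shards?v` lists every shard with a p/r flag (primary/replica) and the node it lives on. Elasticsearch never allocates a replica on the same node as its primary, so if the node holding the primary goes down, the replica on another node is promoted and the data stays available.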

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.