ES 2.4 GC time % unmanageable. Any tips?

Hey there, I am trying to find the sweet spot for my application.
I got these results:

DATA nodes with 8 CPUs / 32 GB RAM, 16 GB heap and 8 GB young generation:
GC time % is generally around 5% (which is in any case the limit).

DATA nodes with 8 CPUs / 64 GB RAM, 16 GB heap and 8 GB young generation:
GC time % is generally over 10%.

In the second scenario, I doubled the DATA nodes' RAM to try to handle more events. Instead, I can process fewer events, because the DATA nodes' resources are stressed by the GC time %.
All the surrounding details (input traffic, index sizes, number of shards, etc.) are the same.
I have read several blog posts and other docs, but I cannot find a way to understand this behaviour correctly. I just know that I should find a way to decrease that GC time % value.
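To be concrete about the metric: by GC time % I mean the share of wall-clock time the JVM spends in young + old collections. A minimal sketch of how that can be sampled from the nodes stats API (this is an illustration, not the exact tool I use; it assumes the Python `requests` library and a node reachable on localhost:9200):

```python
import time
import requests

ES_URL = "http://localhost:9200"   # assumption: point this at one of your nodes
INTERVAL = 30                      # seconds between the two samples

def total_gc_millis():
    """Return {node name: cumulative young + old GC time in ms} from /_nodes/stats/jvm."""
    stats = requests.get(ES_URL + "/_nodes/stats/jvm").json()
    return {
        node["name"]: sum(c["collection_time_in_millis"]
                          for c in node["jvm"]["gc"]["collectors"].values())
        for node in stats["nodes"].values()
    }

# Sample the cumulative counters twice and divide the delta by the interval.
before = total_gc_millis()
time.sleep(INTERVAL)
after = total_gc_millis()

for name, total in after.items():
    delta_ms = total - before.get(name, total)
    print("%s: GC time %% over the last %ss = %.1f%%"
          % (name, INTERVAL, 100.0 * delta_ms / (INTERVAL * 1000.0)))
```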

Is there anyone who can help me? I know the ES version is very old. I have another branch with the latest one, but the values cannot be compared.

What is the full output of the cluster stats API? Do you have any non-default cluster or JVM settings?

The cluster is composed of 3 dedicated MASTER nodes and 3 DATA nodes.
No particular settings. Regarding the JVM, I am just specifying Xmx, Xms and Xmn.
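For what it's worth, a quick sanity check that those flags actually reached the JVM can be done via the nodes stats API, which reports the heap and young-pool maximums. A rough sketch (again assuming the Python `requests` library and a node on localhost:9200):

```python
import requests

ES_URL = "http://localhost:9200"  # assumption: adjust to your cluster

stats = requests.get(ES_URL + "/_nodes/stats/jvm").json()
for node in stats["nodes"].values():
    mem = node["jvm"]["mem"]
    heap_max_gb = mem["heap_max_in_bytes"] / 1024.0 ** 3
    # Note: the "young" pool is the eden space, so it will read somewhat
    # smaller than -Xmn (which also covers the survivor spaces).
    young_max_gb = mem["pools"]["young"]["max_in_bytes"] / 1024.0 ** 3
    print("%s: heap max = %.1f GB, young (eden) max = %.1f GB"
          % (node["name"], heap_max_gb, young_max_gb))
```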
I cannot post the _stats output due to the limited number of characters allowed in a post :cry:

Then please make it available in some other way, e.g. through a gist, paste.in, or some similar tool.
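If it helps, something along these lines would dump the full output to files you can attach to a gist (just a sketch; plain curl works equally well, and the endpoint and `requests` library here are assumptions):

```python
import json
import requests

ES_URL = "http://localhost:9200"  # assumption: adjust to your cluster

# The APIs discussed in this thread; _cluster/stats is what was asked for above.
ENDPOINTS = [
    ("/_cluster/stats", "cluster_stats.json"),
    ("/_stats", "indices_stats.json"),
    ("/_nodes/stats", "nodes_stats.json"),
]

for path, filename in ENDPOINTS:
    body = requests.get(ES_URL + path).json()
    with open(filename, "w") as f:
        json.dump(body, f, indent=2)
    print("wrote %s" % filename)
```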

_stats

https://paste.in/l0AIC4

_nodes/stats

https://paste.in/iXx0cw
