Identifying how Memory is being used in ES

Hi All,

We are running a 15-node cluster with a 12 GB heap allocated per node. We have set bootstrap.mlockall: true to prevent memory swapping.
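For reference, the relevant settings look roughly like this (the heap is set via the environment; the file locations are just examples and may differ per install):

# environment (e.g. /etc/default/elasticsearch) - heap size per node
ES_HEAP_SIZE=12g

# elasticsearch.yml - lock the heap in RAM so the OS cannot swap it out
bootstrap.mlockall: true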

When monitoring the cluster via the Marvel plugin, it shows that all nodes are using 70-75% of JVM heap, so I dug into the metrics to see where this memory is going. We have not defined the property "indices.fielddata.cache.size" in the configuration, so we are not imposing any limit on fielddata memory. I was expecting the entire JVM heap to be used by fielddata, but that is not the case.
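To break the heap usage down per node, something like this should give a quick overview (I believe these column names are correct for recent versions; GET /_cat/nodes?help lists what your version actually supports):

GET /_cat/nodes?v&h=name,heap.percent,fielddata.memory_size,query_cache.memory_size,request_cache.memory_size,segments.memory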

I checked the fielddata memory usage with "GET /_nodes/stats/indices/fielddata", which shows that each node is using about 4 GB, only 33% of the total heap (12 GB), and the eviction count is 0.
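Since fielddata accounts for only ~4 GB, the rest of the heap presumably sits in the other consumers: Lucene segment memory (terms index, norms, etc.), the query cache and the request cache. The same node-stats API can show those too, e.g.:

GET /_nodes/stats/indices/segments?human=true
GET /_nodes/stats/indices/query_cache?human=true
GET /_nodes/stats/indices/request_cache?human=true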

Could anyone help me identify how this memory is being used by ES?


Any update on the above query?

What is the default size of the fielddata cache? Is it the same as the total heap size or not?
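My understanding is that it is unbounded by default and only the fielddata circuit breaker limits how much can be loaded. If we decide to cap it ourselves, I believe the settings in elasticsearch.yml would look something like this (30% is just an example value):

# cap the fielddata cache; evictions start once this limit is reached
indices.fielddata.cache.size: 30%

# circuit breaker for fielddata loading; I believe this defaults to 60% of heap
indices.breaker.fielddata.limit: 60%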

In the system logs, we are seeing entries like the one below, which indicate that most of the memory is held in the old GC pool.

[2017-05-13 06:45:42,317][WARN ][monitor.jvm ] [node2] [gc][old][6851992][21024] duration [11.2s], collections [1]/[11.7s], total [11.2s]/[27.5m], memory [11.3gb]->[8.3gb]/[14.9gb], all_pools {[young] [450.6mb]->[49.3mb]/[532.5mb]}{[survivor] [66.5mb]->[0b]/[66.5mb]}{[old] [10.8gb]->[8.2gb]/[14.3gb]}

Can we do something to clear the old GC pool memory?
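One option I am considering (assuming it is the fielddata cache that is filling the old generation) is dropping it manually and letting the next collection reclaim the space, e.g.:

POST /_cache/clear?fielddata=true

That would only be temporary relief, though; if fielddata grows back, capping indices.fielddata.cache.size as above seems like the more durable fix.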
