Hi All,
We are running a 15-node cluster with a 12 GB heap allocated per node, and we have set bootstrap.mlockall: true to prevent memory swapping.
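For context, this is roughly how those settings are applied on our nodes (paths and values illustrative; the heap is set via the ES_HEAP_SIZE environment variable and mlockall in elasticsearch.yml):

```
# Environment (e.g. /etc/default/elasticsearch or /etc/sysconfig/elasticsearch):
ES_HEAP_SIZE=12g

# elasticsearch.yml:
bootstrap.mlockall: true
```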
When I monitor the cluster via the Marvel plugin, it shows that all nodes are using 70-75% of their JVM heap, so I dug into the metrics to see where this memory is being used. We have not set the "indices.fielddata.cache.size" property in our configuration, so there is no limit on the fielddata cache size from our side. I was expecting fielddata to eventually consume the entire JVM heap, but that is not the case.
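For reference, a cap could be set like this in elasticsearch.yml (the 40% value is just an illustration, not something we have configured):

```
# Illustrative only -- we have NOT set this, so fielddata is unbounded
# (up to the fielddata circuit breaker) and entries are never evicted.
indices.fielddata.cache.size: 40%
```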
I checked fielddata memory usage with "GET /_nodes/stats/indices/fielddata", and it shows that each node is using about 4 GB, which is only 33% of the total heap (12 GB), and the eviction count is 0.
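In case it helps, this is how I am pulling a fuller per-node memory breakdown (assuming ES 1.x, since we are on Marvel; on 2.x "filter_cache" is renamed "query_cache"):

```
# Fielddata vs. filter cache vs. segment (Lucene) memory:
curl -s 'localhost:9200/_nodes/stats/indices/fielddata,filter_cache,segments?human&pretty'

# Overall JVM heap usage and memory pool stats:
curl -s 'localhost:9200/_nodes/stats/jvm?human&pretty'
```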
Could anyone help me identify how the rest of this heap is being used by ES?