My question is: is the field data cache always stored in the old generation of the JVM heap?
The field data cache is configured as a fraction of the total heap, not of the old generation. But in my cluster, a huge aggregation query can trigger a long full GC, and the old generation keeps growing.
Should I size the field data cache based on the old generation size instead?
The JVM decides in which part of the heap (young or old generation) it places an object, and this changes over time: most objects are allocated in the young generation and are eventually promoted to the old generation if they survive enough garbage collection cycles. Since field data cache entries are long-lived, it is almost certain that they end up in the old generation.
I assume that by "configuration of the field data cache" you mean the setting
indices.breaker.fielddata.limit of the field data circuit breaker. You should define this setting based on how much heap you have available in total, independent of how large the individual generations of the heap are. You should also not let your heap grow dynamically; instead, set the minimum and maximum heap size to the same value in
config/jvm.options (I'm assuming Elasticsearch 5 or later).
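For example, pinning the heap could look like this (the 4g value is purely illustrative; pick a size appropriate for your machine, and note that Elastic recommends staying below the compressed-oops threshold of roughly 32 GB):

```
# config/jvm.options — set min and max heap to the same value
# so the heap never resizes at runtime
-Xms4g
-Xmx4g
```

The circuit breaker limit is then expressed relative to that total heap, e.g. in config/elasticsearch.yml:

```
# percentage of total heap, not of the old generation
indices.breaker.fielddata.limit: 40%
```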