0.5GB of extra heap will suffice for many reasonable workloads; you may need even less if your workload is very light, while heavy workloads may require more.
It's unfortunate that we cannot be more precise here, but it depends on so many other factors related to your workload. In your case, it sounds like you need to allow more than 0.5GB for your workload.
Why do you think the fielddata cache is the problem here?
Finally I realized that "total_deduplicated_mapping_size", "total_estimated_overhead", the extra heap for other overheads, and the fielddata cache size all need to be considered when sizing the heap.
Thanks, that seems like a compelling analysis. However I'm puzzled because the fielddata circuit breaker should prevent this, limiting the size of this cache to 40% of the heap by default. Do you know why it didn't? For instance, what does GET /_nodes/_all/stats/breaker?filter_path=nodes.*.breakers.fielddata report?
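If it helps, here is a minimal sketch of how I would compare the configured limit against the live breaker state (assuming default 8.x settings and nothing overriding indices.breaker.fielddata.limit):

# Configured fielddata breaker limit (defaults to 40% of the heap)
GET /_cluster/settings?include_defaults=true&filter_path=*.indices.breaker.fielddata

# Live breaker state: check limit_size, estimated_size and the tripped counter
GET /_nodes/_all/stats/breaker?filter_path=nodes.*.breakers.fielddata

If tripped stays at 0 while estimated_size keeps climbing towards the heap size, that would point at memory the breaker isn't accounting for.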
I checked it in my previous environment (ES service with a 3g heap and fielddata.cache.size: 1GB).
"GET /_nodes/_all/stats/breaker?filter_path=nodes.*.breakers.fielddata" returned the output below.
That makes sense, but what about in the case where you don't set fielddata.cache.size? The "limit_size": "1.1gb" should still apply.
Alternatively, the heap dump you captured showed 2.5GiB of heap being retained by the cache. Is that reflected in the circuit breaker and/or the stats (which you can compute from all the org.elasticsearch.index.fielddata.ShardFieldData#perFieldTotals maps)? Or is Elasticsearch not tracking some of this memory usage?
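For reference, a sketch of how to pull the per-field numbers that Elasticsearch itself is tracking (the fields parameter gives the per-field breakdown, which should roughly correspond to those perFieldTotals maps):

# Per-field fielddata memory as tracked by each node
GET /_nodes/_all/stats/indices/fielddata?fields=*&filter_path=nodes.*.indices.fielddata

# The same information in a compact tabular form
GET /_cat/fielddata?v

If these totals come to much less than the ~2.5GiB retained in the heap dump, that would suggest Elasticsearch is not tracking some of that memory.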
I retested the ES service with a 3g heap (without setting fielddata.cache.size).
ES returned the fielddata stats below before it stopped. Maybe the fielddata circuit breaker didn't control it.
Thank you for fixing the bug, but I need a bit more help.
As I said above, the ES service was running with a 3g heap and fielddata.cache.size: 1GB.
And when I add more indices, more heap is required, but the "total_deduplicated_mapping_size" and "total_estimated_overhead" values only increase a little.
To confirm I have a sufficient heap size, what else do I have to check?
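For what it's worth, here is how I have been reading those two values, assuming they are the ones exposed under indices.mappings in the cluster stats and node stats APIs:

# Deduplicated mapping size for the whole cluster
GET /_cluster/stats?human&filter_path=indices.mappings

# Estimated per-node mapping overhead
GET /_nodes/_all/stats?human&filter_path=nodes.*.indices.mappings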
I don't have a good answer to this (at least nothing more specific than "your workload"). If you limit the fielddata cache to 1GiB, what else is consuming too much heap in your system?
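As a rough sketch, these are the first places I would look for the other heap consumers (the _cat column names below are the standard ones; adjust as needed):

# Quick per-node view of the usual heap consumers
GET /_cat/nodes?v&h=name,heap.percent,fielddata.memory_size,query_cache.memory_size,request_cache.memory_size,segments.memory

# All circuit breakers, not just fielddata
GET /_nodes/_all/stats/breaker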
I also use ES 5.6. In ES 5.6, the total number of org.elasticsearch.index.IndexService objects is equal to the number of open indices.
But in ES 8.6, the total number of org.elasticsearch.index.IndexService objects is equal to the number of all indices. Why are so many IndexService objects loaded in the service? Can I reduce the count to fit a small heap?
I'm not sure how this differs from your previous question. The behaviour in 5.6 that you describe was essentially due to a bug. There should be one IndexService for every index, open or closed, although the closed ones will be quite lightweight and won't load any field data.
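You can't reduce the IndexService count short of deleting indices, but if some of them aren't being searched right now, closing them keeps the IndexService around in its lightweight form and stops it loading field data. A minimal sketch (my-unused-index is just a placeholder):

# See which indices are open and which are closed
GET /_cat/indices?v&h=index,status,pri,docs.count&s=status

# Close an index you don't currently need to search
POST /my-unused-index/_close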