I have gone through this discussion:
Can I print out bitset cache sizes by filter (or what's eating my heap)?
Specifically, the reply by mvg (Martijn Van Groningen):
No, the bitset filter cache caches bitsets per nested field (only nested fields that have a parent nested field get cached).
The stats API just exposes what the entire cache is taking up.
If segments are removed by Lucene, any cache entries associated with them are removed too.
This also applies to the bitset filter cache. But other than that, there is no mechanism that purges the bitset filter cache.
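For context on what actually ends up in this cache: per the reply above, the cached bitsets belong to nested fields whose parent is itself a nested field. A mapping along these lines (index and field names are only illustrative, shown in 7.x mapping syntax) is the kind of structure that produces entries in the bitset filter cache:

```
PUT /my_index
{
  "mappings": {
    "properties": {
      "book": {
        "type": "nested",
        "properties": {
          "chapters": { "type": "nested" }
        }
      }
    }
  }
}
```

Here the inner nested field (chapters) has a parent nested field (book), so queries against it would cache the parent-filter bitsets.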
Can some light be shed on the relationship between Lucene segments and the clearing of this filter cache? Does it mean the only way to clear the cache is to actually delete the documents?
Is this cache eviction problem solved in any of the newer versions, or do I need to request a new feature?
Can we also get visibility into what is being cached in these filters?
Unfortunately the suggested solution, adding more memory, could not be applied: we have already maxed out the heap that can be allocated on a node, namely the ~31 GB limit for compressed object pointers. We couldn't add more memory, and the only way to clear the cache was to restart the node.
I have added a memory dump of a smaller instance that shows the objects that need to be cleared.
We found a way to clear this cache: using the blind clear-cache API call.
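The call we used looks roughly like this (index name is illustrative; without flags such as query=true the request clears every cache for the index, which is why I call it "blind"):

```
POST /my_index/_cache/clear

POST /my_index/_cache/clear?query=true
```

The second form restricts the clearing to the query cache only, if you don't want to drop the request and fielddata caches along with it.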
The other way to restrict the cache is to change its settings.
The ES query cache is controlled by two settings (we don't set any value, hence the defaults):
indices.queries.cache.size (defaults in the ES code to 10% of max heap)
indices.queries.cache.count (defaults in the ES code to 10,000 unique query cache entries)
Changing the cache count to 1,000, which was the default before it was raised in 5.3, helped to control how much memory was used for this type of cache.
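As a sketch, these are static node-level settings, so they would go into elasticsearch.yml on each node and need a node restart to take effect (the values below are only an example, not a recommendation):

```yaml
# elasticsearch.yml -- static settings, one entry per node
indices.queries.cache.size: 10%
indices.queries.cache.count: 1000
```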