Hello,
Have there been any updates to this? We are using nodes with 256GB of RAM and heap sizes of 96GB, and we are seeing this exact same issue where the filter cache grows above the configured limit. What I also discovered is that when I set the filter cache size to 31.9GB or lower the limit was enforced correctly, but anything above that and it was not.
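For reference, the cap on our nodes is set roughly like this in elasticsearch.yml (a sketch assuming the indices.cache.filter.size setting from that era; the values shown are illustrative rather than our exact config):

# elasticsearch.yml (sketch only)
# An absolute cap at or below 31.9gb was enforced correctly for us:
indices.cache.filter.size: 31.9gb
# The setting also accepts a percentage of the heap, e.g.:
# indices.cache.filter.size: 30%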
Thanks,
Daniel
On Friday, October 25, 2013 5:55:37 AM UTC-7, Benoît wrote:
Hi !
On Friday, October 25, 2013 2:06:58 PM UTC+2, Clinton Gormley wrote:
I've never seen the filter cache limit not being enforced. If you can provide supporting data, ie the filter cache size from the nodes_stats plus the settings you had in place at the time, that would be helpful.

Output of _cluster/settings and _nodes/stats?all=true is in the following gist: nodes stats and cluster setting · GitHub

The value is not really high right now, but 44.5gb is over 30% of the committed heap (127.8gb):

"filter_cache": {
    "memory_size": "44.5gb",
    "memory_size_in_bytes": 47819287444,
    "evictions": 0
},

I support Ivan's comment about heap size: the bigger the heap, the longer
GC takes. And using a heap above 32GB means the JVM can't use compressed
pointers. So better to run multiple nodes on one machine, using "shard
awareness" to ensure that you don't have copies of the same data on the
same machine.

Ok, I will think about it, but the machines are in production ...
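If we go down that route, I suppose each node's elasticsearch.yml would look roughly like this (a sketch only; the rack_id attribute name is just an example, not something from our cluster):

# elasticsearch.yml on every node of the same physical machine (sketch only)
# Tag the nodes with a custom attribute identifying the host:
node.rack_id: machine-01
# Spread copies of a shard across different rack_id values:
cluster.routing.allocation.awareness.attributes: rack_id
# Belt and braces: never put a primary and its replica on the same host:
cluster.routing.allocation.same_shard.host: true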
Regards
Benoît