But this time I have a suspicion that the indices.cache.filter.size limit might not be enforced.
The story is that the filter_cache has grown beyond its limit, up to 80% of the total JVM heap instead of the configured 30%.
At one point a GC longer than the 3 x 30s timeout made the node leave the cluster:
[2013-10-22 07:16:40,459][INFO ][discovery.zen ] [sissor1] master_left [[sissor2][sBQ1oCTbRsGexVcQpu466Q][inet[/192.168.110.90:9300]]], reason [failed to ping, tried [3] times, each with maximum [30s] timeout]
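To back this up, one can compare what each node reports for its filter cache against the configured limit. A minimal sketch (the hostname and config path are only examples, and the stats field names are the ones I believe the 0.90/1.x nodes stats API uses, so verify on your version):

  # What each node reports for its filter cache:
  curl -s 'http://localhost:9200/_nodes/stats?pretty' | grep -A 3 '"filter_cache"'

  # The limit configured on the node (config path is an example):
  grep 'indices.cache.filter.size' /etc/elasticsearch/elasticsearch.yml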
Is 256 Go/127.8 Go a typo for GB? You might be better off running two instances on a single machine, since giving the JVM more than 32GB of heap is detrimental.
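One way to see where that threshold sits on a given JVM is to print the final flag values at different heap sizes; above roughly 32GB the JVM can no longer use compressed object pointers (a sketch; the exact cut-off depends on the JVM version):

  # Compressed oops still on just under the threshold:
  java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops

  # Off once the heap is too large for compressed oops:
  java -Xmx40g -XX:+PrintFlagsFinal -version | grep UseCompressedOops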
I've never seen the filter cache limit not being enforced. If you can provide supporting data, i.e. the filter cache size from nodes_stats plus the settings you had in place at the time, that would be helpful.
I support Ivan's comment about heap size: the bigger the heap, the longer GC takes. And using a heap above 32GB means the JVM can't use compressed pointers. So it is better to run multiple nodes on one machine, using "shard awareness" to ensure that you don't have copies of the same data on the same machine.
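A minimal sketch of what that can look like in each instance's elasticsearch.yml (the attribute name "machine_id", its value, and the file path are placeholders, not settings taken from this thread):

  cat >> /path/to/node1/config/elasticsearch.yml <<'EOF'
  node.machine_id: box-01
  cluster.routing.allocation.awareness.attributes: machine_id
  EOF

With allocation awareness on a per-machine attribute, Elasticsearch is designed to avoid putting a shard and its replica on nodes that share the same attribute value; there is also cluster.routing.allocation.same_shard.host, which specifically targets multiple nodes running on one host.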
OK, I will think about it, but the machines are in production ...
Have there been any updates on this? We are using nodes with 256GB of RAM and heap sizes of 96GB, and we are seeing this exact same issue where filter cache sizes grow above the limit. What I also discovered is that when I set the filter cache size to 31.9GB or lower the limit works fine, but anything above that and it does not.
Thanks,
Daniel
For those who come to this thread through a search engine: Dan found the root cause of this issue.