I'm seeing this consistently happen on only one host in my cluster; the other
hosts don't have this problem. What could be the reason, and what's the
remedy?
I'm running ES on an EC2 m1.xlarge host: 16GB of RAM on the machine, and I
allocate 8GB to ES.
e.g.
[2014-02-25 09:14:38,726][WARN ][monitor.jvm ] [Lunatica]
[gc][ParNew][1188745][942327] duration [48.3s], collections [1]/[1.1m],
total [48.3s]/[1d], memory [7.9gb]->[6.9gb]/[7.9gb], all_pools {[Code
Cache] [14.5mb]->[14.5mb]/[48mb]}{[Par Eden Space]
[15.7mb]->[14.7mb]/[66.5mb]}{[Par Survivor Space]
[8.3mb]->[0b]/[8.3mb]}{[CMS Old Gen] [7.8gb]->[6.9gb]/[7.9gb]}{[CMS Perm
Gen] [46.8mb]->[46.8mb]/[168mb]}
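For what it's worth, the warning above can be unpacked mechanically; a small sketch over the log line posted here (the key points being the 48.3s pause and that CMS Old Gen sits at 7.8gb of a 7.9gb limit before collecting, i.e. the old generation is essentially full):

```shell
# Decode the monitor.jvm warning: a single collection paused for 48.3
# seconds, and the old generation was nearly at capacity beforehand,
# which is what drives pauses this long.
line='[gc][ParNew][1188745][942327] duration [48.3s], collections [1]/[1.1m], memory [7.9gb]->[6.9gb]/[7.9gb]'

# Pull out the pause duration from the log line:
echo "$line" | grep -o 'duration \[[^]]*\]'
```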
Depends on a lot of things: Java version, ES version, document size and count,
index size and count, number of nodes.
What are you monitoring the cluster with, as well?
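Even without a monitoring stack, you can compare heap pressure across nodes with the node stats API (`/_nodes/stats/jvm`, available in ES 1.x). The field to watch is `jvm.mem.heap_used_percent`; one node pinned near 100% while the rest sit lower points at uneven load on that node. A sketch, with host/port assumed and a made-up response fragment standing in for the live call:

```shell
# Live call against the cluster (host/port are assumptions):
#   curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'
#
# Illustrative response fragment (not real cluster output) so the
# extraction below can be shown; compare heap_used_percent per node:
stats='{"nodes":{"n1":{"name":"Lunatica","jvm":{"mem":{"heap_used_percent":99}}},"n2":{"name":"other","jvm":{"mem":{"heap_used_percent":55}}}}}'
echo "$stats" | grep -o '"heap_used_percent":[0-9]*'
```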
Is this node showing more activity than the others? What kind of workload is
this, indexing or search? Are caches used, for filters/facets?
Full GC runs caused by "CMS Old Gen" may be a sign that you are close to your
memory limits and need to add nodes, but it could also mean a lot of other
things.
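If the caches turn out to be the culprit, ES 1.x lets you cap them in `elasticsearch.yml` so fielddata and the filter cache can't grow to fill the old generation. A sketch; the percentages below are illustrative starting points, not recommendations from this thread:

```shell
# Cap the fielddata and filter caches (ES 1.x settings) by appending
# to the node's config file; path and values are assumptions here.
cat >> ./elasticsearch.yml <<'EOF'
indices.fielddata.cache.size: 40%
indices.cache.filter.size: 10%
EOF
```

Restart the node after changing these, and check `/_nodes/stats/indices` to see whether fielddata or filter-cache memory was actually what was filling the heap.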