Today I upgraded from 0.19.3 to 0.20.3.
I also upgraded from jdk 1.6.0_14 to 1.7.0_13.
I did a full cluster shutdown, replaced ES, then started the cluster.
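(A full cluster shutdown in this ES line can be done with the cluster-wide _shutdown API; localhost:9200 stands in for one of the nodes:)

# ask every node in the cluster to shut down
curl -XPOST 'http://localhost:9200/_shutdown'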
While starting, some nodes (not just one) were restarted by the wrapper.
Here's the wrapper log.
STATUS | wrapper | 2013/02/07 18:55:51 | JVM received a signal UNKNOWN (6).
STATUS | wrapper | 2013/02/07 18:55:51 | JVM process is gone.
ERROR | wrapper | 2013/02/07 18:55:51 | JVM exited unexpectedly.
STATUS | wrapper | 2013/02/07 18:55:55 | Launching a JVM...
INFO | jvm 2 | 2013/02/07 18:55:56 | WrapperManager: Initializing...
Once this happens, the number_of_nodes in /_cluster/health increases.
I have 19 nodes; it goes to 18 while a node is restarting, then comes back as 20 (not 19).
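For reference, this is visible with a plain health check (again, localhost:9200 stands in for any node):

# number_of_nodes should settle back at 19 after a restart, but shows 20
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'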
elasticsearch/bin/service/elasticsearch.conf has this JVM config:
wrapper.java.additional.1=-Delasticsearch-service
wrapper.java.additional.2=-Des.path.home=%ES_HOME%
wrapper.java.additional.3=-Xss256k
wrapper.java.additional.4=-XX:+UseG1GC
wrapper.java.additional.5=-XX:+HeapDumpOnOutOfMemoryError
wrapper.java.additional.6=-Djava.awt.headless=true
wrapper.java.additional.7=-XX:PermSize=256m
wrapper.java.additional.8=-XX:MaxPermSize=256m
wrapper.java.additional.9=-Dcom.sun.management.jmxremote
wrapper.java.additional.10=-Dcom.sun.management.jmxremote.port=3333
wrapper.java.additional.11=-Dcom.sun.management.jmxremote.authenticate=false
wrapper.java.additional.12=-Dcom.sun.management.jmxremote.ssl=false
Two questions:
What are the possible reasons the wrapper says "JVM received a signal UNKNOWN (6)"?
What does it indicate when number_of_nodes in _cluster/health is higher than the actual number of nodes?
Try removing the G1 garbage collector; based on reports we get, it's not stable yet. Prefer to stick with the default settings that come out of the box.
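In elasticsearch.conf that means dropping the -XX:+UseG1GC line. Note the wrapper reads the wrapper.java.additional.<n> entries sequentially, so renumber the remaining ones after removing it, roughly:

wrapper.java.additional.1=-Delasticsearch-service
wrapper.java.additional.2=-Des.path.home=%ES_HOME%
wrapper.java.additional.3=-Xss256k
wrapper.java.additional.4=-XX:+HeapDumpOnOutOfMemoryError
wrapper.java.additional.5=-Djava.awt.headless=true
wrapper.java.additional.6=-XX:PermSize=256m
wrapper.java.additional.7=-XX:MaxPermSize=256m
(the JMX entries, if you keep them, continue from 8)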
Also, if you can move away from JMX and use the ES stats APIs instead, it would be even better.
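The nodes stats API exposes the same JVM heap, GC, and thread numbers over plain HTTP; something like this (localhost:9200 is any node, and the jvm flag selects that section in the 0.20 line):

# per-node JVM stats, no JMX port needed
curl -XGET 'http://localhost:9200/_cluster/nodes/stats?jvm=true&pretty=true'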