Elasticsearch starts with quite a bundle of thread pools, and Netty uses some
threads as well, so IMHO every effort should be made to reduce memory waste,
and the thread stack size is one candidate.
If you know the maximum amount of stack memory a thread's frames will
allocate, you can assume it is safe to decrease the stack size limit under
certain additional assumptions.
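One way to get a feel for how deep a thread can go under a reduced stack is to create a thread with an explicit stack size and count frames until overflow. This is just an illustrative probe, not how ES sizes its pools; note that the `stackSize` argument of the `Thread` constructor is only a hint that the JVM may round or ignore on some platforms:

```java
public class StackDepthProbe {
    // Number of nested frames reached before StackOverflowError.
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) throws InterruptedException {
        // Request a 256 KiB stack for the probe thread (a hint, not a guarantee).
        Thread t = new Thread(null, () -> {
            try {
                recurse();
            } catch (StackOverflowError e) {
                System.out.println("Frames before overflow: " + depth);
            }
        }, "stack-probe", 256 * 1024);
        t.start();
        t.join();
    }
}
```

The frame count you get depends on the JVM, the platform, and how large each frame of the recursing method happens to be, which is exactly why a single Xss value is hard to pick.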
The challenge is to find a good value, because stack space consumption is not
really comparable across different CPU architectures and operating systems.
The stock JVM has a vendor built-in stack size default that depends on the
CPU architecture and the operating system. But are those defaults really
suitable under all conditions? E.g. Solaris SPARC 64-bit has a JVM Xss
default of 512k because of the larger address pointers, and Solaris x86 has
320k. Linux has lower limits, down to 256k. Windows 32-bit with Java 6 has a
default of 320k, and Windows 64-bit even 1024k of stack space.
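On HotSpot you can inspect the effective default for your own platform rather than guessing from tables. A minimal sketch using the HotSpot-specific diagnostic MXBean (this API is not available on non-HotSpot JVMs):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class PrintStackDefault {
    public static void main(String[] args) {
        // HotSpot-specific: read the effective ThreadStackSize flag (in KB).
        // A value of 0 means the VM falls back to the platform default.
        HotSpotDiagnosticMXBean hs =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        System.out.println("ThreadStackSize = "
                + hs.getVMOption("ThreadStackSize").getValue() + " KB");
    }
}
```

Running this under different JVM versions and platforms shows directly how the built-in default moves around, which is the root of the upgrade problem discussed below.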
Regards,
Jörg
On Tuesday, November 27, 2012 11:02:54 AM UTC+1, Lukáš Vlček wrote:
Hi,
What is the motivation to specifically setup Xss JVM parameter in
elasticsearch startup script? It seems that during the time the Xss went
from 128k [1] up to 256k [2].
Is it that the default JVM value was causing trouble in some environments?
Consuming way too much memory per thread, or something like that?
On the other hand, anybody who upgrades the JVM without upgrading ES can hit
["The stack size specified is too small"].
Thanks for the detailed info. That is also my understanding of the situation.
On the other hand, I was trying to point out that if you are running an older
ES (like 0.18.x, which comes with a lower Xss value) and the JVM is upgraded,
then ES might suddenly fail to start, complaining about the stack size being
too small. And this does not apply only to JDK 7.
Fortunately, changes like that between JVM versions are probably quite rare.