Service heap size configuration

Hi all,
I have set up a two-node Elasticsearch cluster (0.20.6) under CentOS and
have been managing it using the service wrapper. I need to increase the
heap size to deal with an increase in index count.

I have set the environment variables ES_MIN_MEM and ES_MAX_MEM to 1280 in
/etc/profiles. The command set|grep ES_ confirms that the values are set.

I have also set ES_HEAP_SIZE=1280 in the service wrapper's own
elasticsearch.conf.
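
For reference, here is roughly what I have set (paths and exact syntax from
memory, so treat this as a sketch; I am not sure whether the env vars need an
explicit unit such as "m"):

  # /etc/profiles
  export ES_MIN_MEM=1280
  export ES_MAX_MEM=1280

  # the service wrapper's elasticsearch.conf (I believe the wrapper uses the
  # set.default. prefix, but check your own file)
  set.default.ES_HEAP_SIZE=1280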

Using bigdesk, I can see that the allocated heap has risen as expected.
Still, the memory used by ES stays under 1 GB; that is, GC is still triggered
below the old 1 GB limit. Am I missing something?

Dionisis Koumouras


I'm not familiar with bigdesk, but I can tell you that Java applications
don't fill their entire heap before garbage collecting for the first time.
They move stuff from pool to pool as they garbage collect.

The first thing I'd check to make sure your changes took effect is the
resident memory size of the application. I usually run ps aux | grep
elasticsearch and check the sixth column (RSS). That should be around 120%
of what you set ES_HEAP_SIZE to.
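
If you just want the number, something along these lines should print the
resident set size in kB (untested here, and the pattern you grep for may
need adjusting for your install; it prints one line per matching process,
i.e. the wrapper plus the JVM):

  ps -o rss= -p "$(pgrep -d, -f elasticsearch)"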

When Java applications run out of memory they spend forever doing full
garbage collections. You can check the number of full garbage collections
your application has done, as well as how much time it has spent on those
collections, with sudo jstat -gc <pid>. The FGC and FGCT columns are what
you are looking for; if those are going up quickly, your application is out
of heap space.
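
To watch it over time rather than taking a single snapshot, something like
this works (the interval syntax may vary slightly between JDK versions):

  # one sample every 5 seconds; FGC = full GC count, FGCT = total time
  # spent in full GCs, in seconds
  sudo jstat -gc <pid> 5s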

Nik


Thanks for the tips, Nikolas.

Following your advice, ps reported that the JVM consumes 1152648 kB, less
than what it's supposed to, I guess. top also reports 1.1g as resident
memory.

I tried the nodes stats API. Here's part of the output:

"jvm" : {
"timestamp" : 1374586439575,
"uptime" : "3 hours, 11 minutes, 54 seconds and 188 milliseconds",
"uptime_in_millis" : 11514188,
"mem" : {
"heap_used" : "856mb",
"heap_used_in_bytes" : 897619760,
"heap_committed" : "1.2gb",
"heap_committed_in_bytes" : 1307312128,
"non_heap_used" : "37.5mb",
"non_heap_used_in_bytes" : 39423472,
"non_heap_committed" : "58mb",
"non_heap_committed_in_bytes" : 60837888,

So, bigdesk is on par with the nodes stats API (I guess that's where it
gets the data it presents). Still, I'm confused, since both report a
different amount of memory than ps/top. If the ps reading is correct, it
would explain why all garbage collections start under the previous 1 GB
limit.

It could, of course, also mean that ES does not need the extra memory just
yet. Garbage collections only start about twice a minute and have taken 14
seconds in total over 3 hours of operation.


14 seconds in 3 hours means everything is a-ok! You may want to check ps
aux | grep elasticsearch and see if you are missing a -Xms parameter that
matches your -Xmx. I generally like it better when they match, because the
application allocates all the heap up front, so there are fewer surprises.
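
Something like this should pull just the heap flags out of the command line
so you can see whether both are there (untested, adjust the pattern as
needed):

  ps aux | grep '[e]lasticsearch' | grep -oE -- '-Xm[sx][^ ]+'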


The Java heap is broken into different generations: Perm, Old and Young. GC
can happen at different times for different generations. You can spend days
learning about JVM heaps. As Nikolas said: you are a-ok.
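
If you want to see the generations directly, jstat can break the heap down
per pool; something like this shows occupancy as percentages (the column
names depend on the JVM version, but roughly E = Eden, O = Old, P = Perm):

  sudo jstat -gcutil <pid> 5s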

--
Ivan
