Memory issue in 0.15.1

Hi,

We're seeing constant growth in memory usage when using 0.15.1. The
heap grew at a steady rate from 350M to about 600M overnight before
leveling off. The number of new objects added was quite small, and
there was not much activity on our server at night.

I ran jhat and found large numbers of byte arrays referenced from
org.elasticsearch.common.inject.internal.util.$CustomConcurrentHashMap$Impl$Segment
instances, pointing to byte arrays up to 33MB in size. A look at the
jhat histogram showed about 500MB of byte arrays on the heap. The
total size of objects in our store is considerably less than that.
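
For reference, the dump and analysis were done roughly like this (the PID and paths are illustrative, assuming a Sun JDK 6 install):

# Take a binary heap dump of the running node
jmap -dump:format=b,file=/tmp/es-heap.hprof <es-pid>
# Serve it at http://localhost:7000; the histogram page lists bytes per class
jhat -J-Xmx1g /tmp/es-heap.hprof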

We're running 2 nodes, with 2 indices per node and 5 shards per index.
The JMX console shows that total allocations in each node are less
than 60MB.

Can anyone help? Are there any known memory issues in 0.15.1?

Thanks,
Steve

Hi,

Out of curiosity, what is your Xmx limit? I am probably experiencing
something similar: even when the ES node has no activity (or very low
activity, such as running only the Node stats API every few seconds),
the heap fills at a slow but steady rate, and once around 2/3 of the
available memory is taken, GC kicks in and frees some of it. But I do
not see any problem with that (you may be talking about something
different, but the symptom sounds familiar; as I said, it does not
really harm the ES node in any way).
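
For context, the polling I mean is just something like this (assuming the default HTTP port 9200; the stats endpoint path here is the 0.15-era one, so adjust for your version):

# Hit the node stats API every few seconds, discarding the output;
# this just exercises the API the way my monitoring does
while true; do
  curl -s 'http://localhost:9200/_cluster/nodes/stats?pretty=true' > /dev/null
  sleep 5
done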

Regards,
Lukas


Hi,

The -Xmx limit is 650MB. There is about 500MB in the old gen, and forcing GC from jconsole doesn't free it.
I did see it level off too, but we were quite concerned and downgraded to 0.13.1, which had been running stable for a while, so I have no more information.
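
For anyone seeing the same thing: jmap -histo:live forces a full GC before printing its histogram, so it is a quick way to tell whether the old gen is genuinely referenced or just not collected yet. A sketch, with the PID as a placeholder:

# -histo:live triggers a full GC first, then lists only reachable objects;
# if byte[] still dominates afterwards, the memory really is referenced
jmap -histo:live <es-pid> | head -20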

It seems like a huge amount of memory for quite a small dataset.

Steve


Heya, I suspect that this is related to this: Issues · elastic/elasticsearch · GitHub. To make sure, I would be happy to download a heap dump if you have one handy and check what is taking the memory.

Lukas, the heap memory will increase in a JVM simply by the mere fact that visualvm is connected to it (JMX is very noisy when it comes to object allocation). The main question is whether it gets released or not.
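
An easy way to watch that without attaching a JMX client is jstat; for example (PID is a placeholder):

# Sample heap occupancy every 5 seconds; watch whether O (old gen %)
# actually drops each time FGC (full GC count) increments
jstat -gcutil <es-pid> 5000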

Shay, I experience this when using the admin REST API (I cannot use JMX). But I did not
consider this a bug; I just thought that ES creates some internal
objects that get GC'ed once the VM wants to.

Lukas

Just to make it crystal-clear: memory gets released for me in this case.

Hi,

I'm using 0.15.2 and have reached a point where the app is out of Old space and not responding...

It has been running for some time:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14444 root 20 0 1566m 1.1g 2164 S 16.8 65.2 156:58.59 java

root@ec2-50-17-67-9:/var/log/flume# jstat -gcutil 14444 250 7
S0 S1 E O P YGC YGCT FGC FGCT GCT
85.86 0.00 100.00 99.20 55.28 9546 162.878 385 555.758 718.636
85.86 0.00 100.00 99.20 55.28 9546 162.878 385 555.758 718.636
85.86 16.88 100.00 99.30 55.28 9547 162.941 386 555.758 718.699
85.86 16.88 100.00 99.30 55.28 9547 162.941 386 555.758 718.699
85.86 16.88 100.00 99.30 55.28 9547 162.941 386 555.758 718.699
85.86 16.88 100.00 99.30 55.28 9547 162.941 386 555.758 718.699
85.86 16.88 100.00 99.30 55.28 9547 162.941 386 555.758 718.699

Is this normal, or a memory leak? Is there something I need to do to prevent this?
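
One thing I'm considering is restarting with the standard HotSpot heap-dump-on-OOM flags, so the next failure leaves something to analyze (how the flags get passed depends on the startup script; JAVA_OPTS below is my assumption):

# Write a heap dump to the given path on the first OutOfMemoryError
export JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/mnt/storage/es-oom.hprof"
bin/elasticsearch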

Thanks,

Below is a tail of the elasticsearch.log file:

[2011-06-02 00:06:20,676][INFO ][monitor.jvm ] [Wisdom, Pete] [gc][ConcurrentMarkSweep] collection occurred, took [6.6s]
[2011-06-02 00:07:14,944][DEBUG][monitor.jvm ] [Wisdom, Pete] [gc][Copy][9540] took [0s]/[2.6m], reclaimed [0b], leaving [1gb] used, max [1gb]
[2011-06-02 00:08:03,526][DEBUG][monitor.jvm ] [Wisdom, Pete] [gc][Copy][9544] took [0s]/[2.6m], reclaimed [0b], leaving [1gb] used, max [1gb]
[2011-06-02 00:08:18,408][DEBUG][monitor.jvm ] [Wisdom, Pete] [gc][Copy][9545] took [0s]/[2.6m], reclaimed [0b], leaving [1gb] used, max [1gb]
[2011-06-02 00:08:49,145][DEBUG][monitor.jvm ] [Wisdom, Pete] [gc][Copy][9546] took [1.9s]/[2.7m], reclaimed [13mb], leaving [1gb] used, max [1gb]
[2011-06-02 00:09:08,873][DEBUG][monitor.jvm ] [Wisdom, Pete] [gc][Copy][9548] took [4ms]/[2.7m], reclaimed [12.8mb], leaving [1gb] used, max [1gb]
[2011-06-02 00:09:40,324][INFO ][monitor.jvm ] [Wisdom, Pete] [gc][ConcurrentMarkSweep][240] took [14.4s]/[1.8m], reclaimed [-109960b], leaving [1gb] used, max [1gb]
[2011-06-02 00:09:56,613][INFO ][monitor.jvm ] [Wisdom, Pete] [gc][ConcurrentMarkSweep][241] took [14.7s]/[1.8m], reclaimed [98.6kb], leaving [1gb] used, max [1gb]
[2011-06-02 00:10:04,743][DEBUG][monitor.jvm ] [Wisdom, Pete] [gc][Copy][9549] took [10ms]/[2.7m], reclaimed [12.4mb], leaving [1gb] used, max [1gb]
[2011-06-02 00:10:13,863][INFO ][monitor.jvm ] [Wisdom, Pete] [gc][ConcurrentMarkSweep][242] took [14.7s]/[1.8m], reclaimed [4mb], leaving [1gb] used, max [1gb]
[2011-06-02 00:10:30,123][INFO ][monitor.jvm ] [Wisdom, Pete] [gc][ConcurrentMarkSweep][243] took [14.4s]/[1.8m], reclaimed [-116184b], leaving [1gb] used, max [1gb]
[2011-06-02 00:10:46,414][DEBUG][monitor.jvm ] [Wisdom, Pete] [gc][Copy][9550] took [7ms]/[2.7m], reclaimed [12.8mb], leaving [1gb] used, max [1gb]
[2011-06-02 00:10:47,422][INFO ][monitor.jvm ] [Wisdom, Pete] [gc][ConcurrentMarkSweep][244] took [14.8s]/[1.8m], reclaimed [2.8mb], leaving [1gb] used, max [1gb]
[2011-06-02 00:11:03,552][INFO ][monitor.jvm ] [Wisdom, Pete] [gc][ConcurrentMarkSweep][245] took [14.7s]/[1.8m], reclaimed [-92944b], leaving [1gb] used, max [1gb]
[2011-06-02 00:11:20,743][INFO ][monitor.jvm ] [Wisdom, Pete] [gc][ConcurrentMarkSweep][246] took [14.8s]/[1.8m], reclaimed [-5925720b], leaving [1gb] used, max [1gb]

Also the size of the data dir:

root@ec2-50-17-67-9:/mnt/storage/elasticsearch# du -h --max-depth=1 .
12K ./config
314M ./data
314M .