Memory problems

Just to let you know, Kimchy has tracked down and fixed a memory leak. You
can see the commit here:

Native (java) process memory leak, closes #1118. · elastic/elasticsearch@1033249 · GitHub

If people can test this out and let us know if it works, that'd be great.

Kind Regards,
-Mathew Davies.

On 8 July 2011 01:59, Shay Banon shay.banon@elasticsearch.com wrote:

It's possible; I am trying to track this down and check.

On Friday, July 8, 2011 at 12:28 AM, lmader wrote:

We are not using sigar directly, unless ES does something with it. We
are running ES as a service.

Is it possible anything in the network layer or nio stuff could be
leaking?

Thanks so much,
Lar

On Jul 7, 7:20 am, Shay Banon shay.ba...@elasticsearch.com wrote:

That's a difficult one to figure out. elasticsearch does not use native
memory (by default), and this leak can come from allocations of native
memory, bugs in native libraries used, or leaks in the java process itself.
Are you using sigar within that app?

On Thursday, July 7, 2011 at 2:17 AM, lmader wrote:

I'm not sure I follow your suggestion: As I mentioned, the memory
leak is not in the JVM memory - not in the regular heaps. The JVM
monitor shows normal memory usage and garbage collection patterns.
However, the java process memory grows until all system memory is gone.
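
One way to make that distinction concrete is to log the JVM's own heap figures
next to the operating system's view of the same process. The sketch below is a
standalone illustration (the class name is made up, and it assumes a Linux
/proc filesystem, since we're on linux here): it prints heap usage from the
MemoryMXBean alongside the process RSS, and a gap that keeps widening points at
native, off-heap growth rather than a heap leak.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.nio.file.Files;
import java.nio.file.Paths;

public class HeapVsRss {
    public static void main(String[] args) throws Exception {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        while (true) {
            long heapUsedMb = mem.getHeapMemoryUsage().getUsed() / (1024 * 1024);
            // VmRSS in /proc/self/status is the resident size of the whole process,
            // including native allocations that the heap counters never see.
            long rssMb = -1;
            for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
                if (line.startsWith("VmRSS:")) {
                    rssMb = Long.parseLong(line.replaceAll("[^0-9]", "")) / 1024;
                }
            }
            System.out.println("heap used: " + heapUsedMb + " MB, process RSS: " + rssMb + " MB");
            Thread.sleep(10000);
        }
    }
}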

On Jul 6, 3:26 pm, <jp.lora...@cfyar.com> wrote:

Try to find out which kind of memory the heap is piling up (Perm, Old, New).
Tomcat is a notorious PermGen leaker.
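
One quick way to get that breakdown from inside the JVM (instead of an external
tool) is to dump the platform memory pools; a minimal sketch, with the caveat
that pool names vary by collector and JDK version:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PoolUsage {
    public static void main(String[] args) {
        // Print each pool (Eden, Survivor, Old/Tenured Gen, Perm Gen, ...) so you
        // can see which generation, if any, is actually filling up.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            long usedMb = u.getUsed() / (1024 * 1024);
            String maxMb = u.getMax() < 0 ? "undefined" : (u.getMax() / (1024 * 1024)) + " MB";
            System.out.println(pool.getName() + ": used " + usedMb + " MB, max " + maxMb);
        }
    }
}

If nothing on that list grows while the process keeps getting bigger, the leak
is outside the managed heap.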

-------- Original Message --------
Subject: Re: Memory problems
From: lmader <lmaderintre...@gmail.com>
Date: Wed, July 06, 2011 4:52 pm
To: users <us...@elasticsearch.com>

We are seeing the same issue. That is, running the elasticsearch server
on linux with the sun jdk, the application memory seems constant, but
the swap memory steadily grows over the course of a week or so, until
we run out of swap space.

Additionally, I believe that our elastic client that runs in a Tomcat
webapp is leaking non-heap memory. After a period of time, the
process memory (but not the JVM memory) starts to grow steadily until
we have to restart the tomcat server. We never get a java "out of memory"
error; instead the java process eventually consumes all of
system memory.

We have been doing our best to isolate this, and it really does seem
to be the elastic client that is leaking the memory. Perhaps the
server side swap memory growth is related.

I think we need help with this one.
Thanks,
Lar

20 hours have passed and I can confirm that Shay has fixed the memory leak.

See my new munin graph here: http://i.imgur.com/MCGW0.png

If you can't upgrade your servers to use the latest code from the repository,
you can also do what Vertice suggested [1] and pass the

-Des.monitor.jvm.enabled=false

flag to the binary.

[1] Memory leak · Issue #1075 · elastic/elasticsearch · GitHub
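
For completeness, here is roughly what that looks like; treat it as a sketch,
since the exact startup options differ a little between 0.x releases:

bin/elasticsearch -Des.monitor.jvm.enabled=false

or, equivalently, set it in config/elasticsearch.yml:

monitor.jvm.enabled: false

Both simply stop the scheduled JVM monitor from running, so the offending call
is never made.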

Kind Regards,
-Mathew Davies.

Could this leak have affected the client as well?

Thanks,
Lar

Yes, since the scheduled JVM monitor logger that uses the offending java
call is also started when using a node client. Not so, btw, when using the
transport client.
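
For an embedded node client that can't move to the patched build right away,
the same monitor can be switched off through the client's settings. A rough
sketch against the 0.x Java API from memory (class and method names may differ
slightly between releases, so double-check against your version):

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;

public class NodeClientWithoutJvmMonitor {
    public static void main(String[] args) {
        // Build a client node with the scheduled JVM monitor disabled -- the same
        // effect as passing -Des.monitor.jvm.enabled=false to the JVM.
        Node node = NodeBuilder.nodeBuilder()
                .client(true)
                .settings(ImmutableSettings.settingsBuilder()
                        .put("monitor.jvm.enabled", false)
                        .build())
                .node();
        Client client = node.client();
        // ... use the client as usual ...
        node.close();
    }
}

The transport client never starts that monitor, so it needs no such setting.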

Ok, that's us (we're using the node client). This sounds extremely
promising. We'll test out the latest release and let you know if this
solves it for us.

As always, thanks so much!
Lar

So the fix really consists of doing the reflection magic once and for all,
instead of doing it at each call?
I just want to make sure, as I too use such a technique in a frequently
called function.

--
Olivier Favre

www.yakaz.com

No, it has nothing to do with reflection. It's simply that calling the
getLastGcInfo method on the GC MXBean leaks native memory on every call.
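
For anyone curious what the offending call looks like, here is a minimal
standalone sketch (not the actual elasticsearch code) of that kind of scheduled
polling: each pass asks the com.sun.management GC MXBean for its last-GC
details, and on the affected JDK builds every such call leaked a little native
memory, so a process doing this on a schedule grows steadily outside the heap.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcInfoPoller {
    public static void main(String[] args) throws Exception {
        while (true) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                // On HotSpot the platform beans implement the com.sun.management
                // extension, which exposes getLastGcInfo() -- the call in question.
                if (gc instanceof com.sun.management.GarbageCollectorMXBean) {
                    com.sun.management.GcInfo info =
                            ((com.sun.management.GarbageCollectorMXBean) gc).getLastGcInfo();
                    if (info != null) {
                        System.out.println(gc.getName() + ": last GC took " + info.getDuration() + " ms");
                    }
                }
            }
            Thread.sleep(1000);
        }
    }
}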
