ES/Lucene eating up entire memory!

Hi,

I have a single-node ES setup (50 GB memory, 500 GB disk, 4 cores) and I run
the Twitter river on it. I've set ES_HEAP_SIZE to 5g. However, when I run
"top", the ES process shows VIRT at around 34g, which I assume is the maximum
mapped memory. The %MEM, though, always hovers around 10%.

However, within a few days of a reboot, the used memory (shown in the third
line of "top") keeps climbing from 10g to almost 50g, at which point my other
databases start misbehaving. Below is a snapshot of "top". This happens even
though VIRT and %MEM still hover around the same 34g and 10%, respectively.

Please help me understand where my memory is going over time! My one guess
is that Lucene is eating it up. How do I remedy it?

Thanks in advance!

https://lh3.googleusercontent.com/-zD9y4f2Eqqk/VRhdtX2XtTI/AAAAAAAAAN8/aq8-wxm2bBg/s1600/top.png
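
(For reference, the numbers above are from plain "top". The same views can be
pulled in batch mode with something like the following; the pgrep pattern is
just a guess at how the ES java process shows up in the process list.)

    # System summary lines, including the Mem/Swap "third line" mentioned above
    top -b -n 1 | head -5

    # VIRT / RES / %MEM for just the Elasticsearch JVM
    top -b -n 1 -p "$(pgrep -d, -f elasticsearch)"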


Do you know what virtual memory is? You have terabytes of it.


You should read:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

Maybe this will let you figure out what's going on! VIRT says nothing about
actual memory consumption; you should look at RES.
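
For example, something like this shows how much of that VIRT figure is just
address space for memory-mapped index files versus memory that is actually
resident (the pgrep pattern is an assumption about how your node appears in
the process list):

    ES_PID=$(pgrep -f elasticsearch | head -n 1)
    pmap -x "$ES_PID" | tail -n 1                  # totals: virtual size vs. RSS
    grep -E 'VmSize|VmRSS' /proc/"$ES_PID"/status  # same numbers from the kernel's view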

Thanks,
Uwe


Thanks, Uwe. As I mentioned earlier, I did guess that VIRT doesn't indicate
RAM consumption.

What I am concerned about is the third row of "top", which shows memory and
indicates that 43g of the total 50g is in use. Once this number crosses 45g,
my other databases start behaving badly.

The problem is that even after I kill all the processes, this number doesn't
go down (attaching a snapshot of "top" taken after killing all the processes).
Right now I reboot the system every three days, which is roughly how long it
takes for something to gradually fill the memory (I have no clue what that
something is).

I assume the max file descriptors setting wouldn't be the culprit here? I
haven't changed it yet.


You need to read up a bit on how memory is allocated in Linux.

On an Elasticsearch or database server, and yours appears to be both, you
actually want that free column to be close to 0: all available free memory
should be used to cache files. In your snapshot you have 35GB of file cache
listed under the "cached" heading. Memory listed under cached is essentially
free memory that is temporarily being used to cache files until something else
requests it. This is how Linux makes efficient use of your memory, leveraging
free memory for file cache while still having it available when you need it.
So when determining whether your box is out of memory, you need to sum free
and cached.
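
You can check this directly from a shell (depending on your procps version,
free prints either a "-/+ buffers/cache" line or an "available" column; both
answer the same question):

    # The "free" figure on the "-/+ buffers/cache" line is what is really available
    free -m

    # Equivalent arithmetic straight from the kernel (values are in kB)
    awk '/^(MemFree|Buffers|Cached):/ {sum += $2} END {print sum " kB really available"}' /proc/meminfo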

This is precisely why it is recommended that Elasticsearch be allocated only
50% of the memory on the box for heap. In your case, where you also have
databases running, it should be 50% of the memory you have available for
Elasticsearch. For that matter, you should apply the same basic rule (50%) to
your database unless it has some other file-caching mechanism of its own. For
instance: you have 50GB of RAM; assuming MySQL and Elasticsearch, and that you
want to divide the RAM equally, that is 25GB each. Elasticsearch would then be
allowed to use 25GB, of which about 12GB should be allocated to heap, with the
balance left to the OS for file caching on behalf of Elasticsearch. Assuming
MySQL with MyISAM, the same split applies: 12GB to MySQL and 12GB to the OS
for file-system caching of the MyISAM tables. If you are using InnoDB things
are different, but that is well outside the scope of this discussion.
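
Concretely, a split along those lines might look something like this; the file
locations and the MySQL parameter are assumptions about your particular
install, so treat it as a sketch rather than a recipe:

    # Elasticsearch: ~12GB heap, leaving the rest of its 25GB share to the OS file cache
    export ES_HEAP_SIZE=12g    # or set it wherever your init script / service config reads it from

    # MySQL with MyISAM: key_buffer_size only caps the index cache; table data
    # is cached by the OS page cache, so keep this comfortably inside MySQL's share.
    # In my.cnf:
    #   [mysqld]
    #   key_buffer_size = 8G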

So the fact that you have 35GB of files being cached is a very good thing. It
means a large amount of your data is cached, and it means you have ample free
memory, well beyond the 12GB a 50/50 split would demand. The 12GB of free you
have now probably came from the processes you killed; I think you meant that
this was Elasticsearch, though you were not specific.

The one concern I see looking at your top is that you have a large swap, and
that some of it has been used. This is a sign that at some point you had
memory pressure; it is the only such sign I see in your snapshot. That
pressure was not significant, but any swapping will destroy the performance of
a database or of Elasticsearch. Many people go to the extreme of disabling
swap entirely, because performance while swapping is so poor that the system
is effectively unusable, and by the time you had made a dent in a swap that
size you would have wanted to reboot the box anyway. My approach is to keep a
small swap available, so that I can see whether the system ever got to the
point of needing it, and to potentially buy a moment of time.
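
If you want to act on that, a minimal sketch (the elasticsearch.yml option is
the name used by the 1.x releases, and mlockall also needs the memlock ulimit
raised for the ES user):

    swapon -s                      # how much swap exists and how much is in use
    sudo sysctl vm.swappiness=1    # strongly discourage swapping without removing it entirely

    # and/or pin the ES heap in RAM via elasticsearch.yml:
    #   bootstrap.mlockall: true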

If you are experiencing database slowdowns, this screenshot does not show that
they are due to memory issues. Based on this information, I would suspect disk
I/O instead.
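
A couple of standard tools will confirm or rule that out while a slowdown is
actually happening (iostat comes from the sysstat package on most
distributions):

    iostat -x 5     # per-device utilisation; watch %util and await climb
    vmstat 5        # high "wa" (I/O wait) and "b" (blocked processes) also point at the disks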


Also, we don't recommend running ES alongside other apps. As you can see,
contention is an issue and you will have to pay the price there.


Thanks, Aaron. Your post was very informative.
Can you recommend any blog posts, articles, etc. where I could read more on
this topic?

Thanks again for your help.


Thanks, Mark. I separated the processes out onto different servers, so now ES
has a server to itself.
I made the changes yesterday and it has been stable since then.


These cover first of all the what, not so much the why.

In order of increasing depth (the first gives only light coverage of the
topic):

http://linux-mm.org/Low_On_Memory
http://www.linuxhowtos.org/System/Linux%20Memory%20Management.htm
http://www.tldp.org/LDP/tlk/mm/memory.html

This one is also pretty good, but it deals more with memory management inside
ES than outside it:
http://jprante.github.io/2012/11/28/Elasticsearch-Java-Virtual-Machine-settings-explained.html
