Bigdesk and top - ES memory usage output

The attached screenshot.png shows the difference between the memory usage
reported by Bigdesk and top.
The heap for ES is set with ES_HEAP_SIZE=3g.

  • Is the actual memory usage the 'Used' value under 'Heap Mem' in BigDesk?
  • I assume top shows 3G because ES_HEAP_SIZE=3g is set. But the ES server
    uptime is 8 hours, and until an hour ago top showed 2.6G as ES memory
    usage. Why the difference? When do we need to get concerned about low
    memory?
  • The logs show no OutOfMemory errors so far.

ES - 0.90.0
Bigdesk - master

Regards,


UPDATE

The current outputs are in the attached screenshot.

ES_HEAP_SIZE=3g
ES memory per top = 3.1G
ES memory per BigDesk = 2.2G (Used), 2.9G (Committed)

How should I read the actual memory usage?

Regards,


Hello,

You can see that ES_HEAP_SIZE, the memory usage reported by top, and the
committed heap memory from BigDesk are approximately the same. I'm not 100%
sure, but the difference may come from:

  • the way they interpret how many M make a G: ES_HEAP_SIZE might consider
    1G=1000M, while for BigDesk 1G=1024M
  • the way they round values
  • the fact that ES allocates some memory outside the heap (which might
    explain top > BigDesk)

The "used" memory that you can see in BigDesk is how much of the heap is
actually used by ES. You can see that the value goes down every now and
then because the Garbage Collector kicks in to reclaim some memory. This
helps you know if the value of your ES_HEAP_SIZE is chosen well: you don't
want to allocate too much heap because that memory might be used by the OS
for caching, and you don't want to allocate too little heap because the
Garbage Collector will struggle to keep ES from running out of memory. From
your BigDesk screenshot, if this is how you normally run ES, it looks OK.
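
For reference, ES_HEAP_SIZE sets both the minimum and maximum heap, so the
committed size stays fixed from startup. A minimal sketch of where to set it
(paths assume the standard startup script and packages):

    # elasticsearch.in.sh, or /etc/default/elasticsearch (DEB),
    # or /etc/sysconfig/elasticsearch (RPM):
    ES_HEAP_SIZE=3g    # the script turns this into -Xms3g -Xmx3g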

The OS has no idea how much memory ES actually uses ('Used' in BigDesk). ES
runs in a Java Virtual Machine that allocates some memory from the OS (the
heap), and it's up to ES and the JVM to manage that memory, while the OS (see
top) treats the whole thing as one big lump of allocated memory. The OS won't
give any of that memory to other applications, whether ES uses it or not.
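
If you want to read the same numbers BigDesk plots without the UI, they come
from the node stats API. A minimal sketch, assuming ES listens on
localhost:9200 (field names may differ slightly between ES versions):

    curl -s 'localhost:9200/_nodes/stats?jvm=true&pretty'
    # under nodes.<node_id>.jvm.mem, look for:
    #   heap_used      -> the "Used" line in BigDesk
    #   heap_committed -> the "Committed" line in BigDesk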

Hope this helps.

Best regards,
Radu

http://sematext.com/ -- Elasticsearch -- Solr -- Lucene


  • the fact that ES allocates some memory outside the heap (which might
    explain top > BigDesk)

I see this happening now. Please find the attached screenshot, which is a
report after 18 hours of continuous indexing. ES, as seen by the OS, now
occupies 3.3G of memory, but BigDesk still reports 2.9G (the original
setting).

What concerns me is that ES would progressively allocate more memory had
there been more to index. This would deplete the server's memory, and I'm
guessing BigDesk would still report the present 'Committed' figure. Why does
ES keep allocating memory outside the configured 3g heap even when there is
room to grow inside it: 2.9G (ES committed) - 1.3G (ES used) = 1.6G?

Thanks, Radu.


Hello,

You can limit the size of the direct memory ES uses by setting
ES_DIRECT_SIZE in elasticsearch.in.sh (/etc/default/elasticsearch if you
installed from the DEB package, /etc/sysconfig/elasticsearch if you installed
from the RPM package).
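
For example (a sketch; the script passes the value to the JVM as its direct
memory limit):

    # /etc/default/elasticsearch or /etc/sysconfig/elasticsearch:
    ES_DIRECT_SIZE=2g    # becomes -XX:MaxDirectMemorySize=2g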

Are you using in-memory indices? If yes, then it's normal to see an
increased amount of direct memory being used as they grow.

If not, I'm not sure what else ES uses direct memory for; maybe someone else
can shed some light. I would assume it's for buffers, if you use the default
NIO FS as a store (http://www.elasticsearch.org/guide/reference/index-modules/store/)
for your indices on disk. You can use mmapfs instead, by setting
"index.store.type" to "mmapfs" - which is recommended for 64-bit machines
anyway - in your configuration or in the index settings. I think a restart of
your node is required for this to take effect.
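
For example (a sketch; "myindex" is a hypothetical index name):

    # node-wide, in elasticsearch.yml (restart required, as noted above):
    index.store.type: mmapfs

    # or per index, at creation time:
    curl -XPUT 'localhost:9200/myindex' -d '{
      "settings": { "index.store.type": "mmapfs" }
    }'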

Best regards,
Radu

http://sematext.com/ -- Elasticsearch -- Solr -- Lucene


You can limit the size of the direct memory ES uses by setting
ES_DIRECT_SIZE in elasticsearch.in.sh (/etc/default/elasticsearch if you
installed from the DEB package, /etc/sysconfig/elasticsearch if you installed
from the RPM package).

I've set ES_DIRECT_SIZE = Heap Size = 3072M

Are you using in-memory indices? If yes, then it's normal to see an
increased amount of direct memory being used as they grow.

I haven't configured index storage specifically, so I should be using the
default 'filesystem' storage.
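
For the record, the effective store type can be checked per index; a sketch
with a hypothetical index name:

    curl -s 'localhost:9200/myindex/_settings?pretty'
    # index.store.type only appears if it was set explicitly;
    # if it's absent, the index uses the default store (NIO FS)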

If not, I'm not sure what else ES uses direct memory for; maybe someone else
can shed some light. I would assume it's for buffers, if you use the default
NIO FS as a store (http://www.elasticsearch.org/guide/reference/index-modules/store/)
for your indices on disk. You can use mmapfs instead, by setting
"index.store.type" to "mmapfs" - which is recommended for 64-bit machines
anyway - in your configuration or in the index settings.

Set "index.store.type: mmapfs".

ES still continues to use more memory. A new batch of test imports is in
progress, and top reports 4.2G usage even though ES_HEAP_SIZE was set to only
3072M. BigDesk again reports Committed = 2.0G, and 'Used' varies from 300M to
2.4G, well within ES_HEAP_SIZE. There are no errors in the logs either.

I shall open a new thread in this regard.

Thanks, Radu.


Hello,

ES_DIRECT_SIZE will limit the amount of direct memory, which is the memory
allocated outside the heap. So if you set ES_HEAP_SIZE to 3g and
ES_DIRECT_SIZE to 2g, for example, you can expect ES to use up to 5GB.
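
Put differently, a sketch of the JVM flags those two settings translate to:

    # ES_HEAP_SIZE=3g and ES_DIRECT_SIZE=2g roughly become:
    java -Xms3g -Xmx3g -XX:MaxDirectMemorySize=2g ...
    # worst case ~5GB, plus normal JVM overhead (thread stacks, permgen, code cache)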

Also, if you're worried about running out of memory, the best way to confirm
or dismiss your worries is to run a performance test: throw more requests at
a test ES than you normally handle in production and see what happens. I
wouldn't rely on top for accurate memory usage; I would trust "out of memory"
exceptions in the ES log, or slow indexing/search performance, though.
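
A crude sketch of such a test, if you want something quick before building a
proper one (the index name "loadtest" and the document body are hypothetical):

    # index a stream of small documents and watch BigDesk while it runs
    for i in $(seq 1 100000); do
      curl -s -XPOST 'localhost:9200/loadtest/doc' \
           -d "{\"n\": $i, \"payload\": \"some test data\"}" > /dev/null
    done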

Best regards,
Radu

http://sematext.com/ -- Elasticsearch -- Solr -- Lucene


ES_DIRECT_SIZE will limit the amount of direct memory, which is the memory
allocated outside the heap. So if you set ES_HEAP_SIZE to 3g and
ES_DIRECT_SIZE to 2g, for example, you can expect ES to use up to 5GB.

Thanks for the description.

Also, if you're worried about running out of memory, the best way to confirm
or dismiss your worries is to run a performance test: throw more requests at
a test ES than you normally handle in production and see what happens. I
wouldn't rely on top for accurate memory usage; I would trust "out of memory"
exceptions in the ES log, or slow indexing/search performance, though.

All the while I was running the test, BigDesk never showed 'Used' memory
coming anywhere near 'Committed', and no OutOfMemory errors popped up during
the run either.
Yes, I'm also starting to distrust top: in the final moments of indexing, top
reported 5.1G usage for ES alongside 4.5G of free memory on my 8G machine.
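
Perhaps top is counting pages that ES shares with the OS page cache (for
example mmapped index files) in its resident size, which would explain 5.1G
"used" plus 4.5G "free" on an 8G machine. Two ways to sanity-check, where
<es-pid> is a placeholder for the ES process id:

    free -m                        # the "-/+ buffers/cache" line is what's truly unavailable
    pmap -x <es-pid> | tail -n 1   # total mapped vs resident size for the ES process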

Regards,
Subin.
