Constantly increasing memory outside of Java heap

On Thursday, March 13, 2014 5:07:13 PM UTC-7, Jos Kraaijeveld wrote:

Hey,

I've run into an issue which is preventing me from moving forward with ES.
I've got an application where I keep 'live' documents in Elasticsearch.
Each document is a combination of data from multiple sources, which are
merged together using doc_as_upsert. Each document has a TTL which is
refreshed whenever new data comes in for it, so documents expire when no
data source has reported anything about them for a while. The number of
documents generally doesn't exceed 15,000, so it's a fairly small data set.

Whenever I leave this running, memory usage on the box slowly but surely
creeps up, seemingly unbounded, until there is no resident memory left.
The Java process stays nicely within its ES_MAX_HEAP bounds, but the
mapping from storage on disk to memory seems to be ever-increasing, even
when the number of 'live' documents drops to 0.

I was wondering if anyone has seen such a memory problem before, and
whether there are ways to debug memory usage that is unaccounted for by
the processes shown in 'top'.
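For readers who want to reproduce the setup, this is roughly the pattern
described above, sketched in Python with the requests library against the
1.x REST API. The index name 'live_docs', the type 'doc', the five-minute
default TTL, and the sample fields are all made up for illustration:

    import requests

    ES = "http://localhost:9200"

    # Hypothetical index with per-document expiry; _ttl must be enabled in
    # the mapping for documents to be reaped after their TTL elapses.
    requests.put(ES + "/live_docs", json={
        "mappings": {
            "doc": {"_ttl": {"enabled": True, "default": "5m"}}
        }
    })

    def push_update(doc_id, partial):
        # Merge a partial document from one source; doc_as_upsert creates
        # the document if it does not exist yet.
        return requests.post(
            ES + "/live_docs/doc/%s/_update" % doc_id,
            json={"doc": partial, "doc_as_upsert": True},
        )

    push_update("host-42", {"cpu_load": 0.73, "source": "collector-a"})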


On 14 March 2014 11:11, Jos Kraaijeveld mail@kaidence.org wrote:

I forgot to mention, I'm running Elasticsearch 1.0.1 on Ubuntu 12.04 with
24GB of available RAM.


How much heap, which Java version, and how big are your indexes?

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com
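For reference, all three answers can be pulled from the node and index
APIs. A minimal sketch in Python with requests, assuming a node on
localhost:9200 and the 1.x endpoint layout:

    import requests

    ES = "http://localhost:9200"

    # JVM version and configured maximum heap per node.
    jvm = requests.get(ES + "/_nodes/jvm").json()
    for node in jvm["nodes"].values():
        print(node["name"], node["jvm"]["version"],
              node["jvm"]["mem"]["heap_max_in_bytes"])

    # On-disk store size of every index, primaries and total.
    stats = requests.get(ES + "/_stats/store").json()
    for name, idx in stats["indices"].items():
        print(name,
              idx["primaries"]["store"]["size_in_bytes"],
              idx["total"]["store"]["size_in_bytes"])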


On Thursday, March 13, 2014 5:17:20 PM UTC-7, Zachary Tong wrote:

I believe you are just witnessing the OS caching files in memory. Lucene
(and therefore by extension Elasticsearch) uses a large number of files to
represent segments. TTL + updates will cause even higher file turnover
than usual.

The OS manages all of this caching and will reclaim it for other processes
when needed. Are you experiencing problems, or just witnessing memory
usage? I wouldn't be concerned unless there is an actual problem that you
are seeing.
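One quick way to test that theory on Linux is to watch where the growth
actually lands in /proc/meminfo: page-cache growth shows up under Cached
(and memory-mapped index files also under Mapped) rather than in any
process's heap. A small sketch:

    # Print the /proc/meminfo fields that distinguish "really used" memory
    # from reclaimable page cache. Values are reported in kB.
    fields = ("MemTotal", "MemFree", "Buffers", "Cached", "Mapped", "Slab")
    with open("/proc/meminfo") as f:
        info = dict(line.split(":", 1) for line in f)
    for name in fields:
        print(name, info[name].strip())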


On Thursday, March 13, 2014 8:23:34 PM UTC-4, Jos Kraaijeveld wrote:

@Mark:
The heap is set to 2GB, using mlockall. The problem occurs with both
OpenJDK 7 and Oracle JDK 7, both on their latest versions. I have one
index, which is very small:

index: {
    primary_size_in_bytes: 37710681,
    size_in_bytes: 37710681
}

@Zachary: Our systems are set up to alert when memory is about to run out.
We use Ganglia to monitor our systems, and it reports the memory as
'used' rather than 'cached'. I will try to just let it run until memory
runs out and report back after that, though.


On Thursday, March 13, 2014 8:31:18 PM UTC-4, Zachary Tong wrote:

Cool, curious to see what happens. As an aside, I would recommend
downgrading to Java 1.7.0_u25. There are known bugs in the most recent
Oracle JVM versions which have not been resolved yet. u25 is the most
recent safe version. I don't think that's your problem, but it's a good
general consideration anyway.

-Z


On Thursday, March 13, 2014 5:32:43 PM UTC-7, Zachary Tong wrote:

Also, are there other processes running which may be causing the problem?
Does the behavior only happen when ES is running?


On Thursday, March 13, 2014 5:35:17 PM UTC-7, Jos Kraaijeveld wrote:

There are no other processes running except for ES and the program which
posts the updates. Memory increases steadily while the updater is running,
but stays flat (and is never released, no matter how much has been used)
whenever ES is idle.
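One way to narrow this down is to log three numbers side by side while the
updater runs: the JVM heap in use (from the nodes stats API), the
Elasticsearch process RSS, and the kernel's Cached figure. Growth that
shows up in neither the heap nor the RSS points at the page cache. A rough
sketch, assuming a single local node; the PID lookup here is simplistic:

    import time, requests, subprocess

    ES = "http://localhost:9200"

    # Crude PID lookup for the single local Elasticsearch JVM.
    pid = int(subprocess.check_output(
        ["pgrep", "-f", "org.elasticsearch.bootstrap"]).decode().split()[0])

    def snapshot():
        # Heap actually in use, as reported by the node itself.
        nodes = requests.get(ES + "/_nodes/stats/jvm").json()["nodes"]
        heap = list(nodes.values())[0]["jvm"]["mem"]["heap_used_in_bytes"]
        # Resident set size of the ES process, converted from kB to bytes.
        with open("/proc/%d/status" % pid) as f:
            rss = next(int(l.split()[1]) for l in f if l.startswith("VmRSS")) * 1024
        # Kernel page cache, converted from kB to bytes.
        with open("/proc/meminfo") as f:
            cached = next(int(l.split()[1]) for l in f if l.startswith("Cached")) * 1024
        return heap, rss, cached

    while True:
        print("heap=%d rss=%d cached=%d" % snapshot())
        time.sleep(60)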


On Tuesday, March 18, 2014 12:34:33 AM UTC+2, Jos Kraaijeveld wrote:

As a follow-up, when the server is nearing maximum memory, the memory use
stops increasing. This would indeed support Zachary's caching theory,
although I'm still confused as to why it shows up as 'in use' memory rather
than 'cached' memory. In any case, it does not block me right now. It's
just peculiar, and I'll revive this thread once I have a better explanation.


On Tue, Apr 8, 2014 at 1:31 PM, Yitzhak Kesselman ikesselman@gmail.com wrote:

Hi,

I have experienced the same behavior when trying to load a large amount of
data. If you clear the file system cache (here is a link to a tool:
http://www.delphitools.info/2013/11/29/flush-windows-file-cache/), the
memory drops back to the defined heap size. However, this still looks like
wrong behavior. Is there a way to block the shareable memory up front?

All the best,
Yitzhak
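For the Linux boxes in this thread, the equivalent experiment to that
Windows tool is dropping the page cache by hand (root required); if 'used'
memory falls back toward the heap size afterwards, the growth really was
reclaimable cache. A small sketch:

    import subprocess

    # Flush dirty pages to disk first, then ask the kernel to drop the
    # page cache. Writing "1" drops page cache only; "3" also drops
    # dentries and inodes.
    subprocess.check_call(["sync"])
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("1\n")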


You can limit the off-heap space used by setting ES_DIRECT_SIZE.

--
Ivan
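A minimal sketch of that, assuming the stock 1.x startup script translates
ES_DIRECT_SIZE into -XX:MaxDirectMemorySize; the path and the 512m value
are examples only, not recommendations:

    import os, subprocess

    # Start Elasticsearch with a cap on off-heap (direct) buffer allocations.
    env = dict(os.environ, ES_DIRECT_SIZE="512m")
    subprocess.Popen(["bin/elasticsearch"], env=env)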
