Hey guys,
We're running into some problems under heavy write, nominal read volume:
once the Lucene memory-mapped files have exhausted available physical
memory, segments have to be paged back in from disk.
Are there any configs available to control how much physical memory is
available to MMapDirectory?
There may not be much you can do here beyond giving the system more RAM and
moving to SSD.
How bad is the problem?
You may try to limit direct memory at the JVM level using
-XX:MaxDirectMemorySize (the default is unlimited). See also ES_DIRECT_SIZE in
http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-service.html#_linux
I recommend at least 2GB.
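For example, a minimal sketch of how these would be set (the config file
paths depend on your packaging, and 2g is illustrative, not a tuned value):

    # /etc/default/elasticsearch (DEB) or /etc/sysconfig/elasticsearch (RPM),
    # read by the service scripts described at the link above:
    ES_DIRECT_SIZE=2g

    # Or pass the JVM flag directly when starting Elasticsearch:
    ES_JAVA_OPTS="-XX:MaxDirectMemorySize=2g" bin/elasticsearch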
Jörg
I'm out - I have no experience with EC2, and I avoid third-party servers at
all costs. Maybe the 120G of RAM is affected by swap or memory overcommit.
Do not forget to check memlock and memory ballooning. Chances are slim that
you can control host settings as a guest in a virtual server environment.
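A quick way to check the memlock side (a sketch assuming a Linux guest and
the 1.x setting name; mlockall keeps the JVM heap from being swapped out):

    # Max locked-memory limit for the user running Elasticsearch
    # ("unlimited" is what you want):
    ulimit -l

    # elasticsearch.yml - lock the heap into RAM:
    bootstrap.mlockall: true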
Jörg
On Sat, Mar 14, 2015 at 5:06 PM, Lindsey Poole lpoole@gmail.com wrote:
btw - we're on EC2 i2.4xlarge hosts, so we have ~120GB of RAM and SSDs.
On Saturday, March 14, 2015 at 9:04:34 AM UTC-7, Lindsey Poole wrote:
I did see ES_DIRECT_SIZE, but it seems to be ineffective.
I will try setting -XX:MaxDirectMemorySize directly.
Just to close this out - we disabled EC2's health checks and spent some time
tuning the batch thread-pool size to keep from overrunning the cluster once
the memory-mapped cache exceeds available physical memory. This was
successful (we're restricted to a surprisingly small thread-pool size of 3).
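(If the pool in question were Elasticsearch's own bulk thread pool rather
than a client-side one, a roughly equivalent server-side cap in
elasticsearch.yml on 1.x would be the sketch below; the "batch" pool here is
our guess, not confirmed in the thread:)

    # Hypothetical equivalent, assuming "batch" means the bulk thread pool:
    threadpool.bulk.size: 3
    threadpool.bulk.queue_size: 50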
Thanks.
On Saturday, March 14, 2015 at 1:05:49 PM UTC-7, Mark Walkom wrote:
Can you provide more info on what the error/problem is? Logs might help.