Hi Shay,
Why do you recommend only using half? I read somewhere (I can't remember
where) that 3/4 was safe, even allowing for OS processes and file system
caching. Have you run tests on this, or heard of someone who has?
Thanks,
Mark
On Monday, March 5, 2012 10:33:21 PM UTC-8, kimchy wrote:
Don't use 25GB on a machine that only has 20GB of RAM. It should always
be lower (since swapping does not work well for GC-based systems).

On Tuesday, March 6, 2012 at 7:44 AM, Radu Gheorghe wrote:
Thanks, Shay!
I will try with those settings. My nodes have 8 cores and 20GB of RAM.
I've started a test with 15 to 25GB min and max sizes (don't have the
results yet). I will try again with ES_HEAP_SIZE=10GB and see what the
results are.

On March 5, 17:21, Shay Banon kim...@gmail.com wrote:
Yea, so I suggest setting the memory settings for ES. In 0.19, there is a
simple env var that you can set called ES_HEAP_SIZE (see more here:
Elasticsearch Platform — Find real-time answers at scale | Elastic,
which sets both the min and max to the same value). I recommend you set it
to about half the memory you have on the machines (can you share some
details on those?).

On Monday, March 5, 2012 at 9:27 AM, Radu Gheorghe wrote:
- Did you see any OOM (Out of Memory) failures in the logs? Can you use
BigDesk to see how memory is used by the instances?

I've checked the logs and BigDesk, and I haven't seen any relevant
symptom. But it might have been me failing here. I will double-check the
logs tomorrow for OOM errors and report back.

I've double-checked the logs, and I actually got quite a lot of these:
java.lang.OutOfMemoryError: Java heap space
I can't believe I missed them in the first place.
So, I will try again with a bigger ES_MAX_MEM in bin/elasticsearch.in.sh.
Would it hurt if I set it to something huge like "20g"?
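
For anyone following along, the half-of-RAM suggestion from the thread can be
sketched as a small shell snippet. This is only an illustration, not official
guidance: it assumes a Linux host (it reads /proc/meminfo), and that
ES_HEAP_SIZE (available from 0.19) sets both the min and max heap to the same
value, as Shay describes above.

```shell
#!/bin/sh
# Hedged sketch: size the Elasticsearch heap to about half of physical RAM,
# per the advice earlier in the thread. Assumes Linux (/proc/meminfo exists).

total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)  # total RAM in KiB
half_gb=$(( total_kb / 1024 / 1024 / 2 ))                # half of RAM, in GiB

# ES_HEAP_SIZE sets both -Xms and -Xmx in 0.19+; it should never exceed
# physical RAM, since swapping works poorly for GC-based systems.
export ES_HEAP_SIZE="${half_gb}g"
echo "ES_HEAP_SIZE=$ES_HEAP_SIZE"
```

On a 20GB node this yields roughly ES_HEAP_SIZE=10g (integer division rounds
down, and MemTotal is slightly below the nominal size), matching Shay's
suggestion. The 3/4 figure Mark asks about would be computed the same way with
`* 3 / 4` in place of `/ 2`.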