Regarding the memory settings here: http://www.elasticsearch.org/guide/reference/setup/installation.html
Can anyone recommend the equivalent of ulimit -l for Solaris?
Is it necessary / possible to apply this setting to lock the memory in Solaris?
When I execute the ulimit command it returns "unlimited", but I'm not
sure if that has anything to do with the "max locked memory" setting.
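For what it's worth, Solaris has no direct equivalent of ulimit -l; locked memory is governed by a project resource control instead. A sketch of how to inspect and raise it, assuming Solaris 10 or later (the 4GB cap and the "default" project name are illustrative):

```shell
# On Linux, the locked-memory cap is a per-process ulimit:
ulimit -l    # reports "max locked memory" for the current shell

# On Solaris 10+, locked memory is a project resource control.
# Inspect the cap that applies to the current shell's project:
prctl -n project.max-locked-memory $$

# Raise the cap for a project (example: 4GB on the "default" project):
projmod -s -K "project.max-locked-memory=(privileged,4GB,deny)" default
```

Note that plain `ulimit` with no flag reports the file-size limit, which is why it can say "unlimited" while locked memory is still capped.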
On Jun 18, 4:06 pm, davrob2 davirobe...@gmail.com wrote:
Actually, I did miss this big post on dealing with these issues:
So I will take this approach and see what happens.
Thanks for all your suggestions.
On Jun 17, 5:55 pm, David Williams williams.da...@gmail.com wrote:
when using mlockall, you should also use a fixed size heap (min and
max are equal) so it either allocates all of it, or won't start.
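Concretely, that means pinning min and max heap to the same value in elasticsearch.in.sh; a minimal sketch (the 2g value is illustrative):

```shell
# elasticsearch.in.sh — pin min and max heap to the same size so the
# JVM allocates the whole heap up front; with mlockall the process
# then either locks all of it or fails fast at startup.
ES_MIN_MEM=2g
ES_MAX_MEM=2g
```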
An alternative is to just run without swap. On a dedicated Elasticsearch
server, anything that gets moved to swap is a bad thing.
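On Linux, "running without swap" can be done like this (a sketch; requires root, and the fstab step is manual):

```shell
# Disable all active swap devices immediately:
sudo swapoff -a

# Discourage the kernel from swapping if swap is ever re-enabled:
sudo sysctl vm.swappiness=0

# To make this survive a reboot, comment out the swap entries
# in /etc/fstab by hand.
```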
On Fri, Jun 17, 2011 at 8:07 AM, Colin Surprenant wrote:
afaik this should only happen at startup when ES tries to allocate its memory.
On Fri, Jun 17, 2011 at 10:53 AM, davrob2 davirobe...@gmail.com wrote:
mlockall seems quite dangerous:
Note, this is an experimental feature, and might cause the JVM or
shell session to exit if it fails to allocate the memory (because not
enough memory is available on the machine).
Can you tell me any more about what it does and how it works?
On Jun 17, 9:45 am, Enrique Medina Montenegro e.medin...@gmail.com wrote:
Use mlockall in your configuration to see whether it helps:
(see Memory Settings section)
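In the 0.x series the switch lives in elasticsearch.yml; a minimal sketch of the setting being suggested:

```yaml
# elasticsearch.yml — ask Elasticsearch to mlockall() its heap at
# startup so the OS cannot page it out to swap
bootstrap.mlockall: true
```

This only helps if the process is actually allowed to lock that much memory (ulimit -l on Linux, the project resource control on Solaris), otherwise the lock silently fails or startup aborts.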
On Fri, Jun 17, 2011 at 10:43 AM, davrob2 davirobe...@gmail.com wrote:
Yesterday my users were reporting intermittent poor response times for
searches; at the same time, garbage collection seems to be running more
often than usual on the index (https://gist.github.com/1031079).
JVM params in elasticsearch.in.sh are ES_MAX_MEM=2g.
My first instinct was to increase the memory to 4g. Are there any other
tips on what I should be doing to prevent this?