I've read a lot of different guides on how much memory to allocate to Elasticsearch, and everything points to 50% of available system memory (with the rest left over for the filesystem cache and whatnot). A post to the group in February (https://groups.google.com/forum/?fromgroups#!topic/elasticsearch/wo5jU0-PQ3k) mentions that with heaps under 31 GB, the JVM can use compressed 64-bit pointers. But I'm working on machines that have 148 GB of memory. Is there any official guidance on how the heap should be allocated on machines like this? Should I just start multiple nodes with sub-31 GB heaps to benefit from pointer compression, or should I still allocate 50% of system memory to a single node?
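For reference, here's roughly what I had in mind for each node in the multi-node option. I'm assuming the standard ES_HEAP_SIZE environment variable, and the PrintFlagsFinal line is just a sanity check that the JVM actually keeps compressed oops enabled at that heap size:

  # keep each node comfortably under the compressed-oops cutoff
  export ES_HEAP_SIZE=30g
  # confirm the JVM still uses compressed oops with a 30g heap
  java -Xmx30g -XX:+PrintFlagsFinal -version | grep UseCompressedOops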
Also, in the comments on this blog post (http://blog.sematext.com/2012/02/07/elasticsearch-poll/), kimchy mentions that the main reason for running multiple instances on one machine would be to keep heaps smaller. Since that's from about a year ago, is that still the case?
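If multiple smaller-heap instances per machine are still the recommendation, I'm picturing something like the following on each 148 GB box (the node names and data paths are just placeholders):

  # two nodes on one machine, each with its own name and data path
  ES_HEAP_SIZE=30g bin/elasticsearch -Des.node.name=node-a -Des.path.data=/data/es-a
  ES_HEAP_SIZE=30g bin/elasticsearch -Des.node.name=node-b -Des.path.data=/data/es-b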
On a somewhat related note, was the cluster.routing.allocation.same_shard.host setting removed? I can't seem to set it via /_cluster/settings. I guess I could always just set up allocation awareness using a custom node attribute instead ...
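For completeness, this is what I was attempting, plus the awareness fallback I mentioned (the attribute name is just an example):

  # attempt via the cluster settings API
  curl -XPUT localhost:9200/_cluster/settings -d '{
    "persistent": {
      "cluster.routing.allocation.same_shard.host": true
    }
  }'

  # fallback: tag each node in elasticsearch.yml and use allocation awareness
  #   node.box_id: box-1
  #   cluster.routing.allocation.awareness.attributes: box_id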