ES Heap and CPU Usage - bigger is better?

Hey guys,

I just wanted to ask here whether anybody has more experience with larger
Java heap sizes than I do.

Our problem is that we only have 1 big server with 256 GB RAM, 64 GB of
which is for our ES system, which indexes about 4-6k events/s. Currently I
have 3 instances running: 2 with a 32 GB Java heap and slow HDDs as targets
for older indices, and 1 with a 96 GB Java heap and fast SSD storage.

I am thinking about splitting the 96 GB instance into 2 x 32 GB instances,
because our server sometimes runs out of CPU, and according to the ES
documentation heaps larger than 32 GB lose compressed object pointers and
therefore cause much higher CPU usage.
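For what it's worth, a minimal sketch of what the split might look like on ES 1.x (node names, ports, and data paths here are made up for illustration; the heap would be set via ES_HEAP_SIZE=32g in each instance's environment):

```yaml
# elasticsearch.yml for one of the two SSD nodes (the second instance gets
# its own node name, ports, and data path); ES_HEAP_SIZE=32g is exported
# in the environment of each process before starting it.
node.name: node-ssd-1
path.data: /ssd/data1
http.port: 9201
transport.tcp.port: 9301
```

Keeping both heaps at or below roughly 31-32 GB lets the JVM keep using compressed object pointers, at the cost of running two JVMs on the same box.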

What worries me is that instances with that little RAM aren't able to
handle the indexing and search load (which is quite high).

Anyone with some experience to share?

Cheers

Stephen

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/fdffe493-961a-4158-9585-da324ede5ca6%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

I would say decrease your Java heap size to lower the pressure on GC, and
switch the store type from hybrid to mmapfs. This lets Lucene use your RAM
(via the OS page cache) to buffer the index, which should improve your
performance. Test with these settings and see how the results look. For us
this has been better than using a larger heap with the hybrid storage type.

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-store.html
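For reference, on ES 1.x this is (as far as I know) a one-line setting; shown here as static config in elasticsearch.yml, though it can also be set per index at creation time:

```yaml
# elasticsearch.yml: memory-map the Lucene index files so the OS page
# cache, rather than the JVM heap, does the caching (the 1.x default is a
# hybrid niofs/mmapfs store, depending on platform).
index.store.type: mmapfs
```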

