Elasticsearch performance tuning on Elasticsearch 1.7

(Guy93r) #1

Hi, Thanks in advance for any kind of help!
My environment has a very restrictive security policy, so sending logs or media of any kind is forbidden.

My environment is built with 3 master nodes, 3 data nodes, and an unused client node.
The master nodes are running great so I'll ignore them.
Each data node is a 200 GB RAM machine with several HDDs combined into a 50 TB LVM volume, running RHEL 6.4 with a 96 GB heap. We have around 250 indices, growing daily. Marvel shows a steady search rate of about 21/s and an indexing rate of about 2,246/s. Heap usage sits around 50-60%, and CPU is not a bottleneck. The cluster holds 25 TB of data, roughly 11 billion documents. Every index is configured with 5 shards and 1 replica.

Previously each node had a 32 GB heap; the cluster filled the heap completely, so we had to increase it to 96 GB per node and restart the environment. Our memory configuration is otherwise the default; the only thing we changed was setting index_buffer_size to 30%.

We want to tune our performance for maximum efficiency. How can we do that? Since we are memory bound, we are debating one strong instance per server versus a few instances on the same server, and how to best configure either setup. This entire cluster is in production. Please advise. Thanks.
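For reference, the one non-default memory setting described above would look roughly like this in `elasticsearch.yml` on a 1.x cluster (a sketch of the described setup, not a copy of the poster's actual files):

```yaml
# elasticsearch.yml (Elasticsearch 1.x)
# Only non-default memory setting mentioned in the post:
indices.memory.index_buffer_size: 30%
```

On 1.x the heap itself is set not in `elasticsearch.yml` but via the `ES_HEAP_SIZE` environment variable (e.g. `ES_HEAP_SIZE=96g`) before starting the node.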

(Thomas Decaux) #2

96GB of heap? It's always better to scale horizontally by adding more nodes than vertically by adding more memory.

11 billion documents: do your searches hit all of them? And only 5 shards per index for everything?

Do you use aggregations? They are very memory-hungry.
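As an illustration of why aggregations are memory-hungry on 1.x: a terms aggregation loads the field's values into heap-resident fielddata. A typical request would look like this (the index and field names here are made up):

```
curl -s 'localhost:9200/logs-2015.09.01/_search' -d '{
  "size": 0,
  "aggs": {
    "top_hosts": { "terms": { "field": "host", "size": 100 } }
  }
}'
```

On a high-cardinality field, the fielddata built for such a request can occupy a large, persistent chunk of heap.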

(Guy93r) #3

Thanks for answering.

We know it's better to scale horizontally, but due to our organization's policy our team can only get physically big machines.
The only alternative was small virtual machines, and their performance was really bad.

As I said, we have 250 indices and the documents are spread across them. Every index has 5 shards with 1 replica each. We don't run many aggregations; the workload is mainly indexing and searching.
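For context, the numbers above imply a fairly large shard count per data node. A quick back-of-the-envelope calculation (the figures come from the thread; the arithmetic is just illustrative):

```python
# Shard math for the cluster described in this thread.
indices = 250
primary_shards_per_index = 5
replicas = 1
data_nodes = 3

# Each primary shard has `replicas` replica copies.
total_shards = indices * primary_shards_per_index * (1 + replicas)
shards_per_node = total_shards / data_nodes

print(total_shards)     # 2500 shards cluster-wide
print(shards_per_node)  # ~833 shards per data node
```

Over 800 shards per node is a lot of per-shard overhead (segments, file handles, heap-resident metadata), which may matter as much as raw heap size here.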


(system) #4