ELK, Dimensioning trouble and doubts

We need to know more about how to dimension the ELK cluster, because performance is very poor: searching in Kibana is barely possible, queries are very slow and usually end in a timeout, and the JVM logs show OutOfMemory errors. At the moment we have a production cluster with the following setup:

3 physical hosts for ES, each with 12 GB RAM, 5 CPU cores, and 2.5 TB of disk
3 master nodes, 5 GB RAM each
3 data ("slave") nodes, 5 GB RAM each
Heap size for master nodes: 2 GB each
Heap size for data nodes: 2 GB each
Kibana and Logstash on independent hosts
1 ES coordinating node running on the Kibana and Logstash server
We have 5 shards and 1 replica
Documents per day: 2,700,000
Disk space per day: 37 GB
Size of each document: 14 KB
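
For reference, here is a quick consistency check of those figures in plain Python; the 30-day retention window in it is only an assumed example, not our real retention:

```python
# Back-of-the-envelope check of the daily volume figures above.
# ASSUMPTION: the 30-day retention window is only an example, not the real retention.
docs_per_day = 2_700_000
disk_gb_per_day = 37      # as reported; not clear whether this already includes the replica
avg_doc_kb = 14
replicas = 1
retention_days = 30       # assumed for illustration

# Cross-check: documents/day * average document size should be close to the daily disk usage.
estimated_gb_per_day = docs_per_day * avg_doc_kb / (1024 * 1024)
print(f"estimated daily volume: {estimated_gb_per_day:.1f} GB (reported: {disk_gb_per_day} GB)")

# Footprint over the retention window including replica copies, against the
# 3 x 2.5 TB of raw disk on the physical ES hosts.
total_gb = disk_gb_per_day * (1 + replicas) * retention_days
capacity_gb = 3 * 2.5 * 1024
print(f"footprint over {retention_days} days with {replicas} replica: {total_gb:,} GB")
print(f"raw capacity of the 3 ES hosts: {capacity_gb:,.0f} GB")
```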

We are planning a hardware upgrade and a redimensioning of the ELK cluster to get better performance. We are thinking of raising the data ("slave") nodes to 14 GB RAM with a 7 GB heap size. We have no idea how many shards and replicas we need for that amount of data.
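
For context, this is the rough shard arithmetic we have sketched so far; the daily indices, the ~30 GB target shard size, and the 30-day retention are all assumptions on our side rather than confirmed guidance:

```python
import math

# Rough shard-count sketch for daily time-based indices.
# ASSUMPTIONS: daily indices, ~30 GB target shard size (Elastic's guidance is often
# cited as roughly 20-40 GB per shard for logging use cases), 30-day retention.
daily_index_gb = 37
target_shard_gb = 30
replicas = 1
retention_days = 30

primary_shards_per_index = max(1, math.ceil(daily_index_gb / target_shard_gb))
shards_per_index = primary_shards_per_index * (1 + replicas)
open_shards = shards_per_index * retention_days
print(f"primary shards per daily index: {primary_shards_per_index}")
print(f"open shards across {retention_days} days (primaries + replicas): {open_shards}")

# Heap sanity check for the planned data nodes: heap should stay at or below roughly
# half of RAM so the filesystem cache gets the rest; 14 GB RAM / 7 GB heap respects that.
planned_ram_gb, planned_heap_gb = 14, 7
assert planned_heap_gb <= planned_ram_gb / 2
```

With those assumptions each daily index would only need 2 primary shards, far fewer than the 5 we use today, but we would like confirmation before changing anything.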

Thanks in advance. If you need any other information, just tell us.

I would recommend that you read this blog post for some practical guidelines on sharding. Then install monitoring so you can see what is limiting performance and causing issues. You may also benefit from this talk and this guide.
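
If you do not have monitoring in place yet, a minimal starting point is to poll the standard cluster APIs and look at heap usage, index sizes, and shard distribution. A sketch using plain HTTP calls (the localhost URL is a placeholder; point it at one of your nodes and add authentication if the cluster is secured):

```python
import requests

# Placeholder URL; point it at the coordinating node (or any node) and add
# authentication via requests' auth= parameter if the cluster is secured.
ES = "http://localhost:9200"

# Overall cluster health: status, node count, shard counts.
print(requests.get(f"{ES}/_cluster/health?pretty").text)

# Per-node JVM stats -- heap pressure behind the OutOfMemory errors shows up
# here as heap_used_percent sitting close to 100 on the affected nodes.
print(requests.get(f"{ES}/_nodes/stats/jvm?pretty").text)

# Indices sorted by on-disk size, to see how big the daily indices really are.
print(requests.get(f"{ES}/_cat/indices?v&bytes=gb&s=store.size:desc").text)

# Shard distribution across nodes, to spot oversized or unbalanced shards.
print(requests.get(f"{ES}/_cat/shards?v").text)
```

The JVM stats in particular should make it clear whether the 2 GB heaps are simply too small for the query load before you commit to new hardware.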

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.