Hi all,
I apologise if this question is naive, but finding a definitive answer has already absorbed too much time, so I'm putting it to the community.
I am building a SIEM system based around the ELK stack. The system ingests large volumes of data, 30-50GB per day to start with, and is expected to index and search as much of that data as possible, as fast as possible.
The current design calls for boxes with large amounts of RAM in order to avoid 'courier fetch failure' errors when the indexes under search grow very large, but I have a very simple question:
As the index grows very large (terabytes), which component needs more RAM to return search results: Elasticsearch or Kibana?
When a 'courier fetch failure' error appears, which component has run out of memory? Is it the Elasticsearch nodes underneath, or is it Kibana itself?
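If it helps to diagnose, this is a minimal sketch of how I could check whether the Elasticsearch data nodes are under heap pressure when the error appears, using the node stats API (the URL is a placeholder for one of my cluster's HTTP endpoints):

```python
import requests

# Placeholder endpoint for one node in my cluster.
ES_URL = "http://localhost:9200"

# Node stats API, filtered to JVM metrics only.
stats = requests.get(f"{ES_URL}/_nodes/stats/jvm").json()

# Print per-node heap usage so I can see which node is close to its limit.
for node_id, node in stats["nodes"].items():
    mem = node["jvm"]["mem"]
    print(
        f"{node['name']}: heap "
        f"{mem['heap_used_in_bytes'] / 2**30:.1f} GiB / "
        f"{mem['heap_max_in_bytes'] / 2**30:.1f} GiB "
        f"({mem['heap_used_percent']}%)"
    )
```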
The current design calls for almost a terabyte of RAM split across three machines: one Kibana, two Elasticsearch. Each Elasticsearch box will run multiple instances, each with 32GB of RAM.
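For reference, here is the rough sizing arithmetic I'm assuming behind that layout. The 50% figure is just the usual rule of thumb about leaving half the RAM to the OS filesystem cache, not something I've measured:

```python
# Rough sizing arithmetic for one Elasticsearch box (assumed figures).
box_ram_gb = 768           # total RAM on one Elasticsearch machine
heap_per_instance_gb = 32  # planned RAM per Elasticsearch instance
cache_fraction = 0.5       # rule of thumb: leave ~half the RAM to the filesystem cache

heap_budget_gb = box_ram_gb * (1 - cache_fraction)
instances = int(heap_budget_gb // heap_per_instance_gb)

print(f"{instances} instances x {heap_per_instance_gb} GB each, "
      f"{box_ram_gb - instances * heap_per_instance_gb:.0f} GB left for filesystem cache")
```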
Will I be able to search a larger index by giving the Kibana box more RAM (I can easily give it 768GB), or by adding more Elasticsearch nodes?
Again, apologies if this is well known around here.