Planning capacity and ELK using filebeat/logstash or logstash/redis?

Hello everyone!

It's my first time here.

Reading the Logstash book I bought, it recommends using Redis as a broker, and several other sources recommend the same setup for a production environment.

Searching around, I saw that a good solution was to use Filebeat to forward the logs, but Filebeat's Redis output is deprecated. Searching here in the community, people recommend the scenario Filebeat -> Logstash -> Elasticsearch <-> Kibana. Without a broker, isn't there a risk of losing logs?
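For reference, the broker-less pipeline people recommend looks roughly like this. This is only a minimal sketch; the port, hostnames, and filter contents are placeholders, not values from this thread:

```
# logstash.conf — minimal sketch of Filebeat -> Logstash -> Elasticsearch
# (port 5044 and the es-node* hostnames are assumptions)
input {
  beats {
    port => 5044      # Filebeat ships here via its logstash output
  }
}
filter {
  # your grok/date/mutate filters go here
}
output {
  elasticsearch {
    hosts => ["es-node1:9200", "es-node2:9200"]
  }
}
```

Note that Filebeat keeps its own registry of how far it has read each file and retries on connection failure, which is part of why the community considers this setup acceptable without a separate broker.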

Another question: I have no idea of the volume of logs generated per day in my environment, but I believe it is around 5 GB per day. With 2 Logstash nodes running filters and 2 Elasticsearch nodes, what would be the ideal hardware capacity in a VMware environment?
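For the disk side of that question, a common back-of-envelope calculation multiplies the raw daily volume by retention, replica count, and an indexing overhead factor. This is only a sketch; the retention period, replica count, and the 1.3x overhead factor below are assumptions you would need to measure in your own environment:

```python
# Back-of-envelope Elasticsearch disk estimate.
# All default values here are assumptions, not measured numbers.
def estimate_disk_gb(raw_gb_per_day, retention_days=30,
                     replicas=1, index_overhead=1.3):
    """Rough cluster-wide disk need: raw volume x retention,
    times (primary + replicas) copies, times an indexing overhead factor."""
    return raw_gb_per_day * retention_days * (1 + replicas) * index_overhead

# Example: 5 GB/day, 30-day retention, 1 replica, ~30% index overhead
total = estimate_disk_gb(5)
print(round(total))  # → 390 (GB across the whole cluster)
```

This says nothing about CPU or RAM, only that the stated 5 GB/day grows quickly once retention and replication are included.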

I have several doubts about applying the ELK stack in a production environment :sweat_smile:, and please excuse any mistakes, my English is not so good :disappointed_relieved:


A couple of points:

  • Don't cluster two ES machines. If you need more than one, go directly to three. The third one doesn't have to contain data. The reason is that you don't want exactly two master-eligible nodes. Read more about split brain.
  • There is no ideal hardware configuration. It's always a balance between what performance you need and how much money you're willing to spend.
  • The hardware need depends on so many factors (including how long you're going to keep the logs) that there is no formula you can use.
  • Questions about ideal hardware configurations are very common here. Please explore the archives for pointers and discussions.
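To make the split-brain point concrete, here is a sketch of the relevant settings for a three-node cluster. This assumes a pre-7.x Elasticsearch (consistent with the era of this thread, where `discovery.zen` settings still apply); the setting names and node roles below are assumptions about your version:

```yaml
# elasticsearch.yml — sketch for a 3-node cluster (pre-7.x settings)
# With 3 master-eligible nodes, a quorum of 2 prevents split brain:
# quorum = (master_eligible_nodes / 2) + 1
discovery.zen.minimum_master_nodes: 2

# On the third node, which holds no data and only arbitrates elections:
node.master: true
node.data: false
```

With only two master-eligible nodes there is no majority that survives a network partition, which is exactly the situation the first bullet warns against.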