I am replacing an existing enterprise logging system with the Elastic Stack. I have hundreds of applications currently using that system, spread across 3 or 4 servers (depending on the environment).
I am planning to use Filebeat to forward the logs to two Logstash instances, which will take care of parsing the logs (grok) and forwarding them to Elasticsearch.
What would be the best practice in this case: one Filebeat instance with multiple prospectors, or multiple Filebeat instances with one prospector each?
In my experience, one Filebeat instance per server is more than sufficient. We are running Filebeat on our central logging server with multiple prospectors, and we have reached 20,000 EPS from a single instance, even after multiline conversions. (It could go even higher, but Logstash cannot handle more on our current hardware!)
So, if you have a distributed environment, I don't see a need for more than one Filebeat instance per server. Multiple prospectors are the way to go if you want to capture logs from 20-30 paths; that should be easy to set up.
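As a rough sketch, a single `filebeat.yml` with multiple prospectors and load-balanced Logstash outputs could look like this (Filebeat 5.x-era syntax, which still calls inputs "prospectors"; the log paths, hostnames, and multiline pattern are placeholders you would adapt to your applications):

```yaml
filebeat.prospectors:
  # First application's logs; multiline joins stack-trace continuation
  # lines (anything not starting with a date) onto the preceding event.
  - input_type: log
    paths:
      - /var/log/app1/*.log        # hypothetical path
    multiline:
      pattern: '^\d{4}-\d{2}-\d{2}'
      negate: true
      match: after

  # Second prospector for another application's logs on the same server.
  - input_type: log
    paths:
      - /var/log/app2/*.log        # hypothetical path

# Distribute events across both Logstash instances.
output.logstash:
  hosts: ["logstash1:5044", "logstash2:5044"]   # hypothetical hosts
  loadbalance: true
```

With `loadbalance: true`, one Filebeat instance spreads its output over both Logstash nodes, so you get redundancy on the parsing tier without running extra Filebeat processes.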