I'm currently in the middle of setting up a Logstash stack for our company.
We have multiple Ruby on Rails applications running, representing various environments such as production, staging, etc.
At the moment I have one VM running Kibana and one VM running Elasticsearch, hooked up so they can communicate with each other.
Now my question is: what would be the recommended approach for getting the log files into Elasticsearch?
One centralized Logstash instance on the Kibana box, with all environments forwarding their log files to it using Filebeat?
Or one Logstash installation per server, collecting the required logs and sending them directly to our Elasticsearch?
Right now I can see that both setups work, each with its own advantages and disadvantages. But I was wondering if there's an "agreed" way of dealing with this problem.
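For reference, the first option on the shipper side would look roughly like this in recent Filebeat versions (a minimal sketch — the log path, environment tag, and Logstash host below are placeholders, not values from this thread):

```yaml
# filebeat.yml — minimal sketch with hypothetical paths and hosts
filebeat.inputs:
  - type: log
    paths:
      - /var/www/myapp/log/production.log   # hypothetical Rails log location
    fields:
      environment: production               # tag events so environments stay distinguishable

output.logstash:
  hosts: ["logstash.example.com:5044"]      # placeholder central Logstash host, default Beats port
```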
The most common setup is to have one or more central Logstash instances that do the bulk of the work and have more lightweight shippers on the machines that produce the logs. But as you say either way works.
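On the central Logstash instance, the matching pipeline could look something like this (a sketch, assuming the default Beats port, an `environment` field set by the shippers, and a placeholder Elasticsearch host):

```conf
# logstash.conf — sketch; hosts and index pattern are assumptions
input {
  beats {
    port => 5044                                        # default Beats listener port
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch.example.com:9200"]  # placeholder ES host
    index => "logs-%{[fields][environment]}-%{+YYYY.MM.dd}"
  }
}
```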
thanks for the reply.
Then I will aim for having a central Logstash instance next to our Kibana and define multiple inputs on our Nginx based on environment.
Finally got around to installing all the software.
The only thing I'm struggling with now is Filebeat being unable to connect to our centralized Logstash.
It's behind HTTPS with an Nginx frontend, and I'm a bit lost on the SSL configuration.
I'm guessing I need a client certificate on the Filebeat box so it accepts the SSL certificate of the Logstash server?
I'm not sure how Filebeat is going to be able to connect to Logstash behind an HTTPS frontend. Filebeat supports posting to ES via HTTP(S), but the Beats protocol isn't HTTP-based.
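Since the Beats protocol speaks TLS natively, one alternative to an HTTPS frontend is letting Filebeat connect to the Beats input over TLS directly. In recent Filebeat versions that would look roughly like this (a sketch — the certificate paths and host are placeholders; the client certificate lines only apply if Logstash is configured to require client authentication):

```yaml
# filebeat.yml (fragment) — sketch with placeholder paths
output.logstash:
  hosts: ["logstash.example.com:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]  # CA that signed the Logstash server cert
  # Only needed if the Logstash beats input enforces client authentication:
  ssl.certificate: "/etc/filebeat/client.crt"
  ssl.key: "/etc/filebeat/client.key"
```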
Yeah I just discovered that....
Finished building nginx 1.9 from source; it now acts as a TCP proxy, directing all the data from the Filebeat instances to our Logstash.
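For anyone following along, the TCP proxying can be done with the `stream` module that landed in nginx 1.9.0 (a sketch — the ports and backend host name are placeholders):

```nginx
# nginx.conf — stream proxy sketch (requires ngx_stream module, nginx >= 1.9.0)
stream {
    server {
        listen 5044;                          # port the Filebeat instances ship to
        proxy_pass logstash.internal:5044;    # placeholder backend Logstash host
    }
}
```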
Starting to think that running localized Logstash instances that connect to ES might have been easier.