My goal is to send all of the logs from all the remote servers to an S3 bucket. I decided to go with the Logstash S3 output plugin since we're already running an ELK setup. Each remote server is already sending its log files to the master ELK instance. I then edited filebeat.conf to enable loadbalance and also ship logs to its local Logstash, so that Logstash sees all of the logs and the S3 output plugin can transfer them to the S3 bucket.
However, I'm unable to see any logs in the bucket. Please advise.
filebeat.conf:
output.logstash:
  # The Logstash hosts
  hosts: ["xx.xx.xx.xx:5044", "localhost:5045"]
  loadbalance: true
  index: test
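For reference, the s3 output block I'm using looks roughly like this (the bucket name, region, and prefix below are placeholders, not my real values):

output {
  s3 {
    region => "us-east-1"           # placeholder region
    bucket => "my-log-bucket"       # placeholder bucket name
    prefix => "remote-server-logs/"
    codec  => "json_lines"
    # credentials are supplied via access_key_id / secret_access_key in my real config
  }
}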
Load balancing in Filebeat is not what you want. With that configuration I would expect half the events to go to the local Logstash and half to the remote Logstash.
If you want to write them to S3 locally, then just write to the local Logstash, have that write them to S3 and also forward them to the master Logstash using http, tcp, or any one of a number of other input/output pairs.
If I want to go with the suggestion you just provided, how do I go about doing that? I'll remove loadbalance from filebeat.conf, but I'm not sure what else needs to be done.
In your Filebeat output, remove "xx.xx.xx.xx:5044" so that you write everything to localhost. In the Logstash instance running on localhost, write to S3 and add a lumberjack output that sends events to "xx.xx.xx.xx:5500". On the remote server, configure a lumberjack input that listens on port 5500.
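As a rough sketch (not a drop-in config), the two pipelines could look something like this; the bucket name, region, certificate paths, and port 5500 are placeholders to adjust for your environment, and note that the lumberjack input/output plugins require an SSL certificate and key:

# Logstash on localhost (the instance Filebeat writes to on port 5045)
input {
  beats {
    port => 5045
  }
}
output {
  s3 {
    region => "us-east-1"       # placeholder region
    bucket => "my-log-bucket"   # placeholder bucket name
    codec  => "json_lines"
  }
  lumberjack {
    hosts => ["xx.xx.xx.xx"]
    port  => 5500
    ssl_certificate => "/etc/logstash/lumberjack.crt"   # the plugin requires a certificate
    codec => "json"
  }
}

# Logstash on the master (xx.xx.xx.xx), receiving the forwarded events
input {
  lumberjack {
    port => 5500
    ssl_certificate => "/etc/logstash/lumberjack.crt"
    ssl_key         => "/etc/logstash/lumberjack.key"
    codec => "json"
  }
}

Depending on your Logstash version, the lumberjack output may not be bundled; if it's missing, it can be installed with bin/logstash-plugin install logstash-output-lumberjack.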